Funding: National Natural Science Foundation of China (No. 60633020 and No. 90204012); Natural Science Foundation of Hebei Province (No. F2006000177).
Abstract: Passive worms can propagate passively by embedding themselves into shared files, which can result in significant damage to unstructured P2P networks. To study passive worm behavior, this paper first analyzes and obtains the average delay for all peers in the whole transmitting process due to the limitation of network throughput, and then proposes a mathematical model for the propagation of passive worms over unstructured P2P networks. The model mainly takes the effect of network throughput into account, and applies a new healthy-file-dissemination-based defense strategy according to the file popularity, which follows the Zipf distribution. The simulation results show that the propagation of passive worms is mainly governed by the number of hops, initially infected files and uninfected files. The larger the number of hops, the more rapidly the passive worms propagate. If the number of initially infected files is increased by the attackers, the propagation speed of passive worms increases markedly. A larger number of uninfected files results in better attack performance. However, the number of files generated by passive worms is not an important factor governing their propagation. The effectiveness of the healthy-file dissemination strategy is verified. This model can provide a guideline for the control of unstructured P2P networks as well as for passive worm defense.
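The following is a minimal, hypothetical sketch of the kind of dynamic the abstract describes (Zipf-distributed file popularity driving infection through downloads). It is not the paper's analytical model, and every parameter below is made up for illustration.

import random
from itertools import accumulate

# Illustrative only: file requests follow Zipf-like popularity, downloading an
# infected copy infects the peer, and each newly infected peer contaminates one
# more shared file. ZIPF_S, seed counts and sizes are invented for the example.
random.seed(1)
N_FILES, N_PEERS, STEPS = 500, 2000, 30
ZIPF_S = 1.0
weights = [1.0 / r ** ZIPF_S for r in range(1, N_FILES + 1)]
cum_weights = list(accumulate(weights))

infected_files = set(range(10))              # attacker seeds 10 popular files
infected_peers, susceptible = set(), set(range(N_PEERS))

for step in range(1, STEPS + 1):
    newly = set()
    for peer in susceptible:
        f = random.choices(range(N_FILES), cum_weights=cum_weights)[0]
        if f in infected_files:
            newly.add(peer)
    susceptible -= newly
    infected_peers |= newly
    # every newly infected peer embeds the worm into one more shared file
    while len(infected_files) < min(N_FILES, 10 + len(infected_peers)):
        infected_files.add(len(infected_files))
    if step % 5 == 0:
        print(f"step {step:2d}: {len(infected_peers)} infected peers")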
Funding: Project of the National "973" Plan (No. 2003CB314806); Project of the National Natural Science Foundation of China (No. 90204003).
Abstract: The development of network resources changes network computing models. P2P networks, a new type of network adopting a peer-to-peer strategy for computing, have attracted worldwide attention. The P2P architecture is a type of distributed network in which all participants share their hardware resources, and the shared resources can be directly accessed by peer nodes without going through any dedicated servers. The participants in a P2P network are both resource providers and resource consumers. This article on P2P networks is divided into two parts. The first part, in the previous issue, introduced P2P architecture, network models and core search algorithms. This second part analyzes the current P2P research and application situation, as well as the impact of P2P on telecom operators and equipment vendors.
Funding: This work was funded by the National Natural Science Foundation of China under Grant 60473090.
Abstract: Peer-to-Peer (P2P) network traffic identification technology includes Transport Layer Identification (TLI) and Deep Packet Inspection (DPI) methods. By analyzing packets of the transport layer and the traffic characteristics of the P2P system, TLI can identify whether or not a network data flow belongs to a P2P system. The DPI method adopts protocol analysis and reverting technologies: it extracts data from the P2P application layer and analyzes the characteristics of the payload to judge whether the network traffic belongs to a P2P application. Due to its accuracy, robustness and classification ability, DPI is the main method used to identify P2P traffic. By combining the advantages of TLI and DPI, a precise and efficient technology for P2P network traffic identification can be designed.
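As a concrete illustration of the DPI idea, a payload can be matched against known application-layer signatures. The tiny signature set below (BitTorrent and Gnutella handshakes) is only a sample for demonstration, not the method used in the paper.

# Toy DPI check: match known application-layer signatures in a packet payload.
SIGNATURES = {
    b"\x13BitTorrent protocol": "BitTorrent handshake",
    b"GNUTELLA CONNECT": "Gnutella handshake",
}

def classify_payload(payload: bytes) -> str:
    for sig, label in SIGNATURES.items():
        if payload.startswith(sig):
            return label
    return "unknown (fall back to transport-layer heuristics)"

print(classify_payload(b"\x13BitTorrent protocol" + b"\x00" * 8))
print(classify_payload(b"GET / HTTP/1.1\r\n"))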
Abstract: The development of network resources changes network computing models. P2P networks, a new type of network adopting a peer-to-peer strategy for computing, have attracted worldwide attention. P2P architecture is a type of distributed network in which all participants share their hardware resources, and the shared resources can be directly accessed by peer nodes without going through any dedicated servers. The participants in a P2P network are both resource providers and resource consumers. This article on P2P networks is divided into two parts. In this issue, P2P architecture, network models and core search algorithms are introduced; the second part, in the next issue, will analyze the current P2P research and application situation, as well as the impact of P2P on telecom operators and equipment vendors.
Abstract: Trust has become an increasingly important issue given society's growing reliance on electronic transactions. Peer-to-peer (P2P) networks are among the main electronic transaction environments affected by trust issues, due to the freedom and anonymity of peers (users) and the inherent openness of these networks. A malicious peer can easily join a P2P network and abuse its peers and resources, resulting in a large-scale failure that might shut down the entire network. Therefore, many researchers have proposed trust management systems to mitigate the impact of the problem. However, due to the problem's scale and complexity, more research is necessary. The algorithm proposed here, HierarchTrust, attempts to create a more reliable environment in which the selection of a peer providing a file or other resource is based on several trust values represented in hierarchical form. The values at the top of the hierarchy are more trusted than those at its lower end. Trust in HierarchTrust is generally calculated based on the standard deviation. Evaluation via simulation showed that HierarchTrust produced a better success rate than the well-established EigenTrust algorithm.
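The abstract does not spell out the tier rules, so the sketch below is only one plausible reading of a standard-deviation-based hierarchy: peers are placed in tiers by how far their trust score sits above or below the population mean. The thresholds and tier count are assumptions, not HierarchTrust's actual definition.

from statistics import mean, stdev

# Illustrative tiering: higher z-scores land in higher (more trusted) tiers.
def tier(scores: dict, peer: str) -> int:
    mu, sigma = mean(scores.values()), stdev(scores.values())
    if sigma == 0:
        return 1
    z = (scores[peer] - mu) / sigma
    if z >= 1.0:
        return 0   # top of the hierarchy: most trusted provider candidates
    if z >= 0.0:
        return 1
    return 2       # bottom tier: least trusted

scores = {"p1": 0.92, "p2": 0.55, "p3": 0.60, "p4": 0.20}
for p in scores:
    print(p, "-> tier", tier(scores, p))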
Abstract: One of the key challenges in ad-hoc networks is the resource discovery problem: how efficiently and quickly a queried resource or object can be resolved in such a highly dynamic, self-evolving network. Broadcasting is a basic technique in Mobile Ad-hoc Networks (MANETs), and it refers to sending a packet from one node to every other node within the transmission range. Flooding is a type of broadcast where the received packet is retransmitted once by every node. The naive flooding technique floods the network with query messages, while the random walk scheme operates by contacting subsets of each node's neighbors at every step, thereby restricting the search space. Many earlier works have mainly focused on the simulation-based analysis of the flooding technique, and its variants, in a wired network scenario. Although there have been some empirical studies in peer-to-peer (P2P) networks, analytical results are still lacking, especially in the context of mobile P2P networks. In this article, we mathematically model different widely used existing search techniques and compare them with the proposed improved random walk method, a simple lightweight approach suitable for the non-DHT architecture. We provide analytical expressions to measure the performance of the different flooding-based search techniques and of our proposed technique. We analytically derive three relevant key performance measures: the average number of steps needed to find a resource, the probability of locating a resource, and the average number of messages generated during the entire search process.
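A small simulation can make the three performance measures concrete. The sketch below runs a single TTL-bounded random walker on a synthetic graph and estimates success probability, average steps and messages per query; it is an illustration, not the paper's analytical derivation, and all constants are arbitrary.

import random

random.seed(7)
N, DEGREE, TTL, TRIALS, HOLDER_FRACTION = 2000, 4, 200, 500, 0.01

# Build a rough random graph with approximate degree DEGREE.
adj = {v: set() for v in range(N)}
for v in range(N):
    while len(adj[v]) < DEGREE:
        u = random.randrange(N)
        if u != v:
            adj[v].add(u)
            adj[u].add(v)

holders = set(random.sample(range(N), int(N * HOLDER_FRACTION)))

steps_used, successes, messages = [], 0, 0
for _ in range(TRIALS):
    node = random.randrange(N)
    for step in range(1, TTL + 1):
        node = random.choice(tuple(adj[node]))   # one hop of the walker
        messages += 1
        if node in holders:
            successes += 1
            steps_used.append(step)
            break

print("success probability:", successes / TRIALS)
print("avg steps (successful walks):", sum(steps_used) / max(1, len(steps_used)))
print("avg messages per query:", messages / TRIALS)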
Abstract: The underlying premise of peer-to-peer (P2P) systems is the trading of digital resources among individual peers to facilitate file sharing, distributed computing, storage, collaborative applications and multimedia streaming. So-called free-riders challenge the foundations of this system by consuming resources from other peers without offering any resources in return, hindering resource exchange among peers. Therefore, immense effort has been invested in discouraging free-riding and overcoming the ill effects of such unfair use of the system. However, previous efforts have fallen short of effectively addressing free-riding behaviour in P2P networks. This paper proposes a novel approach based on a credit incentive for P2P networks, wherein a grace period is introduced during which free-riders must reimburse resources. In contrast to previous approaches, the proposed system takes into consideration the upload rate of peers and a grace period. The system has been thoroughly tested in a simulated environment, and the results show that the proposed approach effectively mitigates free-riding behaviour. Compared to previous systems, the number of downloads by free-riders decreased while downloads by contributing peers increased. The results also show that under longer grace periods, the number of downloads by fast peers (those reimbursing the system within the grace period) was greater than the number of downloads by slow peers.
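A minimal sketch of the grace-period bookkeeping described above, assuming a simple net-credit ledger; the policy details (how credit is earned, how debt is timed) are illustrative, not the paper's exact rules.

import time

class CreditLedger:
    def __init__(self, grace_seconds):
        self.credit = {}          # peer -> net credit (uploads minus downloads)
        self.debt_since = {}      # peer -> time the peer first went into debt
        self.grace = grace_seconds

    def record_upload(self, peer, amount):
        self.credit[peer] = self.credit.get(peer, 0.0) + amount
        if self.credit[peer] >= 0:
            self.debt_since.pop(peer, None)   # debt reimbursed within grace

    def may_download(self, peer, amount, now=None):
        now = time.time() if now is None else now
        balance = self.credit.get(peer, 0.0) - amount
        if balance >= 0:
            ok = True
        else:
            started = self.debt_since.setdefault(peer, now)
            ok = (now - started) <= self.grace   # free-rider still inside grace period
        if ok:
            self.credit[peer] = balance
        return ok

ledger = CreditLedger(grace_seconds=3600)
print(ledger.may_download("freerider", 10, now=0))      # allowed: grace period starts
print(ledger.may_download("freerider", 10, now=7200))   # refused: grace period expired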
Abstract: Edge devices in Internet of Things (IoT) applications can form peers that communicate in peer-to-peer (P2P) networks over P2P protocols. Using P2P networks ensures scalability and removes the need for centralized management. However, due to the open nature of P2P networks, they often suffer from the presence of malicious peers, especially malicious peers that unite in groups to raise each other's ratings. This compromises users' safety and makes them lose confidence in the files or services they are receiving. To address these challenges, we propose a neural network-based algorithm, which uses the advantages of a machine learning algorithm to identify whether or not a peer is malicious. In this paper, a neural network (NN) was chosen as the machine learning algorithm due to its efficiency in classification. The experiments showed that the NNTrust algorithm is more effective and has a higher potential for reducing the number of invalid files and increasing success rates than other well-known trust management systems.
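As a rough illustration of the classification step, the sketch below trains a small feed-forward network on synthetic, hypothetical per-peer features (share of valid files delivered, average rating received, rating reciprocity); the actual NNTrust features and architecture are defined in the paper.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 400
# Synthetic feature ranges chosen only so the two classes are separable.
honest = np.column_stack([rng.uniform(0.7, 1.0, n), rng.uniform(0.6, 1.0, n), rng.uniform(0.0, 0.4, n)])
malicious = np.column_stack([rng.uniform(0.0, 0.5, n), rng.uniform(0.5, 1.0, n), rng.uniform(0.6, 1.0, n)])
X = np.vstack([honest, malicious])
y = np.array([0] * n + [1] * n)   # 1 = malicious

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)
print(clf.predict([[0.95, 0.9, 0.1], [0.2, 0.9, 0.9]]))   # expected: [0 1]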
Funding: Project supported by the National Natural Science Foundation of China (No. 60221120145) and the Science & Technology Committee of Shanghai Municipality Key Project (No. 02DJ14045), China.
Abstract: … may incur significant bandwidth for executing more complicated search queries such as multiple-attribute queries. In order to reduce query overhead, KSS (keyword-set search) by Gnawali partitions the index by sets of keywords. However, a KSS index is considerably larger than a standard inverted index, since there are more word sets than there are individual words, and the insert and storage overheads are clearly unacceptable for full-text search on a collection of documents even if KSS uses the distance-window technique. In this paper, we extract the relationship information between query keywords from websites' query logs to improve the performance of the KSS system. Experimental results clearly demonstrate that the improved keyword-set search system based on keyword relationships (KRBKSS) is more efficient than a KSS index in insert overhead and storage overhead, and more efficient than a standard inverted index in terms of communication costs for queries.
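The size argument is easy to reproduce on a toy corpus: a keyword-set index keyed by word pairs has far more postings lists than a standard inverted index, but answers a two-keyword query from a single list. The documents below are made up for illustration.

from itertools import combinations
from collections import defaultdict

docs = {
    1: "peer to peer keyword search",
    2: "distributed keyword search index",
    3: "peer index storage overhead",
}

inverted = defaultdict(set)       # word -> doc ids
pair_index = defaultdict(set)     # (word, word) -> doc ids

for doc_id, text in docs.items():
    words = sorted(set(text.split()))
    for w in words:
        inverted[w].add(doc_id)
    for pair in combinations(words, 2):   # KSS-style keyword sets of size 2
        pair_index[pair].add(doc_id)

print("inverted index keys:", len(inverted))
print("keyword-pair index keys:", len(pair_index))
# A two-keyword query hits one posting list instead of intersecting two:
print(pair_index[("keyword", "search")])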
Funding: Supported by the National Natural Science Foundation of China (90604012).
Abstract: Flooding is the most famous technique for locating content in unstructured P2P networks. Recently, traditional flooding has been replaced by the more efficient dynamic query (DQ) technique and different variants of such algorithms. Dynamic query is a flooding technique that estimates a proper time-to-live (TTL) value for query flooding by estimating the popularity of the searched files, and retrieves sufficient results under a controlled flooding range to reduce network traffic. However, all DQ-like search algorithms are "blind", so a large number of redundant messages are generated. In this paper, we propose a new search scheme, called the Immune Search Scheme (ISS), to cope with this problem. In ISS, an immune-system-inspired concept of similarity-governed clone proliferation and mutation is applied to query message movement. Assistant strategies, namely shortcut creation and peer traveling, are incorporated into ISS to develop an "immune memory" for improving search performance, which makes ISS heuristic rather than blind.
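A toy version of the dynamic-query idea, assuming a simple branching model of flooding coverage: probe a few neighbours, estimate popularity from the probe hits, and pick the smallest TTL whose expected coverage should return enough results. The constants are invented for the example.

DESIRED_RESULTS = 20
BRANCHING = 3            # assumed average neighbours contacted per hop

def choose_ttl(probe_hosts: int, probe_hits: int, max_ttl: int = 7) -> int:
    popularity = max(probe_hits, 1) / probe_hosts          # crude popularity estimate
    hosts_needed = DESIRED_RESULTS / popularity
    reached = 0
    for ttl in range(1, max_ttl + 1):
        reached += BRANCHING ** ttl                         # hosts newly reached at this hop
        if reached >= hosts_needed:
            return ttl
    return max_ttl

print(choose_ttl(probe_hosts=20, probe_hits=5))   # popular file -> small TTL
print(choose_ttl(probe_hosts=20, probe_hits=0))   # rare file -> deeper flood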
Abstract: Designers search for N-node peer-to-peer networks that can have O(1) out-degree with O(log2 N) average distance. Peer-to-peer schemes based on de Bruijn graphs are found to meet this requirement. By defining the average load to evaluate the traffic load in a network, we show that in order to decrease the average load, the average distance of a network should decrease while the out-degree should increase. In particular, given out-degree k and N nodes, peer-to-peer schemes based on de Bruijn graphs have lower average load than other existing systems. The out-degree k of de Bruijn graphs should not be O(1) but should satisfy the lower bound described by the inequality k^k >= N^2, to ensure that the average load in peer-to-peer schemes based on de Bruijn graphs will not exceed that in the Chord system.
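The quoted bound is easy to check numerically; the snippet below finds the smallest out-degree k satisfying k^k >= N^2 for a few network sizes.

# Worked check of the lower bound k^k >= N^2 quoted above.
def min_out_degree(n: int) -> int:
    k = 2
    while k ** k < n * n:
        k += 1
    return k

for n in (10**3, 10**4, 10**6):
    print(f"N = {n:>8}: smallest k with k^k >= N^2 is {min_out_degree(n)}")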
Abstract: Peer-to-peer (P2P) lending offers an alternative way to access credit. Unlike established lending institutions with proven credit risk management practices, P2P platforms rely on numerous independent variables to evaluate loan applicants' creditworthiness. This study aims to estimate default probabilities using a mixture-of-experts neural network in P2P lending. The approach couples unsupervised clustering, which captures essential data properties, with a classification algorithm based on the mixture-of-experts structure. This classic design enhances model capacity without significant computational overhead. The model was tested on P2P data from Lending Club and compared with other methods such as Logistic Regression, AdaBoost, Gradient Boosting, Decision Tree, Support Vector Machine, and Random Forest. The hybrid model demonstrated superior performance, with a Mean Squared Error reduction of at least 25%.
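A simplified stand-in for the coupling described above, with unsupervised clustering acting as the gate and one plain classifier per cluster acting as an "expert"; the paper's model is a neural mixture-of-experts trained on Lending Club data, whereas the data and models here are synthetic placeholders.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data, not Lending Club.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
experts = {c: LogisticRegression(max_iter=1000).fit(X[km.labels_ == c], y[km.labels_ == c])
           for c in range(3)}

def predict_default_proba(x_row: np.ndarray) -> float:
    cluster = km.predict(x_row.reshape(1, -1))[0]   # gate: route to the cluster's expert
    return experts[cluster].predict_proba(x_row.reshape(1, -1))[0, 1]

print(round(predict_default_proba(X[0]), 3))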
Funding: This research work is supported in part by the National High Technology Research and Development 863 Program of China under Grant No. 2004AA104270.
Abstract: The power-law node degree distributions of peer-to-peer overlay networks make them extremely robust to random failures but highly vulnerable to intentional targeted attacks. To enhance the attack survivability of these networks, DeepCure, a novel heuristic immunization strategy, is proposed to conduct decentralized but targeted immunization. Different from existing strategies, DeepCure identifies immunization targets not only as the highly connected nodes but also as the nodes with high availability and/or high link load, with the aim of injecting immunization information into just the right targets to cure. To better trade off cost and efficiency, DeepCure deliberately selects these targets from the 2-local neighborhood, as well as from topologically remote but semantically close friends if needed. To remedy the weakness of existing strategies in case of a sudden epidemic outbreak, DeepCure is also coupled with a local-hub-oriented rate throttling mechanism to enforce proactive rate control. Extensive simulation results show that DeepCure outperforms its competitors, producing a marked increase in network attack tolerance at a lower cost of eliminating viruses or malicious attacks.
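A sketch of the target-selection flavour described above, assuming a toy adjacency list: candidates are drawn from a node's 2-hop neighbourhood and ranked by a blend of degree, availability and link load. The weights and normalisation are arbitrary, not DeepCure's calibration.

graph = {                      # toy adjacency list
    "a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b", "e"],
    "d": ["b", "e", "f"], "e": ["c", "d"], "f": ["d"],
}
availability = {"a": 0.9, "b": 0.99, "c": 0.7, "d": 0.95, "e": 0.6, "f": 0.5}
link_load = {"a": 0.2, "b": 0.8, "c": 0.4, "d": 0.7, "e": 0.3, "f": 0.1}

def immunization_targets(node: str, k: int = 2):
    # candidates from the 2-hop neighbourhood, excluding the node itself
    two_hop = (set(graph[node]) | {w for v in graph[node] for w in graph[v]}) - {node}
    score = lambda v: len(graph[v]) / 3 + availability[v] + link_load[v]
    return sorted(two_hop, key=score, reverse=True)[:k]

print(immunization_targets("a"))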
Funding: The authors acknowledge the funding provided by the National Key R&D Program of China (2021YFA1401200), the Beijing Outstanding Young Scientist Program (BJJWZYJH01201910007022), the National Natural Science Foundation of China (No. U21A20140, No. 92050117, No. 62005017), and the Beijing Municipal Science & Technology Commission, Administrative Commission of Zhongguancun Science Park (No. Z211100004821009). This work was supported by the Synergetic Extreme Condition User Facility (SECUF).
Abstract: Optical neural networks have significant advantages in terms of power consumption, parallelism, and computing speed, and have attracted extensive attention in both the academic and engineering communities. They are considered powerful tools for advancing image processing and object recognition. However, existing optical system architectures cannot be reconstructed to realize multi-functional artificial intelligence systems simultaneously. To push the development of this issue, we propose pluggable diffractive neural networks (P-DNN), a general paradigm based on cascaded metasurfaces, which can be applied to recognize various tasks by switching internal plug-ins. As a proof of principle, the recognition of six types of handwritten digits and six types of fashion items is numerically simulated and experimentally demonstrated in the near-infrared regime. Encouragingly, the proposed paradigm not only improves the flexibility of optical neural networks but also paves a new route for achieving high-speed, low-power and versatile artificial intelligence systems.
Funding: The Qian Xuesen Youth Innovation Foundation from China Aerospace Science and Technology Corporation (Grant Number 2022JY51).
Abstract: The demand for adopting neural networks in resource-constrained embedded devices is continuously increasing. Quantization is one of the most promising solutions to reduce computational cost and memory storage on embedded devices. In order to reduce the complexity and overhead of deploying neural networks on integer-only hardware, most current quantization methods use a symmetric quantization mapping strategy to quantize a floating-point neural network into an integer network. However, although symmetric quantization has the advantage of easier implementation, it is sub-optimal for cases where the range could be skewed and not symmetric, which often comes at the cost of lower accuracy. This paper proposes an activation redistribution-based hybrid asymmetric quantization method for neural networks. The proposed method takes the data distribution into consideration and can resolve the contradiction between quantization accuracy and ease of implementation, balance the trade-off between clipping range and quantization resolution, and thus improve the accuracy of the quantized neural network. The experimental results indicate that the accuracy of the proposed method is 2.02% and 5.52% higher than that of the traditional symmetric quantization method for classification and detection tasks, respectively. The proposed method paves the way for computationally intensive neural network models to be deployed on devices with limited computing resources. Code will be available at https://github.com/ycjcy/Hybrid-Asymmetric-Quantization.
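For contrast with the symmetric scheme discussed above, the snippet below applies textbook affine (asymmetric) int8 quantization to a skewed activation distribution and compares reconstruction error against symmetric quantization; this is a generic illustration, not the paper's redistribution-based hybrid method.

import numpy as np

def quantize_asymmetric(x, qmin=-128, qmax=127):
    # affine mapping: scale and zero-point derived from the observed min/max
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return scale * (q - zero_point)            # dequantized values

def quantize_symmetric(x, qmax=127):
    # symmetric mapping: range centred on zero, half of it wasted for skewed data
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return scale * q

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=10000)      # skewed, mostly positive activations
for name, fn in (("asymmetric", quantize_asymmetric), ("symmetric", quantize_symmetric)):
    err = np.mean((x - fn(x)) ** 2)
    print(f"{name:>10} quantization MSE: {err:.6f}")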
Abstract: The structural optimization of wireless sensor networks is a critical issue because it impacts energy consumption and hence the network's lifetime. Many studies have been conducted for homogeneous networks, but few have been performed for heterogeneous wireless sensor networks. This paper utilizes Rao algorithms to optimize the structure of heterogeneous wireless sensor networks according to node locations and their initial energies. The proposed algorithms require no algorithm-specific parameters and carry no metaphorical connotations. They examine the search space based on the relations of the population with the best, worst, and randomly assigned solutions. The proposed algorithms can be evaluated using any routing protocol; however, we have chosen well-known routing protocols from the literature: Low Energy Adaptive Clustering Hierarchy (LEACH), Power-Efficient Gathering in Sensor Information Systems (PEGASIS), Partitioned-based Energy-efficient LEACH (PE-LEACH), and the recent Power-Efficient Gathering in Sensor Information Systems Neural Network (PEGASIS-NN) routing protocol. We compare our optimized method with the Jaya algorithm, the Particle Swarm Optimization-based Energy Efficient Clustering (PSO-EEC) protocol, and the hybrid Harmony Search Algorithm and PSO (HSA-PSO) algorithm. The efficiency of our proposed algorithms is evaluated by conducting experiments in terms of the network lifetime (first dead node, half dead nodes, and last dead node), energy consumption, packets to cluster head, and packets to the base station. The experimental results were compared with those obtained using the Jaya optimization algorithm, and the proposed algorithms exhibited the best performance. The proposed approach successfully prolongs the network lifetime by 71% for the PEGASIS protocol, 51% for the LEACH protocol, 10% for the PE-LEACH protocol, and 73% for the PEGASIS-NN protocol; moreover, it enhances other criteria such as energy conservation, fitness convergence, packets to cluster head, and packets to the base station.
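The parameter-free update that Rao algorithms rely on can be shown on a toy objective. The sketch below implements the Rao-1 move (drift toward the current best and away from the current worst solution) on a sphere function; the paper's actual fitness is the WSN energy/lifetime model evaluated through the routing protocols listed above.

import random

def rao1(objective, dim=10, pop_size=20, iters=200, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        scores = [objective(x) for x in pop]
        best = pop[scores.index(min(scores))]
        worst = pop[scores.index(max(scores))]
        for i, x in enumerate(pop):
            # Rao-1 move: x' = x + r * (best - worst), no algorithm-specific parameters
            cand = [xj + random.random() * (bj - wj) for xj, bj, wj in zip(x, best, worst)]
            cand = [min(hi, max(lo, c)) for c in cand]
            if objective(cand) < scores[i]:      # greedy acceptance
                pop[i] = cand
    return min(pop, key=objective)

sphere = lambda x: sum(v * v for v in x)
random.seed(0)
print(round(sphere(rao1(sphere)), 6))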
Funding: Supported by the National Natural Science Foundation of China (31970116, 72274192).
Abstract: Biodiversity has become a term familiar to virtually every citizen in modern societies. It is said that ecology studies the economy of nature and economics studies the ecology of humans; measuring biodiversity should then be similar to measuring national wealth. Indeed, there have been many parallels between ecology and economics, which actually go beyond analogies. For example, arguably the second most widely used biodiversity metric, Simpson's (1949) diversity index, is a function of the familiar Gini index in economics. One of the biggest challenges has been the high "diversity" of diversity indexes due to their excessive "speciation": there are so many indexes, each akin to a country's sovereign currency, leaving confused diversity practitioners in a dilemma. In 1973, Hill introduced the concept of "numbers equivalent", which is based on Rényi entropy and originated in economics, but possibly due to his abstruse interpretation of the concept, his message was not widely received by ecologists until nearly four decades later. What Hill suggested was similar to linking the US dollar to gold at the rate of $35 per ounce under the Bretton Woods system. Hill numbers are now considered the most appropriate biodiversity metric system, unifying Shannon, Simpson and other diversity indexes. Here, we approach another paradigmatic shift, measuring biodiversity on ecological networks, demonstrated with animal gastrointestinal microbiomes representing four major invertebrate classes and all six vertebrate classes. Network diversity can reveal the diversity of species interactions, which is a necessary step for understanding the spatial and temporal structures and dynamics of biodiversity across environmental gradients.
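The unifying formula behind Hill numbers, qD = (sum of p_i^q)^(1/(1-q)), is compact enough to compute directly; the snippet below evaluates it for a toy abundance vector at q = 0 (species richness), q -> 1 (exponential of Shannon entropy) and q = 2 (inverse Simpson concentration).

import math

def hill_number(abundances, q):
    total = sum(abundances)
    p = [a / total for a in abundances if a > 0]
    if q == 1:                                   # limit case: exp(Shannon entropy)
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1.0 / (1.0 - q))

community = [50, 30, 10, 5, 3, 2]                # toy abundance vector
for q in (0, 1, 2):
    print(f"q = {q}: {hill_number(community, q):.3f}")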
Abstract: Rapid development in Information Technology (IT) has enabled several novel application areas, such as large outdoor vehicular networks for Vehicle-to-Vehicle (V2V) transmission. Vehicular networks provide a safe and more effective driving experience by presenting time-sensitive and location-aware data. Communication occurs directly between vehicles (V2V) and with Base Station (BS) units such as the Road Side Unit (RSU), which is termed Vehicle-to-Infrastructure (V2I). However, the frequent topology alterations in VANETs generate several problems with data transmission, as vehicle velocity varies with time. Therefore, the design of an effective routing protocol for reliable and stable communication is significant. Current research demonstrates that clustering is an intelligent method for effective routing in a mobile environment. Therefore, this article presents a Falcon Optimization Algorithm-based Energy Efficient Communication Protocol for Cluster-based Routing (FOA-EECPCR) technique for VANETs. The FOA-EECPCR technique intends to group the vehicles and determine the shortest route in the VANET. To accomplish this, the FOA-EECPCR technique initially clusters the vehicles using FOA with a fitness function comprising energy, distance, and trust level. For the routing process, the Sparrow Search Algorithm (SSA) is employed with a fitness function that encompasses two variables, namely energy and distance. A series of experiments have been conducted to exhibit the enhanced performance of the FOA-EECPCR method. The experimental outcomes demonstrate the enhanced performance of the FOA-EECPCR approach over other current methods.
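The fitness functions are described only at a high level, so the sketch below is a hypothetical weighted form consistent with that description (energy, distance and trust for cluster-head selection; energy and distance for routing); the weights and normalisations are assumptions, not the paper's values.

# Hypothetical weighted fitness functions matching the description above.
def cluster_head_fitness(residual_energy, dist_to_bs, trust, w=(0.5, 0.3, 0.2)):
    # higher residual energy and trust are better; shorter distance is better
    return w[0] * residual_energy + w[1] * (1.0 / (1.0 + dist_to_bs)) + w[2] * trust

def route_fitness(residual_energy, hop_distance, w=(0.6, 0.4)):
    return w[0] * residual_energy + w[1] * (1.0 / (1.0 + hop_distance))

print(cluster_head_fitness(0.8, 120.0, 0.9))
print(route_fitness(0.7, 40.0))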
Funding: Supported by the National Natural Science Foundation of China (NSFC) Basic Science Center Program for "Multiscale Problems in Nonlinear Mechanics" (Grant No. 11988102) and by the National Natural Science Foundation of China (NSFC) (Grant No. 12202451).
Abstract: Physics-informed neural networks are a useful machine learning method for solving differential equations, but they encounter challenges in effectively learning the thin boundary layers that arise in singular perturbation problems. To resolve this issue, multi-scale-matching neural networks are proposed to solve singular perturbation problems. Inspired by matched asymptotic expansions, the solution is decomposed into inner solutions for small scales and outer solutions for large scales, corresponding to boundary layers and outer regions, respectively. Moreover, to suit neural networks, we introduce exponentially stretched variables in the boundary layers to avoid semi-infinite-region problems. Numerical results for the thin plate problem validate the proposed method.
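One conventional way to write such a decomposition, using generic symbols that are assumptions rather than the paper's notation (epsilon the perturbation parameter, a boundary layer at x = 0), is

u(x) \approx u_{\text{out}}(x) + u_{\text{in}}(\xi), \qquad \xi = \frac{x}{\varepsilon}, \qquad \eta = e^{-\xi} \in (0, 1],

where replacing the semi-infinite stretched coordinate \xi \in [0, \infty) by an exponentially compressed variable such as \eta gives the inner network a bounded input, which is one reading of the "exponentially stretched variables" mentioned above.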
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42141019 and 42261144687), the Second Tibetan Plateau Scientific Expedition and Research (STEP) program (Grant No. 2019QZKK0102), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB42010404), the National Natural Science Foundation of China (Grant No. 42175049), and the Guangdong Meteorological Service Science and Technology Research Project (Grant No. GRMC2021M01). The authors thank the National Key Scientific and Technological Infrastructure project "Earth System Science Numerical Simulator Facility" (EarthLab) for computational support and Prof. Shiming XIANG for many useful discussions. Niklas BOERS acknowledges funding from the Volkswagen Foundation.
Abstract: Climate models are vital for understanding and projecting global climate change and its associated impacts. However, these models suffer from biases that limit their accuracy in historical simulations and the trustworthiness of future projections. Addressing these challenges requires dealing with internal variability, which hinders the direct alignment between model simulations and observations and thwarts conventional supervised learning methods. Here, we employ an unsupervised Cycle-consistent Generative Adversarial Network (CycleGAN) to correct daily Sea Surface Temperature (SST) simulations from the Community Earth System Model 2 (CESM2). Our results reveal that the CycleGAN not only corrects climatological biases but also improves the simulation of major dynamic modes, including the El Niño-Southern Oscillation (ENSO) and the Indian Ocean Dipole mode, as well as SST extremes. Notably, it substantially corrects climatological SST biases, decreasing the globally averaged Root-Mean-Square Error (RMSE) by 58%. Intriguingly, the CycleGAN effectively addresses the well-known excessive westward bias in ENSO SST anomalies, a common issue in climate models that traditional methods, like quantile mapping, struggle to rectify. Additionally, it substantially improves the simulation of SST extremes, raising the pattern correlation coefficient (PCC) from 0.56 to 0.88 and lowering the RMSE from 0.5 to 0.32. This enhancement is attributed to better representations of interannual, intraseasonal, and synoptic-scale variabilities. Our study offers a novel approach to correcting global SST simulations and underscores its effectiveness across different time scales and primary dynamical modes.
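The two skill scores quoted above are straightforward to compute; the snippet below evaluates an area-weighted RMSE and the pattern correlation coefficient (PCC) on synthetic latitude-longitude SST fields, purely for illustration (no CESM2 or observational data are involved).

import numpy as np

rng = np.random.default_rng(0)
lats = np.linspace(-89.5, 89.5, 180)
obs = rng.normal(20, 5, size=(180, 360))
sim = obs + rng.normal(0.5, 1.0, size=obs.shape)            # biased "simulation"

w = np.cos(np.deg2rad(lats))[:, None] * np.ones((1, 360))   # simple area weights

def weighted_rmse(a, b, w):
    return np.sqrt(np.sum(w * (a - b) ** 2) / np.sum(w))

def pattern_corr(a, b, w):
    # centred, area-weighted pattern correlation
    aa, bb = a - np.average(a, weights=w), b - np.average(b, weights=w)
    return np.sum(w * aa * bb) / np.sqrt(np.sum(w * aa**2) * np.sum(w * bb**2))

print("RMSE:", round(weighted_rmse(sim, obs, w), 3))
print("PCC :", round(pattern_corr(sim, obs, w), 3))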