With an increasingly urgent demand for fast-recovery routing mechanisms in large-scale networks, minimizing the disruption caused by network failures has become critical. A large number of studies have shown that failures occur on the Internet inevitably and frequently. The routing protocols currently deployed on the Internet rely on a reconvergence mechanism to cope with failures. During reconvergence, packets may be lost because of inconsistent routing information, which greatly reduces network availability and seriously affects the Internet service provider's (ISP's) service quality and reputation. Improving network availability has therefore become an urgent problem. The Internet Engineering Task Force suggests using the downstream path criterion (DC) to address all single-link failure scenarios. However, existing methods for implementing DC are time consuming, require a large amount of router CPU resources, and may degrade router performance; the computation overhead they introduce is significant, especially in large-scale networks. This study therefore proposes an efficient intra-domain routing protection algorithm (ERPA) for large-scale networks. Theoretical analysis indicates that the time complexity of ERPA is lower than that of constructing a shortest path tree. Experimental results show that ERPA reduces the computation overhead significantly compared with existing algorithms while offering the same network availability as DC.
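As a concrete illustration of the downstream path criterion itself (not of ERPA, whose internals are not given here), the following Python sketch uses networkx on a made-up weighted topology: a neighbor qualifies as a loop-free backup next hop toward a destination if its distance to the destination is strictly smaller than the computing router's own distance.

```python
# Minimal sketch of the DC check; topology and weights are illustrative assumptions.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("s", "a", 1), ("s", "b", 3), ("a", "b", 1), ("a", "d", 2), ("b", "d", 1),
])

def downstream_next_hops(G, src, dst):
    """Neighbors n of src with dist(n, dst) < dist(src, dst) (the DC condition)."""
    dist = nx.single_source_dijkstra_path_length(G, dst)   # distances toward dst
    return [n for n in G.neighbors(src) if dist[n] < dist[src]]

print(downstream_next_hops(G, "s", "d"))   # ['a', 'b']: both satisfy DC here
```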
The virtual network embedding (VNE) problem, which is NP-hard, is a key issue in implementing software-defined networks built on network virtualization. In contrast to other studies, which focus on designing heuristic algorithms to reduce the hardness of this NP-hard problem, we propose a robust VNE algorithm based on component connectivity in large-scale networks. We distinguish the different components of the substrate network and embed virtual network (VN) requests onto them separately, and the k-core is applied to characterize VN topologies so that each VN request can be embedded onto its corresponding component. Load balancing is also considered, which helps avoid blocked or bottlenecked areas of the substrate network. Simulation experiments show that, compared with other algorithms in large-scale networks, our algorithm clearly improves the acceptance ratio, average revenue, and robustness while reducing the average cost. The results also reveal the relationship between component connectivity (both the giant component and the small components) and these performance metrics.
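To make the component/k-core idea more tangible, here is a small hedged sketch with networkx. The matching rule (dense, high-core requests go to the giant component; tree-like requests may use a smaller component that fits) is an illustrative assumption, not the paper's exact policy.

```python
# Sketch: classify a VN request by its maximum k-core and pick a substrate component.
import networkx as nx

def classify_vn(vn: nx.Graph) -> int:
    """Coreness of the densest part of the VN request."""
    return max(nx.core_number(vn).values())

def pick_component(substrate: nx.Graph, vn: nx.Graph) -> set:
    comps = sorted(nx.connected_components(substrate), key=len, reverse=True)
    # assumption: VNs with max core >= 2 go to the giant component,
    # tree-like VNs (max core == 1) may use a smaller component if it is big enough
    if classify_vn(vn) >= 2:
        return comps[0]
    for comp in comps[1:]:
        if len(comp) >= vn.number_of_nodes():
            return comp
    return comps[0]
```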
Reducing network energy consumption and building green networks have become key scientific problems in academic and industrial research. Existing energy-efficiency schemes are based on a known traffic matrix, but acquiring a real-time traffic matrix in today's complex networks is difficult. This research therefore investigates how to reduce network energy consumption without a real-time traffic matrix. In particular, this paper proposes an intra-domain energy-efficient routing scheme based on multipath routing. It analyzes the relationship between routing availability and energy-efficient routing and integrates the two mechanisms to satisfy the requirements of availability and energy efficiency. The main contributions are as follows: (1) a link criticality model is developed to quantitatively measure the importance of links in a network; (2) on the basis of this model, the paper analyzes an energy-efficient routing technique based on multipath routing that achieves availability and energy efficiency simultaneously; (3) an energy-efficient routing algorithm based on multipath routing in large-scale networks is proposed; (4) the proposed method does not require a real-time traffic matrix and is thus easy to apply in practice; and (5) the proposed algorithm is verified on several network topologies. Experimental results show that the algorithm not only reduces network energy consumption but also ensures routing availability.
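One common proxy for link criticality is shortest-path edge betweenness; the paper defines its own criticality model, so the sketch below is only an illustrative stand-in showing how such a measure could drive the choice of links to put to sleep.

```python
# Sketch: rank links by a criticality proxy (edge betweenness) on a synthetic topology.
import networkx as nx

G = nx.erdos_renyi_graph(30, 0.2, seed=1)
criticality = nx.edge_betweenness_centrality(G)

# Links with low criticality are natural candidates to power down,
# while highly critical links stay on to preserve routing availability.
sleep_candidates = sorted(criticality, key=criticality.get)[:5]
print(sleep_candidates)
```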
RECENT advances in sensing, communication, and computing have opened the door to the deployment of large-scale networks of sensors and actuators that allow fine-grained monitoring and control of a multitude of physical processes and infrastructures. Field experts call these paradigms Cyber-Physical Systems (CPS) because the dynamics among computers, networking media/resources, and physical systems interact in such a way that multi-disciplinary technologies (embedded systems, computers, communications, and controls) are required to accomplish prescribed missions. Moreover, CPS are expected to play a significant role in the design and development of future engineering applications such as smart grids, transportation systems, nuclear plants, and smart factories.
A major challenge of network virtualization is the virtual network resource allocation problem, which deals with the efficient mapping of virtual nodes and virtual links onto substrate network resources. Existing algorithms concentrate almost exclusively on randomly generated small-scale topologies and are not suitable for practical large-scale network environments, because too much time is spent traversing the substrate network (SN) and virtual network (VN), resulting in congestion of VN requests. To address this problem, a virtual network mapping algorithm for large-scale networks is proposed based on the small-world characteristic of complex networks and a network coordinate system. Experimental results show that, compared with the D-ViNE algorithm, our algorithm improves overall performance.
In the graph signal processing (GSP) framework, distributed algorithms are highly desirable for processing signals defined on large-scale networks. However, in most existing distributed algorithms, all nodes homogeneously perform the local computation, which entails heavy computational and communication costs. Moreover, in many real-world networks, such as those with straggling nodes, this homogeneous manner may result in serious delay or even failure. To this end, we propose active network decomposition algorithms to select non-straggling nodes (normal nodes) that perform the main computation and communication across the network. To accommodate the decomposition in different kinds of networks, two approaches are developed: a centralized decomposition that leverages the adjacency of the network, and a distributed decomposition that employs indicator message transmission between neighboring nodes; these constitute the main contribution of this paper. By incorporating the active decomposition scheme, a distributed Newton method is employed to solve the least-squares problem in GSP, where the Hessian inverse is approximately evaluated by patching together a series of inverses of local Hessian matrices, each of which is governed by one normal node. The proposed algorithm inherits the fast convergence of second-order algorithms while maintaining low computational and communication cost. Numerical examples demonstrate the effectiveness of the proposed algorithm.
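The patched-inverse idea can be made concrete with a toy numerical sketch. It rests on strong assumptions: a synthetic noiseless least-squares system, four "normal" nodes each holding a block of rows, and a simple damped combination of local Hessian inverses that is only one possible way to patch them, not the paper's construction.

```python
# Toy approximate-Newton iteration with an inverse patched from local Hessians.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = A @ x_true

blocks = np.array_split(np.arange(200), 4)              # rows held by 4 normal nodes
H_loc = [A[r].T @ A[r] for r in blocks]                  # local Hessians
H_inv_approx = sum(np.linalg.inv(H) for H in H_loc) / len(blocks) ** 2

x = np.zeros(5)
for _ in range(50):                                      # damped approximate-Newton steps
    grad = A.T @ (A @ x - b)                             # obtainable by in-network aggregation
    x = x - 0.5 * H_inv_approx @ grad

print(np.linalg.norm(x - x_true))                        # the error shrinks toward zero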
Many science and engineering applications involve solving a linear least-squares system formed from field measurements. In distributed cyber-physical systems (CPS), each sensor node used for measurement often knows only a partial, independent set of rows of the least-squares system. To solve the least-squares problem centrally, all the measurements must be gathered at one location before performing the computation. Such data collection and computation are inefficient because of bandwidth and time constraints, and are sometimes infeasible because of data privacy concerns. Iterative methods are natural candidates for solving this problem, and many studies address them. However, most of the proposed solutions concern centralized or parallel computation, and only a few have the potential to be applied in distributed networks, even though distributed computation is strongly preferred or demanded in many real-world applications, e.g., smart grids and target tracking. This paper surveys the representative iterative methods for distributed least-squares in networks.
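As a minimal illustration of the distributed setting the survey considers, the sketch below (synthetic data, a ring of four sensor nodes, and an assumed doubly stochastic averaging matrix) shows one simple strategy: nodes exchange only their local normal-equation blocks via average consensus and then solve the reconstructed global system locally, so raw measurements never leave a node.

```python
# Sketch: average consensus on local normal-equation blocks, then a local solve.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))
x_true = rng.standard_normal(3)
b = A @ x_true
rows = np.array_split(np.arange(20), 4)                 # 4 sensor nodes

# local statistics: raw measurements are never shared
H = np.stack([A[r].T @ A[r] for r in rows])             # shape (4, 3, 3)
g = np.stack([A[r].T @ b[r] for r in rows])             # shape (4, 3)

# doubly stochastic averaging matrix for a 4-node ring
W = np.array([[.50, .25, .00, .25],
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])

for _ in range(200):                                    # average consensus rounds
    H = np.einsum("ij,jkl->ikl", W, H)
    g = W @ g

# every node now holds approximately (1/4) of the sums; rescale and solve locally
x_hat = np.linalg.solve(4 * H[0], 4 * g[0])
print(np.linalg.norm(x_hat - x_true))                   # close to 0
```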
In this paper, we conduct research on large-scale network intrusion detection based on principal component analysis (PCA) and drop-quality sampling. With the growth of network security issues, intrusion detection has become a research hotspot. There are two main types of intrusion detection technology: misuse detection and anomaly detection. Misuse detection can detect attacks more accurately but has a high miss rate, whereas anomaly detection can detect unknown attacks but has a higher rate of false positives. The network intrusion detection problem reduces to a discrimination problem on network data flows, namely judging whether a flow is normal or malicious; in this sense, intrusion detection can be understood as a pattern recognition problem. Our research integrates PCA and sampling techniques to propose a new approach to intrusion detection systems (IDS) that is innovative and will promote the development of the corresponding techniques.
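The general pipeline this builds on can be sketched as follows; the synthetic flow features, the sub-sampling rule standing in for drop-quality sampling, and the classifier choice are all assumptions for illustration.

```python
# Sketch: PCA for dimensionality reduction + sub-sampling + a simple flow classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 40))                  # synthetic flow features
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(5000) > 1.5).astype(int)

# sub-sampling stand-in: keep all attack flows, sample an equal number of normal flows
attack, normal = np.where(y == 1)[0], np.where(y == 0)[0]
keep = np.concatenate([attack, rng.choice(normal, size=len(attack), replace=False)])

Xs, ys = X[keep], y[keep]
Xtr, Xte, ytr, yte = train_test_split(Xs, ys, test_size=0.3, random_state=0)

pca = PCA(n_components=10).fit(Xtr)                  # principal components of the flows
clf = LogisticRegression(max_iter=1000).fit(pca.transform(Xtr), ytr)
print("accuracy:", clf.score(pca.transform(Xte), yte))
```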
With the development of big data and social computing, large-scale group decision making (LGDM) is now merging with social networks. Using social network analysis (SNA), this study proposes an LGDM consensus model that considers the trust relationships among decision makers (DMs). In the consensus-measurement process, the social network is constructed according to the social relationships among DMs, and the Louvain method is introduced to partition the network into subgroups. The weights of each decision maker and each subgroup are computed from comprehensive network weights and trust weights. In the consensus-improvement process, a feedback mechanism with four identification rules and two direction rules is designed to guide the improvement process. Based on the trust relationships among DMs, preferences are modified and the corresponding social network is updated to accelerate consensus. Compared with previous research, the proposed model not only allows subgroups to be reconstructed and updated during the adjustment process but also improves the accuracy of the adjustment through the feedback mechanism. Finally, an example analysis is conducted to verify the effectiveness and flexibility of the proposed method, and a comparison with previous studies highlights its superiority in solving the LGDM problem.
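The subgroup-detection step can be sketched as below, assuming the trust network is an undirected weighted graph and using networkx's Louvain implementation (available since networkx 2.8); the per-DM weight shown is only a simple proxy, not the paper's comprehensive network/trust weighting.

```python
# Sketch: Louvain subgroups on a toy trust network, plus a crude per-DM weight.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    (1, 2, 0.9), (2, 3, 0.8), (1, 3, 0.7),    # a tightly trusting subgroup
    (4, 5, 0.9), (5, 6, 0.8), (4, 6, 0.6),    # another subgroup
    (3, 4, 0.2),                              # weak trust across subgroups
])

subgroups = nx.community.louvain_communities(G, weight="weight", seed=42)
print(subgroups)                              # e.g. [{1, 2, 3}, {4, 5, 6}]

# assumed proxy weight: normalized weighted degree, i.e. how much trust a DM receives
deg = dict(G.degree(weight="weight"))
weights = {v: d / sum(deg.values()) for v, d in deg.items()}
print(weights)
```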
Objective: Epigenetic abnormalities play a critical role in breast cancer by regulating gene expression; however, the intricate interrelationships and key roles of the approximately 400 epigenetic regulators in breast cancer remain elusive. It is important to decipher the comprehensive epigenetic regulatory network in breast cancer cells to identify master epigenetic regulators and potential therapeutic targets. Methods: We employed high-throughput sequencing-based high-throughput screening (HTS²) to detect changes in the expression of 2,986 genes following the knockdown of 400 epigenetic regulators. Bioinformatics tools were then applied to the resulting gene expression signatures to investigate epigenetic regulation in breast cancer. Results: Using these gene expression signatures, we classified the epigenetic regulators into five distinct clusters, each characterized by specific functions. We discovered functional similarities between BAZ2B and SETMAR, as well as between CLOCK and CBX3, and observed that CLOCK functions in a manner opposite to that of HDAC8 in downstream gene regulation. Notably, we constructed an epigenetic regulatory network based on the gene expression signatures, which revealed 8 distinct modules and identified 10 master epigenetic regulators in breast cancer. Conclusions: Our work deciphers the extensive regulation among hundreds of epigenetic regulators. The identification of 10 master epigenetic regulators offers promising therapeutic targets for breast cancer treatment.
Self-normalizing neural networks (SNNs) regulate activation and gradient flows through activation functions with the self-normalization property. Because SNNs do not rely on norms computed from minibatches, they are more friendly to data parallelism, kernel fusion, and emerging architectures such as ReRAM-based accelerators. However, existing SNNs have mainly demonstrated their effectiveness on toy datasets and fall short in accuracy on large-scale tasks such as ImageNet; they lack the strong normalization, regularization, and expressive power required for wider, deeper models and larger-scale tasks. To enhance the normalization strength, this paper introduces a comprehensive and practical definition of the self-normalization property in terms of the stability and attractiveness of the statistical fixed points. It is comprehensive because it jointly considers all the fixed points used by existing studies: the first and second moments of the forward activation and the expected Frobenius norm of the backward gradient. The practicality comes from the analytical equations provided in this paper for assessing the stability and attractiveness of each fixed point, derived from theoretical analysis of the forward and backward signals. The proposed definition is applied to a meta activation function inspired by prior research, leading to a stronger self-normalizing activation function named "bi-scaled exponential linear unit with backward standardized" (bSELU-BSTD). We provide both theoretical and empirical evidence that it is superior to existing approaches. To enhance regularization and expressive power, we further propose scaled Mixup and channel-wise scale-and-shift. With these three techniques, our approach achieves 75.23% top-1 accuracy on ImageNet with Conv MobileNet V1, surpassing existing self-normalizing activation functions. To the best of our knowledge, this is the first SNN that achieves accuracy comparable to batch normalization on ImageNet.
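The fixed-point idea can be illustrated with the standard SELU activation (not the paper's bSELU-BSTD, whose parameters are not given here): with zero-mean, unit-variance pre-activations, the output mean and variance stay near (0, 1), which is what a stable statistical fixed point of the forward moments refers to.

```python
# Monte-Carlo check of the (0, 1) moment fixed point for standard SELU.
import numpy as np

ALPHA, SCALE = 1.6732632423543772, 1.0507009873554805   # standard SELU constants

def selu(x):
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)        # pre-activations at the assumed fixed point
y = selu(x)
print(y.mean(), y.var())                  # both stay close to 0 and 1
```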
Large-scale wireless sensor networks (WSNs) play a critical role in monitoring dangerous scenarios and responding to medical emergencies. However, the inherent instability and error-prone nature of wireless links present significant challenges, necessitating efficient data collection and reliable transmission services. This paper addresses the limitations of existing data transmission and recovery protocols by proposing a systematic end-to-end design tailored to medical event-driven, cluster-based, large-scale WSNs. The primary goal is to enhance the reliability of data collection and transmission services in a comprehensive and practical way. Our approach refines the hop-count-based routing scheme to achieve fairness in forwarding reliability, emphasizes reliable data collection within clusters, and establishes robust data transmission over multiple hops; these systematic improvements are designed to optimize the overall performance of the WSN in real-world scenarios. Simulation results validate the exceptional performance of the proposed protocol compared with other prominent data transmission schemes. The evaluation spans varying sensor densities, wireless channel conditions, and packet transmission rates, showcasing the protocol's superiority in ensuring reliable and efficient data transfer. By prioritizing fairness, reliability, and efficiency, the proposed end-to-end design successfully addresses the challenges posed by unstable wireless links in large-scale WSNs and offers a valuable contribution to the field of medical event-driven WSNs.
A low-Earth-orbit (LEO) satellite network can provide full-coverage access services worldwide and is an essential candidate for future 6G networking. However, the large variability in the geographic distribution of the Earth's population leads to an uneven distribution of access-service volume, and the limited resources of satellites are far from sufficient to serve the traffic in hotspot areas. To enhance the forwarding capability of satellite networks, we first assess how hotspot areas under different load cases and spatial scales affect the overall network throughput of an LEO satellite network. We then propose a multi-region cooperative traffic scheduling algorithm that migrates low-grade traffic from hotspot areas to coldspot areas for forwarding, significantly increasing the overall throughput of the satellite network while sacrificing some end-to-end forwarding latency. The algorithm can utilize global satellite resources and improve the utilization of network resources. We model the cooperative multi-region scheduling of large-scale LEO satellites and, based on the model, build a system testbed in OMNeT++ to compare the proposed method with existing techniques. The simulations show that our method reduces the packet loss probability by 30% and improves the resource utilization ratio by 3.69%.
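A toy greedy sketch of the migration idea follows: when a hotspot region exceeds its satellite capacity, its surplus (low-grade) traffic is moved to the coldspot region with the most spare capacity, at the cost of extra path latency. Region names, capacities, and demands are made-up numbers, and the real algorithm's grading and routing decisions are not modeled.

```python
# Greedy hotspot-to-coldspot traffic migration on made-up regional loads.
capacity = {"hotspot_A": 100, "hotspot_B": 90, "coldspot_C": 80, "coldspot_D": 80}
demand   = {"hotspot_A": 140, "hotspot_B": 120, "coldspot_C": 20, "coldspot_D": 10}

migrations = []
for region in sorted(demand, key=lambda r: demand[r] - capacity[r], reverse=True):
    overload = demand[region] - capacity[region]
    while overload > 0:
        # absorb the surplus in the region with the most spare capacity
        target = max(demand, key=lambda r: capacity[r] - demand[r])
        spare = capacity[target] - demand[target]
        if spare <= 0:
            break
        moved = min(overload, spare)
        demand[region] -= moved
        demand[target] += moved
        overload -= moved
        migrations.append((region, target, moved))

print(migrations)   # [('hotspot_A', 'coldspot_D', 40), ('hotspot_B', 'coldspot_C', 30)]
```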
Analyzing rock mass seepage using the discrete fracture network (DFN) flow model poses challenges when dealing with complex fracture networks. This paper presents a novel DFN flow model that incorporates the actual connections of large-scale fractures. Notably, the model efficiently manages over 20,000 fractures without requiring adjustments to the DFN geometry. All geometric analyses, such as identifying connected fractures, dividing the two-dimensional domain into closed loops, triangulating arbitrary loops, and refining triangular elements, are fully automated. The analysis processes are comprehensively introduced, and the core algorithms, along with their pseudo-codes, are outlined and explained to assist readers in their own programming. The accuracy of the geometric analyses is validated through topological graphs representing the connection relationships between fractures. In a practical application, the proposed model is employed to assess the water-sealing effectiveness of an underground storage cavern project. The analysis indicates that the existing design scheme can effectively prevent the stored oil from leaking in the presence of both dense and sparse fractures. Furthermore, after extensive modification and optimization, the scale and precision of the model computation suggest that the proposed model and the developed codes can meet the requirements of engineering applications.
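The first geometric step, grouping fractures into connected clusters, can be sketched with segment-intersection tests plus union-find. The fracture coordinates are illustrative, degenerate (collinear or endpoint-touching) cases are ignored, and the paper's full pipeline (loop detection, triangulation, refinement) is far more involved.

```python
# Sketch: identify connected 2D fractures via pairwise intersection + union-find.
def segments_intersect(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 properly cross (collinear cases ignored)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def connected_fracture_sets(fractures):
    parent = list(range(len(fractures)))
    def find(i):                                   # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(fractures)):
        for j in range(i + 1, len(fractures)):
            if segments_intersect(*fractures[i], *fractures[j]):
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(fractures)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

fractures = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((5, 5), (6, 6))]
print(connected_fracture_sets(fractures))          # [[0, 1], [2]]
```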
The large spatial/temporal/frequency scale of geoscience and remote-sensing datasets causes memory issues when using convolutional neural networks for (sub-)surface data segmentation. Recently developed fully reversible or fully invertible networks can mostly avoid memory limitations by recomputing the states during the backward pass through the network. This results in a low and fixed memory requirement for storing network states, as opposed to the typical linear memory growth with network depth. This work focuses on a fully invertible network based on the telegraph equation. While reversibility saves most of the memory that deep networks spend on data, the convolutional kernels can take up most of the memory if fully invertible networks contain multiple invertible pooling/coarsening layers. We address this explosion in the number of convolutional kernels by combining fully invertible networks with layers that store the convolutional kernels directly in compressed form. A second challenge is that an invertible network outputs a tensor of the same size as its input. This property prevents the straightforward application of invertible networks to tasks that map between different input and output dimensions, need outputs with more channels than the input data, or require outputs whose resolution differs from that of the input. However, we show that by employing invertible networks in a non-standard fashion, we can still use them for these tasks. Examples in hyperspectral land-use classification, airborne geophysical surveying, and seismic imaging illustrate that we can input large data volumes in one chunk and do not need to work on small patches, use dimensionality reduction, or employ methods that classify a patch to a single central pixel.
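Why reversible layers save activation memory can be shown with a generic additive coupling layer: the layer's inputs are recomputed exactly from its outputs during the backward pass, so they never need to be stored. This sketch is for illustration only; the paper's network is based on the telegraph equation and is more elaborate.

```python
# Numpy sketch of an additive coupling layer and its exact inverse.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.1          # weights of the residual branch

def f(x):                                      # arbitrary sub-network
    return np.tanh(x @ W)

def forward(x1, x2):
    return x1, x2 + f(x1)

def inverse(y1, y2):
    return y1, y2 - f(y1)                      # exact state recomputation

x1, x2 = rng.standard_normal((2, 4, 8))
y1, y2 = forward(x1, x2)
x1_rec, x2_rec = inverse(y1, y2)
print(np.allclose(x1, x1_rec), np.allclose(x2, x2_rec))   # True True
```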
To make calculation more efficient in practical hydraulic simulations, an improved algorithm was proposed and applied to a practical water distribution system. The methodology was developed by extending the traditional loop-equation theory with the efficiency advantages of graph theory. The use of the spanning-tree technique from graph theory makes the proposed algorithm efficient in calculation and simple to implement in computer code. The algorithms for topological generation and their practical implementation are presented in detail. In an application to a practical urban system, CPU time and memory consumption were reduced while accuracy was greatly enhanced compared with existing methods.
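The spanning-tree idea behind the loop equations can be sketched briefly: every edge left out of a spanning tree (a chord) closes exactly one independent loop, and those loops are where the energy-conservation equations are written. The small pipe network below is illustrative.

```python
# Sketch: independent loops of a looped pipe network from spanning-tree chords.
import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)])   # a small looped network

tree = nx.minimum_spanning_tree(G)
chords = [e for e in G.edges() if not tree.has_edge(*e)]

loops = []
for u, v in chords:
    # the tree path from u to v plus the chord forms one independent loop
    loops.append(nx.shortest_path(tree, u, v) + [u])

print(f"{len(chords)} independent loops:", loops)
```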
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), in which most decision variables are zero. As a result, many algorithms use a two-layer encoding approach that optimizes the binary variable Mask and the real variable Dec separately. Existing optimizers often focus on locating the non-zero variable positions in order to optimize the binary Mask. However, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets that appear together in a dataset to reveal correlations in the data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find mask combinations that yield better objective values, enabling fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of both solution quality and convergence speed.
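A simplified sketch of the mask-mining idea: take the masks of the better-performing particles, count which non-zero positions appear most often (only 1-itemsets are mined here for brevity), and seed a new mask from those frequent items. The threshold and the seeding rule are assumptions, not TELSO's exact operators.

```python
# Sketch: mine frequently non-zero positions from elite masks to seed a new sparse Mask.
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
elite_masks = (rng.random((20, 30)) < 0.15).astype(int)   # masks of better particles
elite_masks[:, [3, 7, 12]] = 1                            # pretend these variables matter

counts = Counter()
for mask in elite_masks:
    counts.update(np.flatnonzero(mask).tolist())          # frequency of each non-zero position

frequent_dims = sorted(d for d, c in counts.items() if c >= 0.8 * len(elite_masks))
new_mask = np.zeros(30, dtype=int)
new_mask[frequent_dims] = 1                               # candidate sparse mask
print(frequent_dims)                                      # typically [3, 7, 12]
```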
As a result of rapid developments in electronics and communication technology, large-scale unmanned aerial vehicles (UAVs) are being harnessed for various promising applications in a coordinated manner. Despite the numerous advantages, resource management across the various domains of large-scale UAV communication networks is the key challenge that must be solved urgently. In particular, given the inherent requirements and the future development trend, distributed resource management is appropriate. In this article, we investigate the resource management problem for large-scale UAV communication networks from a game-theoretic perspective, which exactly matches the distributed and autonomous manner of operation. By exploring the inherent features, we discuss the distinctive challenges. We then explore several game-theoretic models that not only address these challenges but also have broad application prospects, providing the basics of each model and discussing its potential applications for resource management in large-scale UAV communication networks; mean-field games, graphical games, Stackelberg games, coalition games, and potential games are included. After that, we present two case studies to highlight the feasibility of these game-theoretic models. Finally, we give some future research directions to shed light on future opportunities and applications.
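One of the listed models, the potential game, can be illustrated with a toy best-response sketch for channel selection: each UAV repeatedly picks the channel with the least interference from its neighbors, and because every such move strictly reduces the number of conflicting links, the dynamics settle at a pure-strategy Nash equilibrium. The topology and channel count are made up and do not come from the article's case studies.

```python
# Best-response dynamics in a toy interference (potential) game for channel selection.
import random
import networkx as nx

random.seed(0)
G = nx.random_geometric_graph(20, 0.35, seed=1)      # UAVs that interfere if close
channels = {v: random.randrange(3) for v in G}       # 3 available channels

changed = True
while changed:
    changed = False
    for v in G:
        interference = lambda c: sum(1 for u in G[v] if channels[u] == c)
        best = min(range(3), key=interference)
        if interference(best) < interference(channels[v]):
            channels[v] = best                       # strictly reduces total conflicts
            changed = True

print(channels)   # a pure-strategy Nash equilibrium of the toy game
```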
Assessment of past-climate simulations by regional climate models (RCMs) is important for understanding the reliability of RCMs when they are used to project future regional climate. Here, we assess the performance of, and discuss possible causes of biases in, a WRF-based RCM with a grid spacing of 50 km, named WRFG, from the North American Regional Climate Change Assessment Program (NARCCAP) in simulating wet-season precipitation over the Central United States for a period when observational data are available. The RCM reproduces the key features of the precipitation distribution during late spring to early summer, although it tends to underestimate the magnitude of precipitation. This dry bias is partly due to the model's lack of skill in simulating nocturnal precipitation, related to the absence of eastward-propagating convective systems in the simulation. Inaccuracy in reproducing the large-scale circulation and environmental conditions is another contributing factor. The simulated pressure gradient between the Rocky Mountains and the Gulf of Mexico is too weak, resulting in weaker southerly winds in between and a reduction of warm moist air transport from the Gulf to the Central Great Plains. The simulated low-level horizontal convergence fields are also less favorable for upward motion than in the NARR and hence for the development of moist convection. Therefore, careful examination of an RCM's deficiencies and identification of the sources of errors are important when using the RCM to project precipitation changes in future climate scenarios.
A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) has driven tremendous improvements over acoustic models based on the Gaussian Mixture Model (GMM). However, hybrid LSTM-based models require a force-aligned Hidden Markov Model (HMM) state sequence obtained from a GMM-based acoustic model, and therefore need a long computation time to train both the GMM-based acoustic model and the deep-learning-based acoustic model. To solve this problem, an acoustic model using the CTC algorithm is proposed. The CTC algorithm does not require a GMM-based acoustic model because it does not use a force-aligned HMM state sequence. However, previous work on LSTM RNN-based acoustic models using CTC used small-scale training corpora. In this paper, an LSTM RNN-based acoustic model using CTC is trained on a large-scale training corpus and its performance is evaluated. The implemented acoustic model achieves a Word Error Rate (WER) of 6.18% for clean speech and 15.01% for noisy speech, which is similar to the performance of acoustic models based on the hybrid method.
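The reason CTC removes the need for a forced alignment is that it sums the probabilities of all frame-level alignments of a label sequence. A compact numpy sketch of the standard CTC forward (alpha) recursion is given below; the per-frame probabilities are toy values, and real systems work in log space with the network's softmax outputs.

```python
# Sketch: CTC forward algorithm summing over all alignments of a label sequence.
import numpy as np

def ctc_label_probability(probs, labels, blank=0):
    """probs: (T, C) per-frame label posteriors; labels: target symbol id sequence."""
    ext = [blank]
    for l in labels:
        ext += [l, blank]                      # interleave blanks: 2L+1 states
    S, T = len(ext), probs.shape[0]

    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, blank]
    alpha[0, 1] = probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s > 0:
                a += alpha[t - 1, s - 1]
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]       # skip the blank between distinct labels
            alpha[t, s] = a * probs[t, ext[s]]
    return alpha[T - 1, S - 1] + alpha[T - 1, S - 2]

rng = np.random.default_rng(0)
probs = rng.random((6, 4))
probs /= probs.sum(axis=1, keepdims=True)      # 6 frames, 4 symbols (0 = blank)
print(ctc_label_probability(probs, [1, 3, 2]))
```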