Funding: Supported by the National Natural Science Foundation of China (Grant No. 11135001) and the China Postdoctoral Science Foundation (Grant No. 2015M581905).
Abstract: The problem of network reconstruction, particularly exploring unknown network structures by analyzing measurable output data from networks, has attracted significant interest in many interdisciplinary fields in recent times. In practice, networks may be very large, and data can often be measured for only some of the nodes in a network while data for other variables are hidden. It is thus crucial to be able to infer networks from partial data. In this article, we study the reconstruction of noise-driven nonlinear networks with hidden nodes. Several difficulties appear jointly: the nonlinearity of the network dynamics, the impact of strong noise, the complexity of the interaction structure between network nodes, and missing data from certain hidden nodes. We propose using high-order correlations to treat nonlinearity and structural complexity, two-time correlations to decorrelate noise, and higher-order derivatives to overcome the difficulties of hidden nodes. A closed form of the network reconstruction is derived, and numerical simulations confirm the theoretical predictions.
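As a loose illustration of the two-time-correlation idea, the sketch below simulates a small noise-driven linear network and recovers its coupling matrix from the lagged correlations between the state and its time derivative; shifting one factor back in time decorrelates the noise term. The model, lag, and parameter values are illustrative choices, not the article's nonlinear, hidden-node setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy noise-driven linear network: dx/dt = A x + xi, with xi white noise.
# We recover A from the two-time correlations <dx(t) x(t-tau)^T> and
# <x(t) x(t-tau)^T>; lagging by tau decorrelates the noise term.
n, steps, dt, tau = 5, 400_000, 0.01, 5          # nodes, steps, step size, lag (in steps)
A_true = 0.25 * (rng.random((n, n)) < 0.3)       # sparse positive couplings
np.fill_diagonal(A_true, -1.5)                   # self-decay keeps the dynamics stable

x = np.zeros((steps, n))
for t in range(steps - 1):
    noise = rng.normal(size=n) * np.sqrt(dt)
    x[t + 1] = x[t] + A_true @ x[t] * dt + noise

dx = (x[1:] - x[:-1]) / dt                       # finite-difference derivative
dX = dx[tau:]                                    # dx(t)
X = x[tau:-1]                                    # x(t), aligned with dX
X_lag = x[:-1 - tau]                             # x(t - tau), aligned with dX

C_dx = dX.T @ X_lag / len(X_lag)                 # <dx(t) x(t-tau)^T>
C_xx = X.T @ X_lag / len(X_lag)                  # <x(t)  x(t-tau)^T>
A_est = C_dx @ np.linalg.inv(C_xx)               # two-time-correlation estimate of A

print("max abs error in A:", np.abs(A_est - A_true).max())
```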
Funding: Supported by the National Science and Technology Major Project of China (Grant No. 2014ZX10004001-014), the National Natural Science Foundation of China (Grant Nos. 61573262, 61532020 & 11472290), and the Fundamental Research Funds for the Central Universities (Grant No. 2014201020206).
Abstract: The topological structure of a complex dynamical network plays a vital role in determining the network's evolutionary mechanisms and functional behaviors, so recognizing and inferring the network structure is of both theoretical and practical significance. Although various approaches have been proposed to estimate network topologies, many are not robust to the noisy nature of network dynamics or to the ubiquity of transmission delays among network individuals. This paper focuses on topology inference of uncertain complex dynamical networks. An auxiliary network is constructed and an adaptive scheme is proposed to track the topological parameters. Notably, the considered network model is assumed to contain practical stochastic perturbations, and noisy observations are taken as control inputs of the constructed auxiliary network. In particular, the control technique can be further employed to locate hidden sources (or latent variables) in networks. Numerical examples are provided to illustrate the effectiveness of the proposed scheme. In addition, the impact of coupling strength and coupling delay on identification performance is assessed. The proposed scheme provides engineers with a convenient approach for inferring the topologies of general complex dynamical networks and locating hidden sources, and the detailed performance evaluation can further facilitate practical circuit design.
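A rough sketch of the auxiliary-network idea under simplifying assumptions: a small drive network with known node dynamics, sinusoidal forcing for persistent excitation, and weak stochastic perturbations is observed, and a response (auxiliary) network fed by the noisy observations adapts its coupling estimates from the synchronization error. The gains, frequencies, and coupling matrix below are made up for illustration; transmission delays and hidden sources are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Drive network:    x_i' = -x_i + 2 sin(w_i t) + sum_j A_ij x_j + noise
# Auxiliary network: fed by the noisy observations x, with an adaptive law
# that updates the coupling estimates A_hat from the tracking error e = x - x_hat.
n, dt, steps = 4, 1e-3, 150_000
w = np.array([1.0, 1.3, 1.7, 2.3])               # distinct drive frequencies -> persistent excitation
A_true = np.array([[0.0, 0.5, 0.0, 0.0],
                   [0.0, 0.0, 0.6, 0.0],
                   [0.4, 0.0, 0.0, 0.5],
                   [0.0, 0.3, 0.0, 0.0]])
k, gamma, sigma = 5.0, 5.0, 0.05                 # observer gain, adaptation gain, noise level

x = np.zeros(n)                                  # true (drive) network state
x_hat = np.zeros(n)                              # auxiliary (response) network state
A_hat = np.zeros((n, n))                         # coupling estimates

for step in range(steps):
    t = step * dt
    drive = -x + 2.0 * np.sin(w * t)             # known node dynamics, evaluated at the observation
    x_new = x + (drive + A_true @ x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
    e = x - x_hat                                # synchronization error
    x_hat = x_hat + (drive + A_hat @ x + k * e) * dt   # observations used as control inputs
    A_hat = A_hat + gamma * np.outer(e, x) * dt  # adaptive law for the topological parameters
    x = x_new

print("max |A_hat - A_true| =", np.abs(A_hat - A_true).max())
```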
Funding: This research is funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R151), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Suspicious mass traffic evolves constantly, making network behaviour tracing and structure analysis more complex. Neural networks yield promising results when a sufficient number of processing elements with strong interconnections between them is available. They offer efficient computational Hopfield neural network models and optimization constraints, exploiting a high degree of parallelism to yield optimal results. Artificial neural networks (ANNs) offer effective solutions for classifying and clustering various streams of data, and the results obtained depend largely on how the problem is identified. In this research work, the design of optimized applications is presented in an organized manner. In addition, this work examines theoretical approaches to achieving optimized results using ANNs, focusing mainly on design rules. The optimized design approach analyzes the internal processes of the neural network, and the network is developed on the basis of the interconnections among the hidden nodes and their learning parameters. The methodology proves best suited to nonlinear resource-allocation problems and other complex issues when a suitable design is used. The ANN considered here comprises roughly 46,000 hidden nodes and 49 million connections, deployed on fully parallel processors. The proposed ANN yielded optimal results in real-world application problems, with the results obtained using MATLAB.
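For concreteness, the toy below shows the basic Hopfield mechanism the abstract leans on: Hebbian weights store binary patterns, and asynchronous updates descend the network energy until a stored attractor is recovered. The pattern sizes and corruption level are arbitrary stand-ins, not the traffic-analysis system described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal Hopfield network sketch: Hebbian storage of binary patterns and
# asynchronous updates that lower the network energy until a stored pattern
# (an attractor) is recovered. The patterns are random stand-ins.
n_units, n_patterns = 64, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

W = (patterns.T @ patterns) / n_units            # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                         # no self-connections

def energy(state):
    return -0.5 * state @ W @ state

def recall(probe, sweeps=10):
    state = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n_units):       # asynchronous unit updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt a stored pattern by flipping 15% of its units, then recall it.
probe = patterns[0].copy()
flip = rng.choice(n_units, size=int(0.15 * n_units), replace=False)
probe[flip] *= -1

recovered = recall(probe)
print("energy before/after:", energy(probe), energy(recovered))
print("overlap with stored pattern:", (recovered == patterns[0]).mean())
```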
Funding: Supported by the National Natural Science Foundation of China (62073006) and the Beijing Natural Science Foundation of China (4212032).
Abstract: Deep stochastic configuration networks (DSCNs) produce redundant hidden nodes and connections during training, which complicates their model structures. To address this problem, this paper proposes a double pruning structure design algorithm for DSCNs based on mutual information and relevance. During training, the mutual information algorithm is used to calculate and sort the importance scores of the nodes in each hidden layer in a layer-by-layer manner; the node pruning rate of each layer is set according to the current depth of the DSCN, the nodes that contribute little to the model are deleted, and the network-related parameters are updated. When the model completes the configuration procedure, the correlation evaluation strategy is used to sort the global connection weights and delete insignificant connections; the network parameters are then updated once pruning is complete. The experimental results show that the proposed structure design method can effectively compress the scale of a DSCN model and improve its modeling speed; the loss in model accuracy is small, and fine-tuning to restore accuracy is not needed. The resulting DSCN model has practical application value in the field of regression analysis.
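The sketch below illustrates the two pruning criteria in isolation, applied to a single randomly weighted hidden layer standing in for one DSCN layer: nodes are scored by a histogram-based mutual-information estimate against the target, and the surviving output connections are then scored by a simple weight-times-correlation relevance measure. The bin counts, pruning rates, and scoring details are assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def mutual_information(a, b, bins=16):
    """Histogram-based MI estimate (in nats) between two 1-D signals."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())

# Toy regression data and one random hidden layer (stand-in for a DSCN layer).
X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=500)
W_in = rng.normal(size=(3, 40))
b_in = rng.normal(size=40)
H = np.tanh(X @ W_in + b_in)                       # hidden node outputs (500 x 40)

# Step 1: node pruning by mutual information with the target.
mi = np.array([mutual_information(H[:, j], y) for j in range(H.shape[1])])
keep = np.argsort(mi)[int(0.3 * H.shape[1]):]      # prune the 30% least informative nodes
H = H[:, keep]

# Step 2: fit output weights, then prune connections with low relevance scores.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
corr = np.array([abs(np.corrcoef(H[:, j], y)[0, 1]) for j in range(H.shape[1])])
relevance = np.abs(beta) * corr
beta[relevance < np.quantile(relevance, 0.2)] = 0.0  # drop the weakest 20% of connections

rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
print(f"kept nodes: {H.shape[1]}, nonzero weights: {(beta != 0).sum()}, RMSE: {rmse:.3f}")
```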
Funding: This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 61673159 and 61370144, and the Natural Science Foundation of Hebei Province of China under Grant No. F2016202145.
Abstract: Extreme learning machine (ELM) is a learning algorithm for generalized single-hidden-layer feed-forward networks (SLFNs). To obtain a suitable network architecture, the Incremental Extreme Learning Machine (I-ELM) constructs SLFNs by adding hidden nodes one at a time. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to minimize the training error, they either leave the construction manner of I-ELM unchanged or face the risk of over-fitting. Making the testing error converge quickly and stably therefore becomes an important issue. In this paper, we propose a new incremental ELM, referred to as the Length-Changeable Incremental Extreme Learning Machine (LCI-ELM). It allows more than one hidden node to be added to the network at a time, and the existing network is regarded as a whole during output-weight tuning. The output weights of the newly added hidden nodes are determined using a partial error-minimizing method. We prove that an SLFN constructed using LCI-ELM has universal approximation capability on a compact input set as well as on a finite training set. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate and a lower over-fitting risk than some competitive I-ELM-class algorithms.
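One simple reading of block-wise growth is sketched below: hidden nodes are appended several at a time, and each new block's output weights are obtained by a least-squares fit to the current residual error, a partial error-minimizing step. This is an illustrative simplification under assumed block sizes and data, not the exact LCI-ELM output-weight update.

```python
import numpy as np

rng = np.random.default_rng(4)

def hidden_block(X, n_nodes, rng):
    """A block of randomly parameterized sigmoid hidden nodes, ELM-style."""
    W = rng.normal(size=(X.shape[1], n_nodes))
    b = rng.normal(size=n_nodes)
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Toy 1-D regression problem.
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=400)

H_list, beta_list = [], []
residual = y.copy()
for step in range(10):                      # grow the SLFN by 10 blocks of 5 nodes
    H_new = hidden_block(X, 5, rng)
    beta_new, *_ = np.linalg.lstsq(H_new, residual, rcond=None)  # fit the new block to the residual
    H_list.append(H_new)
    beta_list.append(beta_new)
    residual = residual - H_new @ beta_new  # training error after adding this block
    print(f"hidden nodes: {5 * (step + 1):3d}, training RMSE: "
          f"{np.sqrt(np.mean(residual ** 2)):.4f}")

# The blocks together form one SLFN: concatenate hidden outputs and output weights.
H_all = np.hstack(H_list)
beta_all = np.concatenate(beta_list)
print("final RMSE:", round(float(np.sqrt(np.mean((H_all @ beta_all - y) ** 2))), 4))
```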