Phishing attacks present a persistent and evolving threat in the cybersecurity landscape, necessitating the development of more sophisticated detection methods. Traditional machine learning approaches to phishing detection have relied heavily on feature engineering and have often fallen short in adapting to the dynamically changing patterns of phishing Uniform Resource Locators (URLs). Addressing these challenges, we introduce a framework that integrates the sequential data processing strengths of a Recurrent Neural Network (RNN) with the hyperparameter optimization prowess of the Whale Optimization Algorithm (WOA). Our model capitalizes on an extensive Kaggle dataset featuring over 11,000 URLs, each delineated by 30 attributes. The WOA's hyperparameter optimization enhances the RNN's performance, as evidenced by a meticulous validation process. The results, encapsulated in precision, recall, and F1-score metrics, surpass baseline models, achieving an overall accuracy of 92%. This study not only demonstrates the RNN's proficiency in learning complex patterns but also underscores the WOA's effectiveness in refining machine learning models for the critical task of phishing detection.
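To make the optimization loop concrete, the following is a minimal sketch of how a WOA-style search could tune RNN hyperparameters against a validation score; the fitness surrogate, the parameter bounds, and the coefficient schedule are illustrative assumptions rather than the authors' exact configuration.

```python
import numpy as np

# Hypothetical stand-in for "train the RNN and return validation accuracy";
# in the paper this would train on the 11,000-URL Kaggle dataset.
def fitness(position):
    lr, hidden = position
    # toy surrogate with a maximum near lr=0.01, hidden=64 (purely illustrative)
    return -((np.log10(lr) + 2) ** 2 + ((hidden - 64) / 64) ** 2)

bounds = np.array([[1e-4, 1e-1],   # learning rate
                   [8, 256]])      # hidden units

rng = np.random.default_rng(0)
n_whales, n_iter, dim = 10, 30, 2
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_whales, dim))
best = X[np.argmax([fitness(x) for x in X])].copy()

for t in range(n_iter):
    a = 2 - 2 * t / n_iter                      # linearly decreasing coefficient
    for i in range(n_whales):
        r1, r2 = rng.random(dim), rng.random(dim)
        A, C = 2 * a * r1 - a, 2 * r2
        if rng.random() < 0.5:
            if np.all(np.abs(A) < 1):           # exploit: encircle the current best whale
                X[i] = best - A * np.abs(C * best - X[i])
            else:                               # explore: move relative to a random whale
                rand = X[rng.integers(n_whales)]
                X[i] = rand - A * np.abs(C * rand - X[i])
        else:                                   # bubble-net spiral update around the best whale
            l = rng.uniform(-1, 1)
            D = np.abs(best - X[i])
            X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
        X[i] = np.clip(X[i], bounds[:, 0], bounds[:, 1])
        if fitness(X[i]) > fitness(best):
            best = X[i].copy()

print("best hyperparameters (learning rate, hidden units):", best)
```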
At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms have been adapted to predict brain tumors to some extent, some aspects still need enhancement, particularly accuracy, sensitivity, and the false positive and false negative rates, to improve the brain tumor prediction system symmetrically. Therefore, this work proposes an Extended Deep Learning Algorithm (EDLA) and measures performance parameters such as accuracy, sensitivity, and false positive and false negative rates. In addition, these iterated measures were analyzed by comparing the EDLA method with the Convolutional Neural Network (CNN) approach using the SPSS tool, and respective graphical illustrations are shown. Over ten iterations, the mean performance measures for the proposed EDLA algorithm were accuracy (97.665%), sensitivity (97.939%), false positive rate (3.012%), and false negative rate (3.182%), whereas the CNN achieved a mean accuracy of 94.287%, mean sensitivity of 95.612%, mean false positive rate of 5.328%, and mean false negative rate of 4.756%. These results show that the proposed EDLA method outperforms existing algorithms, including CNN, and ensures symmetrically improved parameters. Thus, the EDLA algorithm introduces novelty with respect to its performance and its particular activation function. The proposed method can be utilized effectively for precise and accurate brain tumor detection, and after modification it could be applied to various other medical diagnoses. If the quantity of dataset records is enormous, the method's computational capacity has to be scaled up accordingly.
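As an illustration of the iteration-wise comparison performed in SPSS, a paired t-test over per-iteration accuracies could look like the sketch below; the accuracy values are synthetic stand-ins generated around the reported means, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-ins for the ten per-iteration accuracies (the abstract reports
# means of 97.665% for EDLA and 94.287% for CNN; the spread here is assumed).
edla_acc = rng.normal(loc=97.665, scale=0.4, size=10)
cnn_acc = rng.normal(loc=94.287, scale=0.6, size=10)

# Paired comparison over the same ten iterations, analogous to the SPSS analysis.
t_stat, p_value = stats.ttest_rel(edla_acc, cnn_acc)
print(f"mean EDLA = {edla_acc.mean():.3f}%, mean CNN = {cnn_acc.mean():.3f}%")
print(f"paired t = {t_stat:.3f}, p = {p_value:.4f}")
```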
Many network presentation learning algorithms (NPLA) have originated in recent years from the process of random walks between nodes. Although these algorithms can obtain good embedding results, they also have some limitations. For instance, only the structural information of nodes is considered when such algorithms are constructed. Aiming at this issue, a label and community information-based network presentation learning algorithm (LC-NPLA) is proposed in this paper. First, the first-order neighbors of nodes are reconstructed using the community information and the label information of nodes. Next, the random walk strategy is improved by integrating the degree information and label information of nodes. Then, the node sequences obtained from random walk sampling are transformed into node representation vectors by the Skip-Gram model. Finally, experimental results on ten real-world networks demonstrate that the proposed algorithm has clear advantages in label classification, network reconstruction, and link prediction tasks compared with three benchmark algorithms.
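A minimal sketch of the overall pipeline (label- and degree-biased random walks fed into a Skip-Gram model) is given below; the toy graph, the bias weighting, and the gensim parameters are assumptions rather than the exact LC-NPLA formulation.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

# Toy labelled graph standing in for a real-world network.
G = nx.karate_club_graph()
labels = {n: G.nodes[n]["club"] for n in G.nodes}  # node labels

def biased_walk(G, start, length=10):
    """Random walk whose step probabilities favour high-degree neighbours that
    share the current node's label (an assumed form of the bias)."""
    walk = [start]
    for _ in range(length - 1):
        cur = walk[-1]
        nbrs = list(G.neighbors(cur))
        if not nbrs:
            break
        weights = [G.degree(n) * (2.0 if labels[n] == labels[cur] else 1.0)
                   for n in nbrs]
        walk.append(random.choices(nbrs, weights=weights, k=1)[0])
    return [str(n) for n in walk]

walks = [biased_walk(G, n) for n in G.nodes for _ in range(10)]

# Skip-Gram (sg=1) turns the sampled walk sequences into node representation vectors.
model = Word2Vec(walks, vector_size=64, window=5, sg=1, min_count=0, epochs=5)
print(model.wv["0"][:5])
```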
In the post-genomic biology era, the reconstruction of gene regulatory networks from microarray gene expression data is very important for understanding the underlying biological system, and it has been a challenging task in bioinformatics. The Bayesian network model has been used to reconstruct gene regulatory networks because of its advantages, but how to determine the network structure and parameters still needs to be explored. This paper proposes a two-stage structure learning algorithm that integrates an immune evolution algorithm to build a Bayesian network. The new algorithm is evaluated using both simulated and yeast cell cycle data. The experimental results indicate that the proposed algorithm can recover many of the known regulatory relationships reported in the literature and predict unknown ones with high validity and accuracy.
Finding reasonable structures from bulky data is one of the difficulties in modeling Bayesian networks (BNs), and overcoming it is also necessary for promoting the application of BNs. This paper proposes an immune algorithm based method (BN-IA) for learning the BN structure using the idea of vaccination. Furthermore, the methods for extracting effective vaccines from the local optimal structure and from root nodes are described in detail. Finally, simulation studies are carried out with the helicopter convertor BN model and the car start BN model. The comparison results show that the proposed vaccines and the BN-IA can learn the BN structure effectively and efficiently.
A new method to evaluate the fitness of Bayesian networks against observed data is provided. The main advantage of this criterion is that it is suitable for both complete and incomplete cases, whereas existing criteria are not; moreover, it greatly facilitates the computation. In order to reduce the search space, the notion of equivalence classes proposed by David Chickering is adopted. Instead of using that method directly, the novel criterion, variable ordering, and equivalence classes are combined, and the proposed method avoids some problems caused by the previous approach. A genetic algorithm, which offers the global convergence that most Bayesian network search methods lack, is then applied to search for a good model in this space. To speed up convergence, the genetic algorithm is combined with a greedy algorithm. Finally, simulation shows the validity of the proposed approach.
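To make the hybrid search concrete, here is a minimal sketch of a genetic algorithm whose offspring are refined by a greedy edge-flipping pass over candidate structures; the scoring stub, the ordering-based upper-triangular encoding, and all GA settings are illustrative assumptions rather than the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vars = 5  # number of variables in the network

# Hypothetical scoring stub; in the paper this is the proposed fitness criterion
# evaluated against the (possibly incomplete) observed data.
def score(adj):
    target = np.triu(np.ones((n_vars, n_vars), dtype=int), k=1)  # assumed "true" structure
    return -np.abs(adj - target).sum() - 0.1 * adj.sum()         # fit minus a complexity penalty

def greedy_refine(adj):
    """One greedy pass: flip any single edge whose flip improves the score."""
    best = adj.copy()
    for i in range(n_vars):
        for j in range(i + 1, n_vars):   # a fixed variable ordering keeps the graph acyclic
            cand = best.copy()
            cand[i, j] = 1 - cand[i, j]
            if score(cand) > score(best):
                best = cand
    return best

# Genetic algorithm over upper-triangular adjacency matrices.
pop = [np.triu(rng.integers(0, 2, (n_vars, n_vars)), k=1) for _ in range(20)]
for gen in range(30):
    pop.sort(key=score, reverse=True)
    parents, children = pop[:10], []
    for _ in range(10):
        a, b = rng.choice(10, size=2, replace=False)
        mask = np.triu(rng.integers(0, 2, (n_vars, n_vars)), k=1)    # uniform crossover
        child = np.where(mask == 1, parents[a], parents[b])
        flip = np.triu((rng.random((n_vars, n_vars)) < 0.05).astype(int), k=1)
        child = np.abs(child - flip)                                  # random edge mutation
        children.append(greedy_refine(child))                         # greedy refinement speeds up convergence
    pop = parents + children

best = max(pop, key=score)
print("best structure score:", score(best))
```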
Nowadays, Remote Sensing (RS) techniques are used for earth observation and for the detection of soil types with high accuracy and reliability. These techniques provide a perspective view at spatial resolution and aid in the instantaneous measurement of soil minerals and their characteristics. A few challenges are present in soil classification using image enhancement, such as locating and plotting soil boundaries, slopes, hazardous areas, drainage conditions, land use, and vegetation. Traditional approaches involve drawbacks such as manual involvement, which results in inaccuracy due to human interference, time consumption, and inconsistent prediction. To overcome these drawbacks and improve the predictive analysis of soil characteristics, we propose a Hybrid Deep Learning improved BAT optimization algorithm (HDIB) for soil classification using remote sensing hyperspectral features. In HDIB, we propose a spontaneous BAT optimization algorithm for the extraction of both spectral and spatial features by choosing pure pixels from the Hyper Spectral (HS) image. Spectral-spatial vectors used as training illustrations are attained by merging the spatial and spectral vectors by means of a priority stacking methodology. Then, a recurrent Deep Learning (DL) Neural Network (NN) is used for classifying the HS images, considering the Pavia University, Salinas, and Tamil Nadu Hill Scene datasets, which in turn improves the reliability of classification. Finally, the performance of the proposed HDIB based soil classifier is compared and analyzed with existing methodologies such as the Single Layer Perceptron (SLP), Convolutional Neural Networks (CNN), and Deep Metric Learning (DML), and it shows improved classification accuracies of 99.87%, 98.34%, and 99.9% for the Tamil Nadu Hills, Pavia University, and Salinas scene datasets, respectively.
With the rapid development of sports, the number of sports images has increased dramatically. Intelligent and automatic processing and analysis of moving images are significant: they not only allow users to quickly search and access moving images but also help staff store and manage moving image data, contributing to the intelligent development of the sports industry. In this paper, a method of table tennis identification and positioning based on a convolutional neural network is proposed, which addresses the poor adaptability of identification and positioning methods based on color and contour features across various environments. At the same time, the learning methods and techniques for table tennis detection, positioning, and trajectory prediction are studied. A deep learning framework for recognition learning of rotating, flying table tennis is put forward. The mechanisms and methods of positioning, trajectory prediction, and intelligent automatic processing of moving images are studied, and the self-built datasets are used for training and verification.
Artificial neural networks (ANNs) have led to landmark changes in many fields, but they still differ significantly from the mechanisms of real biological neural networks and face problems such as high computing costs and excessive computational power demands. Spiking neural networks (SNNs) provide a new approach, combined with brain-like science, to improve the computational energy efficiency, computational architecture, and biological credibility of current deep learning applications. In the early stages of development, poor performance hindered the application of SNNs in real-world scenarios. In recent years, SNNs have made great progress in computational performance and practicability compared with earlier research results and are continuously producing significant results. Although there is already a large body of literature on SNNs, a comprehensive review of SNNs from the perspective of improving performance and practicality, and incorporating the latest research results, is still lacking. Starting from this issue, this paper elaborates on SNNs along their complete usage process, including network construction, data processing, model training, development, and deployment, aiming to provide more comprehensive and practical guidance to promote the development of SNNs. The connotation and development status of SNN computing are therefore reviewed systematically and comprehensively from four aspects: composition structure, datasets, learning algorithms, and software/hardware development platforms. Then the development characteristics of SNNs in intelligent computing are summarized, the current challenges of SNNs are discussed, and future development directions are prospected. Our research shows that, in the fields of machine learning and intelligent computing, SNNs have network scale and performance comparable to ANNs and the ability to tackle large datasets and a variety of tasks. The advantages of SNNs over ANNs in terms of energy efficiency and spatial-temporal data processing have been more fully exploited, and the development of programming and deployment tools has lowered the threshold for using SNNs. SNNs show broad development prospects for brain-like computing.
Web application fingerprint recognition is an effective security technology designed to identify and classify web applications, thereby enhancing the detection of potential threats and attacks. Traditional fingerprint recognition methods, which rely on preannotated feature matching, face inherent limitations due to the ever-evolving nature and diverse landscape of web applications. In response to these challenges, this work proposes an innovative web application fingerprint recognition method founded on clustering techniques. The method involves extensive data collection from the Tranco List, employing adjusted feature selection built upon Wappalyzer and noise reduction through truncated SVD dimensionality reduction. The core of the methodology lies in the application of the unsupervised OPTICS clustering algorithm, eliminating the need for preannotated labels. By transforming web applications into feature vectors and leveraging clustering algorithms, our approach accurately categorizes diverse web applications, providing comprehensive and precise fingerprint recognition. The experimental results, which are obtained on a dataset featuring various web application types, affirm the efficacy of the method, demonstrating its ability to achieve high accuracy and broad coverage. This novel approach not only distinguishes between different web application types effectively but also demonstrates superiority in terms of classification accuracy and coverage, offering a robust solution to the challenges of web application fingerprint recognition.
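A minimal sketch of the dimensionality-reduction and clustering core of such a pipeline, using scikit-learn's TruncatedSVD and OPTICS, is shown below; the synthetic feature matrix and parameter values are assumptions standing in for the Wappalyzer-derived features.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import OPTICS

rng = np.random.default_rng(3)
# Stand-in feature matrix: rows are web applications, columns are sparse
# fingerprint features (the real data comes from the Tranco List).
X = (rng.random((300, 200)) < 0.05).astype(float)

# Noise reduction via truncated SVD, then unsupervised OPTICS clustering,
# mirroring the pipeline described above (parameter values are assumptions).
X_reduced = TruncatedSVD(n_components=20, random_state=0).fit_transform(X)
cluster_labels = OPTICS(min_samples=5, metric="euclidean").fit_predict(X_reduced)

n_clusters = len(set(cluster_labels)) - (1 if -1 in cluster_labels else 0)
print(f"{n_clusters} clusters found, {np.sum(cluster_labels == -1)} points flagged as noise")
```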
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. We use this methodology to tackle an important issue with data poisoning attacks in the context of Bayesian networks: with regard to four different forms of data poisoning attacks, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm. In doing so, we explore the complexity of this area and offer workable methods for identifying and mitigating these subtle threats. Additionally, our research investigates one particular use case, the "Visit to Asia Network," and explores the practical consequences of using uncertainty as a way to spot cases of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks, and the proposed latent-based framework proves to be sensitive in detecting malicious data poisoning attacks in the context of stream data.
This paper deals with deriving the properties of an updated neural network model that is exploited to identify an unknown nonlinear system via the standard gradient learning algorithm. The convergence of this algorithm for online training of three-layer neural networks in a stochastic environment is studied. A special case is considered in which the unknown nonlinearity can be approximated exactly by some neural network with a nonlinear activation function in its output layer. To analyze the asymptotic behavior of the learning processes, the so-called Lyapunov-like approach is utilized. As the Lyapunov function, the expected value of the squared approximation error, depending on the network parameters, is chosen. Within this approach, sufficient conditions guaranteeing the convergence of the learning algorithm with probability 1 are derived. Simulation results are presented to support the theoretical analysis.
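In the notation sketched below (w for the network parameters, (x_t, d_t) for a random input/target pair, and \hat{y}(x, w) for the three-layer network output), the chosen Lyapunov function is the expected squared approximation error; the update and step-size conditions shown are the usual form such analyses take and are stated here as assumptions, not as the paper's exact conditions.

```latex
% Sketch of the setup: w - network parameters, (x_t, d_t) - random input/target pair at
% step t, \hat{y}(x, w) - output of the three-layer network, \gamma_t - learning rate.
%
% Lyapunov function: expected squared approximation error
V(w) \;=\; \mathbb{E}\!\left[\bigl(d - \hat{y}(x, w)\bigr)^{2}\right]
%
% Standard online gradient learning step
w_{t+1} \;=\; w_{t} \;-\; \gamma_{t}\,\nabla_{w}\bigl(d_{t} - \hat{y}(x_{t}, w_{t})\bigr)^{2}
%
% Convergence with probability 1 is then obtained by bounding the drift
% \mathbb{E}[V(w_{t+1}) \mid w_{t}] - V(w_{t}) under step-size conditions of the usual
% Robbins-Monro type, e.g. \sum_t \gamma_t = \infty and \sum_t \gamma_t^{2} < \infty (assumed here).
```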
The fault diagnosis model for FMS based on multi-layer feedforward neural networks is discussed. An improved BP algorithm, a tactic for initial value selection based on a genetic algorithm, and a method of network structure optimization are presented for training this model. An ANN (artificial neural network) fault diagnosis model for the robot in FMS was built with the new algorithm. The result is superior to that of the traditional algorithm.
Additive manufacturing (AM), also known as three-dimensional printing, is gaining increasing attention from academia and industry due to the unique advantages it has in comparison with traditional subtractive manufacturing. However, AM processing parameters are difficult to tune, since they can exert a huge impact on the printed microstructure and on the performance of the subsequent products. It is a difficult task to build a process-structure-property-performance (PSPP) relationship for AM using traditional numerical and analytical models. Today, machine learning (ML) methods have been demonstrated to be a valid way to perform complex pattern recognition and regression analysis without an explicit need to construct and solve the underlying physical models. Among ML algorithms, the neural network (NN) is the most widely used model due to the large datasets that are currently available, strong computational power, and sophisticated algorithm architectures. This paper overviews the progress of applying the NN algorithm to several aspects of the whole AM chain, including model design, in situ monitoring, and quality evaluation. Current challenges in applying NNs to AM and potential solutions for these problems are then outlined. Finally, future trends are proposed in order to provide an overall discussion of this interdisciplinary area.
To increase network throughput and improve the user experience, a Q-learning based Multi-Service Network Selection Game (QSNG) strategy is proposed. The strategy obtains a multi-service network utility function through fuzzy inference and comprehensive attribute evaluation and uses it as the Q-learning reward. Users predict the payoff of network selection strategies through a game algorithm, avoiding access to heavily loaded networks. At the same time, a binary exponential backoff algorithm is used to reduce the probability of multiple users concurrently accessing the same network. Simulation results show that the proposed strategy can adaptively switch to the most suitable network according to users' QoS requirements and price preferences. Compared with the Reinforcement Learning with Network-Assisted Feedback (RLNF) strategy and the Radio Network Selection Games (RSG) strategy, the proposed strategy reduces the total number of handovers by 80% and 60%, respectively, improves network throughput by 7% and 8%, respectively, and guarantees the fairness of the system.
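A minimal sketch of the Q-learning core of such a strategy appears below; the candidate networks, the utility stand-in for the fuzzy-inference reward, the congestion dynamics, and the backoff thresholds are all illustrative assumptions.

```python
import random

networks = ["WLAN", "LTE", "5G"]           # candidate access networks (assumed set)
services = ["voice", "video", "data"]       # service classes (assumed set)
Q = {(s, n): 0.0 for s in services for n in networks}
alpha, gamma_, epsilon = 0.1, 0.9, 0.1

def utility(network, load):
    """Stand-in for the fuzzy-inference / attribute-evaluation utility that QSNG
    uses as the Q-learning reward (the weighting here is an assumption)."""
    base = {"WLAN": 0.6, "LTE": 0.7, "5G": 0.9}[network]
    return base * (1.0 - load[network])

load = {"WLAN": 0.3, "LTE": 0.5, "5G": 0.6}
backoff = {n: 1 for n in networks}          # binary exponential backoff windows

for step in range(2000):
    s = random.choice(services)
    # epsilon-greedy selection over the learned Q-values
    if random.random() < epsilon:
        n = random.choice(networks)
    else:
        n = max(networks, key=lambda net: Q[(s, net)])
    if load[n] > 0.8:
        # binary exponential backoff: skip this attempt with probability 1 - 1/window,
        # doubling the window after every congested attempt
        window = backoff[n]
        backoff[n] = min(window * 2, 64)
        if random.random() > 1.0 / window:
            continue
    else:
        backoff[n] = 1                      # uncongested access resets the window
    r = utility(n, load)
    best_next = max(Q[(s, net)] for net in networks)
    Q[(s, n)] += alpha * (r + gamma_ * best_next - Q[(s, n)])
    # crude congestion dynamics: the chosen network becomes slightly more loaded
    load[n] = min(0.95, load[n] + 0.01)
    for other in networks:
        if other != n:
            load[other] = max(0.1, load[other] - 0.005)

print({k: round(v, 2) for k, v in Q.items()})
```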
A Volterra feedforward neural network with nonlinear interconnections and a related homotopy learning algorithm are proposed in this paper. It is shown that the Volterra neural network and the homotopy learning algorithm have significantly greater potential in nonlinear approximation ability, convergence speed, and global optimization than classical neural networks and the standard BP algorithm; related computer simulations and theoretical analysis are given as well.
Big data analytic techniques associated with machine learning algorithms are playing an increasingly important role in various application fields, including stock market investment. However, few studies have focused on forecasting daily stock market returns, especially when using powerful machine learning techniques, such as deep neural networks (DNNs), to perform the analyses. DNNs employ various deep learning algorithms based on the combination of network structure, activation function, and model parameters, with their performance depending on the format of the data representation. This paper presents a comprehensive big data analytics process to predict the daily return direction of the SPDR S&P 500 ETF (ticker symbol: SPY) based on 60 financial and economic features. DNNs and traditional artificial neural networks (ANNs) are then deployed over the entire preprocessed but untransformed dataset, along with two datasets transformed via principal component analysis (PCA), to predict the daily direction of future stock market index returns. While controlling for overfitting, a pattern for the classification accuracy of the DNNs is detected and demonstrated as the number of the hidden layers increases gradually from 12 to 1000. Moreover, a set of hypothesis testing procedures are implemented on the classification, and the simulation results show that the DNNs using two PCA-represented datasets give significantly higher classification accuracy than those using the entire untransformed dataset, as well as several other hybrid machine learning algorithms. In addition, the trading strategies guided by the DNN classification process based on PCA-represented data perform slightly better than the others tested, including in a comparison against two standard benchmarks.
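A minimal sketch of the PCA-plus-neural-network classification step is given below using scikit-learn; the synthetic features, labels, and layer sizes are assumptions standing in for the 60-feature SPY dataset and the paper's DNN configurations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
# Synthetic stand-in for the 60 financial/economic features and the up/down label
# of next-day index returns (the real study uses SPY market data).
X = rng.normal(size=(2500, 60))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=2500) > 0).astype(int)

# Chronological split, as is usual for return-direction forecasting.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

# PCA-transform the features, then classify the daily direction with a deep MLP
# (the PCA dimensionality and layer sizes are assumptions, not the paper's settings).
pca = PCA(n_components=15).fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(64, 64, 64), max_iter=500, random_state=0)
clf.fit(pca.transform(X_train), y_train)

pred = clf.predict(pca.transform(X_test))
print("directional accuracy:", accuracy_score(y_test, pred))
```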
The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters, which have a large influence on the performance of the network. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, which reduces the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We build a database of six objects for experimental purposes. Experimental results demonstrate that our method achieves superior performance on the optimized robot object recognition and grasping tasks.
To accelerate supervised learning by the SpikeProp algorithm with the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (the heuristic rule, the delta-delta rule, and the delta-bar-delta rule), which are used to speed up training in artificial neural networks, are used to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated through four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods are able to speed up convergence of the SNN compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is used in combination with the momentum term, the two modifications balance each other in a beneficial way to accomplish rapid and steady convergence. Among the three learning rate adaptation methods, the delta-bar-delta rule performs the best. The delta-bar-delta method with momentum has the fastest convergence rate, the greatest stability of the training process, and the maximum accuracy of network learning. The proposed algorithms are simple and efficient, and consequently valuable for practical applications of SNNs.
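A minimal sketch of the delta-bar-delta rule combined with a momentum term (the best-performing combination reported above) is shown below; the constants and the toy quadratic objective are illustrative assumptions.

```python
import numpy as np

def delta_bar_delta_step(w, grad, lr, delta_bar,
                         kappa=0.01, phi=0.1, theta=0.7, momentum=None, velocity=None):
    """One delta-bar-delta update: each weight keeps its own learning rate, which grows
    additively when the current gradient agrees in sign with the running average of past
    gradients and shrinks multiplicatively when it disagrees.
    (Constants kappa/phi/theta are typical illustrative values, not the paper's.)"""
    agree = grad * delta_bar
    lr = np.where(agree > 0, lr + kappa, lr)          # same sign -> grow linearly
    lr = np.where(agree < 0, lr * (1 - phi), lr)      # sign flip -> shrink geometrically
    delta_bar = (1 - theta) * grad + theta * delta_bar
    step = -lr * grad
    if momentum is not None:
        velocity = momentum * velocity + step         # optional momentum term
        step = velocity
    return w + step, lr, delta_bar, velocity

# Tiny demonstration on a quadratic "loss" with gradient 2*(w - 3).
w = np.array([0.0])
lr = np.full_like(w, 0.05)
delta_bar = np.zeros_like(w)
velocity = np.zeros_like(w)
for _ in range(50):
    grad = 2 * (w - 3.0)
    w, lr, delta_bar, velocity = delta_bar_delta_step(
        w, grad, lr, delta_bar, momentum=0.9, velocity=velocity)
print("w ->", w, "per-weight learning rate ->", lr)
```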
Focusing on various BP algorithms with a variable learning rate based on the network system error gradient, a modified learning strategy for training nonlinear network models is developed in which both the incremental and the decremental factors of the network learning rate are adjusted adaptively and dynamically. The golden section law is put forward to build a relationship between the network training parameters, and a series of data from an existing model is used to train and test the network parameters. By evaluating network performance with respect to convergence speed and prediction precision, the effectiveness of the proposed learning strategy is illustrated.
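A minimal sketch of such an adaptive incremental/decremental adjustment is given below; tying the decremental factor to the golden section ratio is only one possible reading of the strategy and is marked as an assumption in the code.

```python
GOLDEN = 0.618  # golden section ratio relating the two adjustment factors (assumed form)

def adaptive_learning_rate(lr, prev_error, curr_error, incr=1.05):
    """Grow the learning rate while the training error keeps falling and shrink it when
    the error rises; setting the decremental factor at the golden section ratio is an
    assumption about how the golden section law relates the training parameters."""
    decr = GOLDEN
    return lr * incr if curr_error < prev_error else lr * decr

# Tiny usage example with a made-up error trace.
lr, errors = 0.1, [1.00, 0.80, 0.85, 0.60]
for prev, curr in zip(errors, errors[1:]):
    lr = adaptive_learning_rate(lr, prev, curr)
    print(f"error {prev:.2f} -> {curr:.2f}: lr = {lr:.4f}")
```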