Journal Articles
557 articles found
Artificial Neural Network Modeling for Predicting Thermal Conductivity of EG/Water-Based CNC Nanofluid for Engine Cooling Using Different Activation Functions
1
Authors: Md Munirul Hasan, Md Mustafizur Rahman, Mohammad Saiful Islam, Wong Hung Chan, Yasser M. Alginahi, Muhammad Nomani Kabir, Suraya Abu Bakar, Devarajan Ramasamy. Frontiers in Heat and Mass Transfer (EI), 2024, Issue 2, pp. 537-556 (20 pages)
A vehicle engine cooling system is of utmost importance to ensure that the engine operates in a safe temperature range. In most radiators used to cool an engine, water serves as the cooling fluid. A radiator's heat-transfer performance is significantly influenced by the incorporation of nanoparticles into the cooling water; the concentration and uniformity of the nanoparticle distribution are the two major factors for practical use of nanofluids, and the shape and size of the nanoparticles also have a great impact. This study develops an artificial neural network (ANN) model for predicting the thermal conductivity of an ethylene glycol (EG)/water-based crystalline nanocellulose (CNC) nanofluid for cooling an internal combustion engine. ANNs with different activation functions in the hidden layer are implemented to find the best model, and their accuracies are analyzed for different nanofluid concentrations and temperatures. Levenberg–Marquardt optimization is used in the training phase together with the Tansig and Logsig activation functions. Results from the training, testing, and validation phases are presented to identify the network with the highest accuracy. The best result was obtained with Tansig, with a correlation of 0.99903 and an error of 3.7959×10^(-8); Logsig is also a good model, with a correlation of 0.99890 and an error of 4.9218×10^(-8). Thus our ANN with the Tansig and Logsig functions demonstrates a high correlation between the actual and predicted outputs.
Keywords: artificial neural network; activation function; thermal conductivity; nanocellulose
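For readers unfamiliar with the MATLAB-style activation names used in this entry, the following NumPy sketch defines Tansig and Logsig and runs them through a toy one-hidden-layer regression network. The layer sizes, weights, and inputs are illustrative only, not the authors' trained model.

```python
import numpy as np

def tansig(x):
    # MATLAB-style hyperbolic tangent sigmoid; identical to np.tanh(x)
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def logsig(x):
    # Logistic sigmoid, output in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(X, W1, b1, W2, b2, act):
    # One hidden layer with the chosen activation; linear output,
    # as is typical for regression targets like thermal conductivity
    return act(X @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))            # e.g. [concentration, temperature]
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

y_tansig = mlp_forward(X, W1, b1, W2, b2, tansig)
y_logsig = mlp_forward(X, W1, b1, W2, b2, logsig)
print(y_tansig.shape, y_logsig.shape)
```

Tansig is numerically identical to np.tanh, so the two names differ only by convention.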
Multistability of delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions
2
Authors: 黄玉娇, 胡海根. Chinese Physics B (SCIE EI CAS CSCD), 2015, Issue 12, pp. 271-279 (9 pages)
In this paper, the multistability issue is discussed for delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions. Based on a fixed-point theorem and a stability definition, sufficient criteria are established for the existence and stability of multiple equilibria of complex-valued recurrent neural networks. The number of stable equilibria is larger than that of real-valued recurrent neural networks, which can be used to achieve high-capacity associative memories. One numerical example is provided to show the effectiveness and superiority of the presented results.
Keywords: complex-valued recurrent neural network; discontinuous real-imaginary-type activation function; multistability; delay
Coexistence and local Mittag–Leffler stability of fractional-order recurrent neural networks with discontinuous activation functions
3
Authors: Yu-Jiao Huang, Shi-Jun Chen, Xu-Hua Yang, Jie Xiao. Chinese Physics B (SCIE EI CAS CSCD), 2019, Issue 4, pp. 131-140 (10 pages)
In this paper, coexistence and local Mittag–Leffler stability of fractional-order recurrent neural networks with discontinuous activation functions are addressed. Because of the discontinuity of the activation function, the Filippov solution of the neural network is defined. Based on Brouwer's fixed-point theorem and the definition of Mittag–Leffler stability, sufficient criteria are established to ensure the existence of (2k+3)^n (k ≥ 1) equilibrium points, among which (k+2)^n are locally Mittag–Leffler stable. Compared with existing results, the derived results cover local Mittag–Leffler stability of both fractional-order and integer-order recurrent neural networks. Meanwhile, discontinuous networks may have higher storage capacity than continuous ones. Two numerical examples are elaborated to substantiate the effectiveness of the theoretical results.
Keywords: fractional-order recurrent neural network; local Mittag–Leffler stability; discontinuous activation function
Finite-time Mittag-Leffler synchronization of fractional-order delayed memristive neural networks with parameters uncertainty and discontinuous activation functions
4
Authors: Chong Chen, Zhixia Ding, Sai Li, Liheng Wang. Chinese Physics B (SCIE EI CAS CSCD), 2020, Issue 4, pp. 127-138 (12 pages)
Finite-time Mittag-Leffler synchronization is investigated for fractional-order delayed memristive neural networks (FDMNN) with parameter uncertainty and discontinuous activation functions, with the relevant results obtained under the Filippov framework. Firstly, a novel feedback controller, which includes discontinuous functions and time delays, is proposed. Secondly, conditions for finite-time Mittag-Leffler synchronization of FDMNN are established using properties of fractional-order calculus and inequality analysis techniques, and the upper bound of the settling time for synchronization is accurately estimated. In addition, by selecting appropriate parameters for the designed controller and utilizing the comparison theorem for fractional-order systems, global asymptotic synchronization is achieved as a corollary. Finally, a numerical example is given to indicate the correctness of the obtained conclusions.
Keywords: fractional-order delayed memristive neural networks (FDMNN); parameter uncertainty; discontinuous activation functions; finite-time Mittag-Leffler synchronization
Introducing atmospheric angular momentum into prediction of length of day change by generalized regression neural network model (Cited by 9)
5
Authors: 王琪洁, 杜亚男, 刘建. Journal of Central South University (SCIE EI CAS), 2014, Issue 4, pp. 1396-1401 (6 pages)
The general regression neural network (GRNN) model is proposed to model and predict length-of-day (LOD) change, which has very complicated time-varying characteristics. Since the axial atmospheric angular momentum (AAM) function is tightly correlated with LOD changes, it is introduced into the GRNN prediction model to further improve prediction accuracy. Experiments with observational LOD data show that the prediction accuracy of the GRNN model is 6.1% higher than that of a BP network, and after introducing the AAM function the improvement rises to 14.7%. The results show that the GRNN with the AAM function is an effective prediction method for LOD changes.
Keywords: general regression neural network (GRNN); length of day; atmospheric angular momentum (AAM) function; prediction
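A GRNN is essentially Gaussian-kernel regression, so its core can be sketched in a few lines. The data below is a synthetic stand-in (a smooth series with a second input channel playing the role of the AAM function), not real LOD observations.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # General regression neural network: a Gaussian-kernel weighted
    # average of training targets (Nadaraya-Watson kernel regression);
    # the smoothing width sigma is the model's only hyperparameter.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Synthetic stand-in: smooth target with two correlated input channels
rng = np.random.default_rng(1)
t = np.linspace(0.0, 6.0, 80)
X = np.stack([np.sin(t), np.cos(t)], axis=1)   # 2nd channel ~ AAM proxy
y = np.sin(t + 0.3)
idx = rng.permutation(80)
tr, te = idx[:60], idx[60:]
y_hat = grnn_predict(X[tr], y[tr], X[te], sigma=0.3)
print(np.abs(y_hat - y[te]).mean())
```

Because prediction is a closed-form weighted average, a GRNN has no iterative training phase, which is part of its appeal for short geodetic series.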
Fusion of Activation Functions: An Alternative to Improving Prediction Accuracy in Artificial Neural Networks
6
Authors: Justice Awosonviri Akodia, Clement K. Dzidonu, David King Boison, Philip Kisembe. World Journal of Engineering and Technology, 2024, Issue 4, pp. 836-850 (15 pages)
The purpose of this study was to address the challenges of prediction and classification accuracy in modeling Container Dwell Time (CDT) using Artificial Neural Networks (ANN), an objective driven by the suboptimal outcomes reported in previous studies. To achieve this, the study applied the Fusion of Activation Functions (FAFs) to a substantial dataset: 307,594 container records from the Port of Tema from 2014 to 2022, encompassing both import and transit containers. The RandomizedSearchCV algorithm from Python's Scikit-learn library was used to find the activation function yielding the best prediction accuracy. The results indicated that "ajaLT", a fusion of the Logistic and Hyperbolic Tangent activation functions, provided the best prediction accuracy, reaching a high of 82%. Despite these encouraging findings, it is crucial to recognize the study's limitations: further evaluation is necessary across different container types and port operations to ascertain the broader applicability and generalizability of these findings. The original value of this study lies in its innovative application of FAFs to CDT; unlike previous studies, it evaluates the method on prediction accuracy rather than training time, opening new avenues for applying FAFs to enhance prediction accuracy in CDT modeling, a previously underexplored area.
Keywords: artificial neural networks; Container Dwell Time; fusion of activation functions; RandomizedSearchCV algorithm; prediction accuracy
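The entry does not reproduce the exact "ajaLT" formula, so the sketch below shows one plausible fusion of the Logistic and Hyperbolic Tangent activations: a convex blend with an illustrative mixing weight `alpha`. Treat the functional form as an assumption, not the published definition.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def ajaLT(x, alpha=0.5):
    # HYPOTHETICAL fusion: convex blend of logistic and tanh.
    # The paper's exact "ajaLT" formula is not given in this listing;
    # alpha is an illustrative mixing weight in [0, 1].
    return alpha * logistic(x) + (1.0 - alpha) * np.tanh(x)

x = np.linspace(-4.0, 4.0, 9)
print(ajaLT(x))
```

Any such blend inherits smoothness and monotonicity from both parents, which is one reason fused activations remain drop-in replacements for either component.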
A Universal Activation Function for Deep Learning
7
Authors: Seung-Yeon Hwang, Jeong-Joon Kim. Computers, Materials & Continua (SCIE EI), 2023, Issue 5, pp. 3553-3569 (17 pages)
Recently, deep learning has achieved remarkable results in fields that require human cognitive ability, learning ability, and reasoning ability. Activation functions are very important because they provide artificial neural networks with the ability to learn complex patterns through nonlinearity. Various activation functions are being studied to solve problems such as vanishing gradients and dying nodes that may occur in the deep learning process. However, it takes a lot of time and effort for researchers to work with the existing activation functions. Therefore, in this paper, we propose a universal activation function (UA) so that researchers can easily create and apply various activation functions and improve the performance of neural networks. The UA can generate new types of activation functions, as well as functions like traditional activation functions, by properly adjusting three hyperparameters. The well-known Convolutional Neural Network (CNN) and benchmark datasets were used to evaluate the experimental performance of the proposed UA. We compared artificial neural networks using traditional activation functions against those using the UA, and also evaluated new activation functions generated by adjusting the UA's hyperparameters. The results showed that the classification performance of CNNs improved by up to 5% through the UA, although most configurations performed similarly to the traditional activation functions.
Keywords: deep learning; activation function; convolutional neural network; benchmark datasets; universal activation function
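The paper's exact UA formula is not reproduced in this abstract. As an illustration of the idea only, a three-hyperparameter family such as a·x·sigmoid(b·x)+c can already morph between familiar shapes: SiLU/Swish at a=b=1, c=0; near-ReLU for large b; near-linear for small b. This family is an assumption for illustration, not the published UA.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ua(x, a=1.0, b=1.0, c=0.0):
    # ILLUSTRATIVE three-hyperparameter family (NOT the paper's UA):
    #   a * x * sigmoid(b * x) + c
    # b large  -> ReLU-like;  b small -> near-linear;
    # a=b=1, c=0 -> the familiar SiLU/Swish shape.
    return a * x * sigmoid(b * x) + c

x = np.linspace(-3.0, 3.0, 7)
relu_like = ua(x, a=1.0, b=50.0)   # approaches max(0, x)
silu_like = ua(x)                  # SiLU/Swish
print(relu_like, silu_like)
```

The point of any such family is that one parameterized function, tuned per task, replaces a manual search over a menu of fixed activations.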
H_(∞)/Passive Synchronization of Semi-Markov Jump Neural Networks Subject to Hybrid Attacks via an Activation Function Division Approach
8
Authors: ZHANG Ziwei, SHEN Hao, SU Lei. Journal of Systems Science & Complexity (SCIE EI CSCD), 2024, Issue 3, pp. 1023-1036 (14 pages)
In this work, an H_(∞)/passive-based secure synchronization control problem is investigated for continuous-time semi-Markov neural networks subject to hybrid attacks, where the hybrid attacks are combinations of denial-of-service attacks and deception attacks, described by two groups of independent Bernoulli distributions. On this foundation, via Lyapunov stability theory and linear matrix inequality technology, H_(∞)/passive-based performance criteria for semi-Markov jump neural networks are obtained. Additionally, an activation function division approach for neural networks is adopted to further reduce the conservatism of the criteria. Finally, a simulation example is provided to verify the validity and feasibility of the proposed method.
Keywords: activation function division approach; deception attacks; denial-of-service attacks; H_(∞)/passive synchronization; semi-Markov jump neural networks
Global Exponential Periodicity of a Class of Recurrent Neural Networks with Non-Monotone Activation Functions and Time-Varying Delays
9
Authors: LI Biwen. Wuhan University Journal of Natural Sciences (CAS), 2009, Issue 6, pp. 475-480 (6 pages)
The global exponential periodicity and stability of a class of recurrent neural networks with non-monotone activation functions and time-varying delays are analyzed. For two sets of activation functions, algebraic criteria for ascertaining global exponential periodicity and global exponential stability of this class of recurrent neural networks are derived using the comparison principle and the theory of monotone operators. These conditions are easy to check in terms of system parameters. In addition, we provide a new and efficacious method for the qualitative analysis of various neural networks.
Keywords: recurrent neural networks; non-monotone activation functions; global exponential stability; comparison principle; monotone operator
Learning Specialized Activation Functions for Physics-Informed Neural Networks
10
Authors: Honghui Wang, Lu Lu, Shiji Song, Gao Huang. Communications in Computational Physics (SCIE), 2023, Issue 9, pp. 869-906 (38 pages)
Physics-informed neural networks (PINNs) are known to suffer from optimization difficulty. In this work, we reveal the connection between the optimization difficulty of PINNs and activation functions: PINNs exhibit high sensitivity to activation functions when solving PDEs with distinct properties. Existing works usually choose activation functions by inefficient trial and error. To avoid manual selection and to alleviate the optimization difficulty of PINNs, we introduce adaptive activation functions to search for the optimal function when solving different problems. We compare different adaptive activation functions and discuss their limitations in the context of PINNs. Furthermore, we tailor the idea of learning combinations of candidate activation functions to PINN optimization, which places higher requirements on the smoothness and diversity of the learned functions. This is achieved by removing activation functions that cannot provide higher-order derivatives from the candidate set and incorporating elementary functions with different properties according to prior knowledge about the PDE at hand. We further enhance the search space with adaptive slopes. The proposed adaptive activation function can be used to solve different PDE systems in an interpretable way, and its effectiveness is demonstrated on a series of benchmarks. Code is available at https://github.com/LeapLabTHU/AdaAFforPINNs.
Keywords: partial differential equations; deep learning; adaptive activation functions; physics-informed neural networks
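The "learning combinations of candidate activation functions" idea can be sketched as a softmax-weighted sum of smooth candidates with per-candidate adaptive slopes; the candidates are kept infinitely differentiable so the higher-order derivatives that PINN residuals need exist everywhere. The candidate set and plain-NumPy forward pass below are illustrative, not the authors' implementation.

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x))
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def combo_activation(x, weights, slopes):
    # Softmax-normalized weighted combination of smooth candidates,
    # each with its own learnable (adaptive) slope. All candidates are
    # C-infinity, so 2nd/3rd-order PDE derivatives are well defined.
    cands = [np.tanh, np.sin, softplus]
    w = np.exp(weights) / np.exp(weights).sum()
    return sum(wi * f(si * x) for wi, f, si in zip(w, cands, slopes))

x = np.linspace(-2.0, 2.0, 5)
y = combo_activation(x, weights=np.zeros(3), slopes=np.ones(3))
print(y)
```

In training, `weights` and `slopes` would be optimized jointly with the network parameters; here they are fixed at their neutral values for illustration.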
Robust stability of mixed Cohen–Grossberg neural networks with discontinuous activation functions
11
Authors: Cheng-De Zheng, Ye Liu, Yan Xiao. International Journal of Intelligent Computing and Cybernetics (EI), 2019, Issue 1, pp. 82-101 (20 pages)
Purpose: The purpose of this paper is to develop a method for the existence, uniqueness and globally robust stability of the equilibrium point for Cohen–Grossberg neural networks with time-varying delays, continuous distributed delays and a kind of discontinuous activation functions. Design/methodology/approach: Based on the Leray–Schauder alternative theorem and the chain rule, using a novel integral inequality dealing with monotone non-decreasing functions, the authors obtain a delay-dependent sufficient condition with less conservativeness for robust stability of the considered neural networks. Findings: It turns out that the authors' delay-dependent sufficient condition can be formulated in terms of linear matrix inequality conditions. Two examples show the effectiveness of the obtained results. Originality/value: The novelty of the proposed approach lies in dealing with a new kind of discontinuous activation functions by using the Leray–Schauder alternative theorem, the chain rule and a novel integral inequality on monotone non-decreasing functions.
Keywords: Cohen–Grossberg neural networks; discontinuous activation functions; Filippov solution; globally robust stability; Lyapunov–Krasovskii functional
GAAF: Searching Activation Functions for Binary Neural Networks Through Genetic Algorithm (Cited by 1)
12
Authors: Yanfei Li, Tong Geng, Samuel Stein, Ang Li, Huimin Yu. Tsinghua Science and Technology (SCIE EI CAS CSCD), 2023, Issue 1, pp. 207-220 (14 pages)
Binary neural networks (BNNs) show promising utilization in cost- and power-restricted domains such as edge devices and mobile systems, owing to their significantly lower computation and storage demands, but at the cost of degraded performance. To close the accuracy gap, in this paper we propose to add a complementary activation function (AF) ahead of the sign-based binarization, and rely on a genetic algorithm (GA) to automatically search for the ideal AFs. These AFs help extract extra information from the input data in the forward pass, while allowing improved gradient approximation in the backward pass. Fifteen novel AFs are identified through our GA-based search, and most of them show improved performance (up to 2.54% on ImageNet) when tested on different datasets and network models. Interestingly, periodic functions are identified as a key component of most of the discovered AFs, which rarely appear in human-designed AFs. Our method offers a novel approach for designing general and application-specific BNN architectures. GAAF will be released on GitHub.
Keywords: binary neural networks (BNNs); genetic algorithm; activation function
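The entry's two key ingredients, a complementary AF ahead of sign binarization and a smooth surrogate for the backward pass, can be sketched as follows. A periodic `sin` stands in for the GA-discovered functions, since the abstract reports periodic components as a key ingredient; the actual searched AFs are in the paper.

```python
import numpy as np

def binarize_with_af(x, af=np.sin):
    # Complementary activation applied ahead of sign-based binarization;
    # sin is a stand-in for the periodic AFs the GA search favored.
    return np.sign(af(x))

def surrogate_grad(x, af=np.sin, eps=1e-4):
    # Straight-through-style surrogate: differentiate the smooth
    # complementary AF (here via central differences) instead of the
    # non-differentiable sign in the backward pass.
    return (af(x + eps) - af(x - eps)) / (2.0 * eps)

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(binarize_with_af(x), surrogate_grad(x))
```

Because the AF reshapes activations before quantization, different inputs can land on different sides of zero than with a bare sign, which is where the extra forward-pass information comes from.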
Nonparametric Statistical Feature Scaling Based Quadratic Regressive Convolution Deep Neural Network for Software Fault Prediction
13
Authors: Sureka Sivavelu, Venkatesh Palanisamy. Computers, Materials & Continua (SCIE EI), 2024, Issue 3, pp. 3469-3487 (19 pages)
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, timely and accurate prediction remains a major challenge. To address it, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced, comprising two major processes: metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify relevant software metrics, measuring similarity with the Dice coefficient; this feature-selection step reduces the time complexity of fault prediction. With the selected metrics, software faults are then predicted using Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient, and the softstep activation function provides the final fault-prediction results. To minimize the error, the Nelder–Mead method is applied to solve the non-linear least-squares problems, so accurate classification results with minimum error are obtained at the output layer. Experimental evaluation is carried out with quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The results demonstrate the superior performance of the proposed SQADEN technique, with maximum accuracy, sensitivity and specificity by 3%, 3%, 2% and 3% and minimum time and space by 13% and 15% when compared with the two state-of-the-art methods.
Keywords: software defect prediction; feature selection; nonparametric statistical Torgerson-Gower scaling technique; quadratic censored regressive convolution deep neural network; softstep activation function; Nelder-Mead method
Large-scale self-normalizing neural networks
14
Authors: Zhaodong Chen, Weiqin Zhao, Lei Deng, Yufei Ding, Qinghao Wen, Guoqi Li, Yuan Xie. Journal of Automation and Intelligence, 2024, Issue 2, pp. 101-110 (10 pages)
Self-normalizing neural networks (SNN) regulate the activation and gradient flows through activation functions with the self-normalization property. As SNNs do not rely on norms computed from minibatches, they are more friendly to data parallelism, kernel fusion, and emerging architectures such as ReRAM-based accelerators. However, existing SNNs have mainly demonstrated their effectiveness on toy datasets and fall short in accuracy on large-scale tasks like ImageNet; they lack the strong normalization, regularization, and expressive power required for wider, deeper models and larger-scale tasks. To enhance the normalization strength, this paper introduces a comprehensive and practical definition of the self-normalization property in terms of the stability and attractiveness of the statistical fixed points. It is comprehensive because it jointly considers all the fixed points used by existing studies: the first and second moments of the forward activation and the expected Frobenius norm of the backward gradient. The practicality comes from analytical equations, derived from theoretical analysis of the forward and backward signals, for assessing the stability and attractiveness of each fixed point. The proposed definition is applied to a meta activation function inspired by prior research, leading to a stronger self-normalizing activation function named "bi-scaled exponential linear unit with backward standardized" (bSELU-BSTD). We provide both theoretical and empirical evidence that it is superior to existing approaches. To enhance regularization and expressive power, we further propose scaled-Mixup and channel-wise scale & shift. With these three techniques, our approach achieves 75.23% top-1 accuracy on ImageNet with Conv MobileNet V1, surpassing existing self-normalizing activation functions. To the best of our knowledge, this is the first SNN to achieve accuracy comparable to batch normalization on ImageNet.
Keywords: self-normalizing neural network; mean-field theory; block dynamical isometry; activation function
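The self-normalization property this entry builds on can be checked empirically with the standard SELU constants (the paper's stronger bSELU-BSTD variant uses different parameters not reproduced here): unit-Gaussian pre-activations map to activations with mean near 0 and variance near 1, which is exactly the statistical fixed point the definition above formalizes.

```python
import numpy as np

# Standard SELU constants (Klambauer et al., 2017); NOT the paper's
# bSELU-BSTD parameters, which are not given in this listing.
ALPHA, LAM = 1.6732632423543772, 1.0507009873554805

def selu(x):
    return LAM * np.where(x > 0, x, ALPHA * np.expm1(x))

# Fixed-point check: feed unit-Gaussian pre-activations through SELU
# and observe that mean ~ 0 and variance ~ 1 are preserved.
rng = np.random.default_rng(0)
z = rng.normal(size=1_000_000)
a = selu(z)
print(round(a.mean(), 3), round(a.var(), 3))
```

Because (0, 1) is an attractive fixed point of the moment map, stacking many such layers keeps activations normalized without any batch statistics, which is the property that makes SNNs minibatch-independent.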
Adaptive proportional integral differential control based on radial basis function neural network identification of a two-degree-of-freedom closed-chain robot
15
Authors: 陈正洪, 王勇, 李艳. Journal of Shanghai University (English Edition) (CAS), 2008, Issue 5, pp. 457-461 (5 pages)
A closed-chain robot has several advantages over an open-chain robot, such as high mechanical rigidity, high payload and high precision, and accurate trajectory control is essential in practical use. This paper presents an adaptive proportional-integral-differential (PID) control algorithm based on a radial basis function (RBF) neural network for trajectory tracking of a two-degree-of-freedom (2-DOF) closed-chain robot. In this scheme, an RBF neural network is used to approximate the unknown nonlinear dynamics of the robot, while the PID parameters are adjusted online to achieve high precision. Simulation results show that the control algorithm accurately tracks 2-DOF closed-chain robot trajectories, and that system robustness and tracking performance are superior to the classic PID method.
Keywords: closed-chain robot; radial basis function (RBF) neural network; adaptive proportional-integral-differential (PID) control; identification; neural network
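The two building blocks of this scheme, Gaussian RBF responses for identifying the plant and a PID law whose gains would be adapted online, can be sketched independently. The centers, widths, gains, and error values below are illustrative; the paper's online gain-adaptation rule is not shown.

```python
import numpy as np

def rbf_layer(x, centers, width):
    # Gaussian radial basis responses used to identify plant dynamics
    d2 = ((x[None, :] - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def pid_step(err, prev_err, integ, kp, ki, kd, dt):
    # Classic PID law; in the paper's scheme kp/ki/kd would be tuned
    # online using the RBF model's sensitivity information (not shown).
    integ += err * dt
    deriv = (err - prev_err) / dt
    return kp * err + ki * integ + kd * deriv, integ

centers = np.linspace(-1.0, 1.0, 5)[:, None]   # illustrative centers
phi = rbf_layer(np.array([0.2]), centers, width=0.5)
u, integ = pid_step(0.3, 0.25, 0.0, kp=2.0, ki=0.5, kd=0.1, dt=0.01)
print(phi.shape, u)
```

In the full controller, `phi` would feed a linear output layer whose weights are updated each cycle so the network tracks the robot's unknown dynamics.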
Learning Performance of Linear and Exponential Activity Function with Multi-layered Neural Networks
16
Authors: Betere Job Isaac, Hiroshi Kinjo, Kunihiko Nakazono, Naoki Oshiro. Journal of Electrical Engineering, 2018, Issue 5, pp. 289-294 (6 pages)
This paper presents a study on improving the performance of MLNNs (multi-layer neural networks) with an activity function for multi-logic training patterns. Our model network has L hidden layers, two inputs, and three to six outputs, trained with a BP (backpropagation) neural network. We used the logic functions XOR (exclusive OR), OR, AND, NAND (not AND), NXOR (not exclusive OR) and NOR (not OR) as multi-logic teacher signals to evaluate the training performance of MLNNs with an activity function for information and data enlargement in signal processing (synaptic divergence state). We used four activity functions, one of which we modified and named the L&exp. function, as it gave the highest training ability compared with the original Sigmoid, ReLU and Step activity functions during simulation and training. Finally, we propose the L&exp. function as well suited to MLNNs; given its training characteristics with multiple training logic patterns, it may be applicable to signal processing for data and information enlargement and can be adopted in deep machine learning.
Keywords: multi-layer neural networks; learning performance; multi-logic training patterns; activity function; BP neural network; deep learning
Chip-Based High-Dimensional Optical Neural Network (Cited by 6)
17
Authors: Xinyu Wang, Peng Xie, Bohan Chen, Xingcai Zhang. Nano-Micro Letters (SCIE EI CAS CSCD), 2022, Issue 12, pp. 570-578 (9 pages)
Parallel multi-thread processing in advanced intelligent processors is the core of high-speed, high-capacity signal processing systems. The optical neural network (ONN) has native advantages of high parallelization, large bandwidth, and low power consumption to meet the demands of big data. Here, we demonstrate a dual-layer ONN with a Mach-Zehnder interferometer (MZI) network and a nonlinear layer, where the nonlinear activation function is achieved by optical-electronic signal conversion. Two frequency components from a microcomb source carrying digit datasets are simultaneously imposed and intelligently recognized through the ONN. We successfully achieve digit classification of the different frequency components by demultiplexing the output signal and testing the power distribution. Efficient parallelization with wavelength division multiplexing is demonstrated in our high-dimensional ONN. This work provides a high-performance architecture for future parallel high-capacity optical analog computing.
Keywords: integrated optics; optical neural network; high dimension; Mach-Zehnder interferometer; nonlinear activation function; parallel high-capacity analog computing
Complex-Valued Neural Networks: A Comprehensive Survey (Cited by 4)
18
Authors: ChiYan Lee, Hideyuki Hasegawa, Shangce Gao. IEEE/CAA Journal of Automatica Sinica (SCIE EI CSCD), 2022, Issue 8, pp. 1406-1426 (21 pages)
Complex-valued neural networks (CVNNs) have shown excellent efficiency compared to their real counterparts in speech enhancement, image and signal processing. Researchers have made many efforts over the years to improve the learning algorithms and activation functions of CVNNs. Since CVNNs have proven to perform better in handling naturally complex-valued data and signals, this area of study will grow, and effective improvements can be expected in the future. There is therefore an obvious need for a comprehensive survey that systematically collects and categorizes the advancement of CVNNs. In this paper, we discuss and summarize the recent advances in learning algorithms, activation functions (the most challenging part of building a CVNN), and applications. Besides, we outline the structure and applications of complex-valued convolutional, residual and recurrent neural networks. Finally, we present some challenges and future research directions to facilitate the exploration of the capabilities of CVNNs.
Keywords: complex activation function; complex backpropagation algorithm; complex-valued learning algorithm; complex-valued neural network; deep learning
Synthesization of high-capacity auto-associative memories using complex-valued neural networks (Cited by 1)
19
Authors: 黄玉娇, 汪晓妍, 龙海霞, 杨旭华. Chinese Physics B (SCIE EI CAS CSCD), 2016, Issue 12, pp. 194-201 (8 pages)
In this paper, a novel design procedure is proposed for synthesizing high-capacity auto-associative memories based on complex-valued neural networks with real-imaginary-type activation functions and constant delays. Stability criteria dependent on the external inputs of the neural networks are derived. The designed networks can retrieve the stored patterns by external inputs rather than initial conditions. The derivation can memorize the desired patterns with lower-dimensional neural networks than real-valued neural networks, and eliminates spurious equilibria of complex-valued neural networks. One numerical example is provided to show the effectiveness and superiority of the presented results.
Keywords: associative memory; complex-valued neural network; real-imaginary-type activation function; external input
Global exponential periodicity of a class of impulsive neural networks
20
Authors: 梁金玲. Journal of Southeast University (English Edition) (EI CAS), 2005, Issue 4, pp. 509-512 (4 pages)
By the Lyapunov function method, combined with inequality techniques, some criteria are established to ensure the existence, uniqueness and global exponential stability of the periodic solution for a class of impulsive neural networks. The results obtained only require the activation functions to be globally Lipschitz continuous, without assuming boundedness, monotonicity or differentiability. The conditions are easy to check in practice, and they can be applied to design globally exponentially periodic impulsive neural networks.
Keywords: global exponential periodicity; impulsive neural networks; Lyapunov function; Lipschitz activation function