Journal Articles
7 articles found
1. A Study on Classification and Detection of Small Moths Using CNN Model (Cited by: 3)
Authors: Sang-Hyun Lee. Computers, Materials & Continua (SCIE, EI), 2022, Issue 4, pp. 1987-1998 (12 pages)
Currently, there are many limitations in classifying images of small objects. External factors can cause detection errors, and it is difficult to distinguish accurately between various objects. This paper uses a convolutional neural network (CNN) algorithm to recognize and classify images of very small moths from precisely captured image data. The CNN classifies the image data, and the classified images are transformed so that the network learns the topological structure of the image. To improve classification accuracy and reduce the loss rate, a parameter for quickly finding an optimal point of image classification is set by the CNN, with pixel images used in preprocessing. As a result of this study, we applied the CNN algorithm to classify images of very small moths captured at high precision. Experimental results showed that the classification accuracy for very small moths exceeded 90%.
Keywords: convolutional neural network; rectified linear unit; activation function; pooling; feature map
Download PDF
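The keywords of the entry above name the standard CNN building blocks (convolution, ReLU activation, pooling, feature map). As an illustrative sketch of how these pieces compose in a forward pass — not the paper's implementation, and with a hypothetical 3×3 edge kernel — consider:

```python
# Minimal sketch of the CNN building blocks named above: convolution,
# ReLU activation, and max pooling producing a feature map.
# Pure Python for clarity; real systems use a framework such as PyTorch.

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(fmap):
    """Rectified linear unit applied elementwise to a feature map."""
    return [[max(0.0, v) for v in row] for row in fmap]

def maxpool2x2(fmap):
    """2x2 max pooling with stride 2."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]
```

For example, a 6×6 image with a vertical edge and the (hypothetical) edge-detecting kernel `[[-1, 0, 1]] * 3` yields a 4×4 feature map after convolution and ReLU, and a 2×2 map after pooling.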
2. Reducing parameter space for neural network training (Cited by: 1)
Authors: Tong Qin, Ling Zhou, Dongbin Xiu. Theoretical & Applied Mechanics Letters (CAS, CSCD), 2020, Issue 3, pp. 170-181 (12 pages)
For neural networks (NNs) with rectified linear unit (ReLU) or binary activation functions, we show that their training can be accomplished in a reduced parameter space. Specifically, the weights in each neuron can be trained on the unit sphere, as opposed to the entire space, and the threshold can be trained in a bounded interval, as opposed to the real line. We show that the NNs in the reduced parameter space are mathematically equivalent to standard NNs with parameters in the whole space. The reduced parameter space facilitates the optimization procedure for network training, as the search space becomes (much) smaller. We demonstrate the improved training performance using numerical examples.
Keywords: rectified linear unit network; universal approximator; reduced space
Download PDF
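The equivalence claimed in the abstract above rests on the positive homogeneity of ReLU: for any weight vector w with norm c > 0, relu(w·x + b) = c · relu(ŵ·x + b/c), so the direction of w can be constrained to the unit sphere and its magnitude absorbed elsewhere. A minimal numerical check of this identity (not the authors' training code):

```python
import math

# Positive homogeneity of a ReLU neuron: the weight direction can live on
# the unit sphere, with the weight magnitude pulled out front, without
# changing the neuron's output.

def relu(z):
    return max(0.0, z)

def neuron(w, b, x):
    """Standard ReLU neuron with unconstrained parameters."""
    return relu(sum(wi * xi for wi, xi in zip(w, x)) + b)

def neuron_on_sphere(w, b, x):
    """Same neuron, reparametrized: unit-sphere direction, scaled output."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    w_hat = [wi / norm for wi in w]
    return norm * relu(sum(wi * xi for wi, xi in zip(w_hat, x)) + b / norm)
```

Evaluating both forms on arbitrary inputs gives identical outputs up to floating-point rounding, which is the sense in which the reduced-space network is equivalent to the standard one.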
3. Autonomous Surveillance of Infants' Needs Using CNN Model for Audio Cry Classification
Authors: Geofrey Owino, Anthony Waititu, Anthony Wanjoya, John Okwiri. Journal of Data Analysis and Information Processing, 2022, Issue 4, pp. 198-219 (22 pages)
Infants produce distinctive, suggestive cries when sick, in belly pain, uncomfortable, tired, or seeking attention or a change of diapers, among other needs. Knowledge of how to assess infants' needs is limited, since infants relay information only through these suggestive cries. Many teenagers give birth at an early age and become the primary monitors of their own babies, often without sufficient skill to recognize an infant's dire needs, especially during the early stages of infant development. Artificial intelligence has shown promising predictive analytics across supervised, unsupervised, and reinforcement learning models. This study therefore seeks to develop an Android app that discriminates infant audio cries by leveraging the strength of convolutional neural networks (CNNs) as a classifier model. Audio analytics remains an under-explored area in the literature, as it is associated with messy and voluminous data. This study therefore leverages CNNs, deep learning models capable of handling multi-dimensional datasets. To achieve this, audio waveforms were converted to images through Mel spectrum frequencies, which were classified using a computer-vision CNN model. The Librosa library was used to convert the audio to Mel spectra, which were then presented as pixels serving as the input for classifying audio classes such as sick, burping, tired, and hungry. The goal was to deploy the model as an Android tool usable in homes and hospital facilities for round-the-clock surveillance of an infant's health and social needs.
Keywords: Convolutional Neural Network (CNN); Mel Frequency Cepstral Coefficients (MFCCs); Rectified Linear Unit (ReLU); activation function; audio analytics; Deep Neural Network (DNN)
Download PDF
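The pipeline above turns audio into an image before classification. As a library-free sketch of that audio-to-image step, the snippet below frames a waveform and takes a magnitude DFT per frame, producing a 2-D array (time frames × frequency bins). The paper uses Librosa's Mel spectrogram, which additionally warps the frequency bins onto the Mel scale; that warping is omitted here, and the frame/hop sizes are illustrative choices.

```python
import math
import cmath

# Sketch of converting a 1-D audio signal into a 2-D spectrogram "image"
# suitable as CNN input: slice into overlapping frames, then take the
# magnitude DFT of each frame (non-negative frequencies only).

def spectrogram(signal, frame_len=64, hop=32):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    image = []
    for frame in frames:
        n = len(frame)
        row = []
        for k in range(n // 2 + 1):  # keep bins 0 .. n/2
            coeff = sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            row.append(abs(coeff))
        image.append(row)
    return image  # each row is one time frame's magnitude spectrum
```

Feeding a pure sine at 4 cycles per frame produces rows whose peak sits in bin 4, which is exactly the kind of frequency-localized structure the CNN then learns to classify.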
4. Generic functional modelling of multi-pulse auto-transformer rectifier units for more-electric aircraft applications (Cited by: 1)
Authors: Tao YANG, Serhiy BOZHKO, Patrick WHEELER, Shaoping WANG, Shuai WU. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2018, Issue 5, pp. 883-891 (9 pages)
The Auto-Transformer Rectifier Unit (ATRU) is a preferred solution for high-power AC/DC power conversion in aircraft, mainly due to its simple structure, high reliability, and reduced kVA ratings. Indeed, the ATRU has become a preferred AC/DC solution for supplying power to the electric environment control system on board future aircraft. In this paper, a general modelling method for ATRUs is introduced. The developed model is based on the fact that the DC voltage and current are strongly related to the voltage and current vectors at the AC terminals of the ATRU. We build on our earlier work on modelling symmetric 18-pulse ATRUs and develop a generic modelling technique that can handle not only symmetric but also asymmetric ATRUs. An 18-pulse asymmetric ATRU is used to demonstrate the accuracy and efficiency of the developed model by comparison with the corresponding detailed switching SABER models provided by our industrial partner. The functional models also allow accelerated yet accurate simulations and thus enable whole-scale more-electric aircraft electrical power system studies in the future.
Keywords: asymmetric transformer; functional modelling; more-electric aircraft; multi-pulse rectifier; transformer rectifier unit
Full-text delivery
5. Symmetric-threshold ReLU for Fast and Nearly Lossless ANN-SNN Conversion
Authors: Jianing Han, Ziming Wang, Jiangrong Shen, Huajin Tang. Machine Intelligence Research (EI, CSCD), 2023, Issue 3, pp. 435-446 (12 pages)
The artificial neural network-spiking neural network (ANN-SNN) conversion, an efficient algorithm for training deep SNNs, improves the performance of shallow SNNs and expands their application to various tasks. However, existing conversion methods still suffer large conversion error at low conversion time steps. In this paper, a heuristic symmetric-threshold rectified linear unit (stReLU) activation function for ANNs is proposed, based on the intrinsically different responses of the integrate-and-fire (IF) neurons in SNNs and the activation functions in ANNs. The negative threshold in stReLU guarantees the conversion of negative activations, and the symmetric thresholds enable positive error to offset negative error between the activation value and the spike firing rate, thus reducing the conversion error from ANNs to SNNs. The lossless conversion from ANNs with stReLU to SNNs is demonstrated by theoretical formulation. By contrasting stReLU with asymmetric-threshold LeakyReLU and threshold ReLU, the effectiveness of symmetric thresholds is further explored. The results show that ANNs with stReLU can decrease the conversion error and achieve nearly lossless conversion on the MNIST, Fashion-MNIST, and CIFAR10 datasets, with 6× to 250× speedup compared with other methods. Moreover, a comparison of energy consumption between ANNs and SNNs indicates that this conversion algorithm can also significantly reduce energy consumption.
Keywords: symmetric-threshold rectified linear unit (stReLU); deep spiking neural networks; artificial neural network-spiking neural network (ANN-SNN) conversion; lossless conversion; double thresholds
Full-text delivery
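One plausible reading of the symmetric-threshold activation described above is an activation clipped to a symmetric range [-θ, +θ], so that negative activations survive conversion and positive and negative clipping errors can offset each other. The exact functional form used in the paper may differ; this is an illustrative sketch, with LeakyReLU included only for contrast with the asymmetric alternative the abstract mentions.

```python
# Hedged sketch of a symmetric-threshold ReLU-style activation:
# values are clipped into the symmetric interval [-theta, +theta].
# This is an assumed form for illustration, not the paper's definition.

def st_relu(x, theta=1.0):
    """Clip x into [-theta, theta]; negative activations are preserved."""
    return max(-theta, min(theta, x))

def leaky_relu(x, alpha=0.01):
    """Conventional asymmetric alternative, shown for contrast."""
    return x if x > 0 else alpha * x
```

Under this reading, an activation of -0.3 passes through unchanged (unlike plain ReLU, which would zero it), while values beyond ±θ saturate symmetrically.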
6. Verifying ReLU Neural Networks from a Model Checking Perspective (Cited by: 3)
Authors: Wan-Wei Liu, Fu Song, Tang-Hao-Ran Zhang, Ji Wang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2020, Issue 6, pp. 1365-1381 (17 pages)
Neural networks, as an important computing model, are widely applied in the artificial intelligence (AI) domain. From the perspective of computer science, such a computing model requires a formal description of its behaviors, particularly the relation between input and output. In addition, such specifications ought to be verified automatically. ReLU (rectified linear unit) neural networks are intensively used in practice. In this paper, we present ReLU Temporal Logic (ReTL), whose semantics is defined with respect to ReLU neural networks and which can specify value-related properties of a network. We show that the model checking problem for the Σ2∪Π2 fragment of ReTL, which can express properties such as output reachability, is decidable in EXPSPACE. We have also implemented our algorithm in a prototype tool, and experimental results demonstrate the feasibility of the presented model checking approach.
Keywords: model checking; rectified linear unit (ReLU) neural network; temporal logic
Full-text delivery
7. Feature Representations Using the Reflected Rectified Linear Unit (RReLU) Activation (Cited by: 7)
Authors: Chaity Banerjee, Tathagata Mukherjee, Eduardo Pasiliao Jr. Big Data Mining and Analytics, 2020, Issue 2, pp. 102-120 (19 pages)
Deep Neural Networks (DNNs) have become the tool of choice for machine learning practitioners today. One important aspect of designing a neural network is the choice of the activation function used at the neurons of the different layers. In this work, we introduce a four-output activation function called the Reflected Rectified Linear Unit (RReLU), which considers both a feature and its negation during computation. Our activation function is "sparse", in that only two of the four possible outputs are active at a given time. We test our activation function on the standard MNIST and CIFAR-10 datasets, which are classification problems, as well as on a novel Computational Fluid Dynamics (CFD) dataset posed as a regression problem. On the baseline network for the MNIST dataset, which has two hidden layers, our activation function improves the validation accuracy from 0.09 to 0.97 compared with the well-known ReLU activation. For the CIFAR-10 dataset, we use a deep baseline network that achieves 0.78 validation accuracy in 20 epochs but overfits the data; using the RReLU activation, we achieve the same accuracy without overfitting. For the CFD dataset, we show that the RReLU activation can reduce the number of epochs from 100 (using ReLU) to 10 while obtaining the same level of performance.
Keywords: deep learning; feature space; approximations; multi-output activations; Rectified Linear Unit (ReLU)
Full-text delivery
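The abstract above describes a four-output activation built from a feature and its negation, with exactly two outputs active at a time. One construction consistent with that description (the precise definition in the paper may differ; this is an assumed form for illustration):

```python
# Hedged sketch of a four-output "reflected ReLU": each scalar feature x
# yields four outputs derived from x and its negation -x. For x != 0,
# exactly two of the four are nonzero, matching the "sparse" property
# the abstract describes. This construction is an assumption, not the
# paper's definition.

def relu(z):
    return max(0.0, z)

def rrelu(x):
    """Four outputs per feature: the rectified feature, its reflection,
    and the same pair for the negated feature."""
    return (relu(x), -relu(x), relu(-x), -relu(-x))
```

For a positive input the first pair carries the signal and the second pair is zero; for a negative input the roles swap, so each feature's sign is encoded by which half of the outputs is active.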