Journal Articles
3 articles found
Efficient Training of Multi-Layer Neural Networks to Achieve Faster Validation (Cited by 1)
Authors: Adel Saad Assiri. Computer Systems Science & Engineering (SCIE, EI), 2021, No. 3, pp. 435-450 (16 pages)
Artificial neural networks (ANNs) are one of the hottest topics in computer science and artificial intelligence due to their potential and advantages in analyzing real-world problems in various disciplines, including but not limited to physics, biology, chemistry, and engineering. However, ANNs lack several key characteristics of biological neural networks, such as sparsity, scale-freeness, and small-worldness. The concept of sparse and scale-free neural networks has been introduced to fill this gap. Network sparsity is implemented by removing weak weights between neurons during the learning process and replacing them with random weights. When the network is initialized, the neural network is fully connected, which means the number of weights is four times the number of neurons. In this study, considering that a biological neural network has some degree of initial sparsity, we design an ANN with a prescribed level of initial sparsity. The neural network is tested on handwritten digits, Arabic characters, CIFAR-10, and Reuters newswire topics. Simulations show that it is possible to reduce the number of weights by up to 50% without losing prediction accuracy. Moreover, in both cases, the testing time is dramatically reduced compared with fully connected ANNs.
Keywords: sparsity; weak weights; multi-layer neural network; NN training with initial sparsity
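The initial-sparsity idea in the abstract above can be sketched as a masked weight matrix. This is a minimal illustration, not the paper's implementation: the mask layout, the Gaussian weight scale, and the `sparsity=0.5` default (echoing the reported 50% weight reduction) are assumptions.

```python
import numpy as np

def init_sparse_weights(n_in, n_out, sparsity=0.5, rng=None):
    """Return a weight matrix with a prescribed fraction of removed (zero) weights,
    plus the boolean mask of kept connections.

    `sparsity` is the fraction of connections removed at initialization;
    the Gaussian scale and mask layout are illustrative assumptions.
    """
    rng = np.random.default_rng(rng)
    weights = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
    mask = rng.random((n_in, n_out)) >= sparsity  # True = connection kept
    return weights * mask, mask

# A hypothetical 784 -> 128 layer (e.g., for handwritten digits)
# with roughly half the weights removed at initialization:
w, mask = init_sparse_weights(784, 128, sparsity=0.5, rng=0)
```

During training, the same mask would be reapplied after each weight update so that pruned connections stay at zero.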
The "Competent Literary" Teacher: The New Perspectives of Initial Teacher Training (Cited by 1)
Authors: Antonella Nuzzaci, Stefania Nirchi, Luca Luciani. Journal of Literature and Art Studies, 2016, No. 5, pp. 530-548 (19 pages)
The aim of this paper is to describe the literary competence of teachers as a dimension of the knowledge for teaching, one that can be extended to all aspects of professional teacher transformation. Among the various kinds of knowledge, competences, and dispositions that shape this legacy, literary competence becomes the main turning point between pedagogical knowledge and specialized knowledge, between core and characterizing disciplines. Literary competence lives at the intersection of different kinds of competence (linguistic, communicative, reading, writing, interpretive, discursive, cultural, cross-cultural, pragmatic-strategic, and social). The approach offers a framework for a pedagogy of literary competence in teacher training, as substantial experience and expertise able to enlarge the professional repertoire of teachers in training. This paper provides food for thought for the enrichment and enhancement of literary competence and its use in teaching.
Keywords: literary competence; higher education; initial teacher training; teaching-learning processes
A sparse algorithm for adaptive pruning least square support vector regression machine based on global representative point ranking (Cited by 2)
Authors: HU Lei, YI Guoxing, HUANG Chao. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2021, No. 1, pp. 151-162 (12 pages)
Least square support vector regression (LSSVR) is a method for function approximation whose solutions are typically non-sparse, which limits its application, especially where fast prediction is required. In this paper, a sparse algorithm for adaptive-pruning LSSVR based on global representative point ranking (GRPR-AP-LSSVR) is proposed. First, the global representative point ranking (GRPR) algorithm is given, and a data-analysis experiment is carried out that depicts the importance ranking of data points. Furthermore, a pruning strategy that removes two samples per decremental learning step is designed to accelerate training and ensure sparsity. The removed data points are used to test the temporary learning model, which safeguards the regression accuracy. Finally, the proposed algorithm is verified on artificial datasets and UCI regression datasets, and experimental results indicate that, compared with several benchmark algorithms, the GRPR-AP-LSSVR algorithm achieves excellent sparsity and prediction speed without impairing the generalization performance.
Keywords: least square support vector regression (LSSVR); global representative point ranking (GRPR); initial training dataset; pruning strategy; sparsity; regression accuracy
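The decremental pruning loop in the abstract above can be sketched as follows. This is a hedged illustration, not the paper's GRPR-AP-LSSVR: ranking samples by the magnitude of their Lagrange multipliers |alpha| is a common proxy substituted here for the paper's global representative point ranking, and the RBF kernel and hyperparameters are assumptions. Only the two-samples-per-step removal follows the abstract.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between row-sample arrays X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvr_fit(X, y, C=10.0, gamma=1.0):
    """Solve the standard LSSVR linear system for bias b and multipliers alpha."""
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, gamma) + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]

def lssvr_predict(X_train, alpha, b, X_test, gamma=1.0):
    return rbf_kernel(X_test, X_train, gamma) @ alpha + b

def prune_lssvr(X, y, keep=30, step=2, C=10.0, gamma=1.0):
    """Decrementally drop `step` least-important samples per refit until
    `keep` remain; |alpha| is used as a stand-in importance ranking."""
    idx = np.arange(len(X))
    while len(idx) > keep:
        b, alpha = lssvr_fit(X[idx], y[idx], C, gamma)
        order = np.argsort(np.abs(alpha))       # smallest |alpha| = least important
        idx = np.delete(idx, order[:step])      # remove two samples per step
    b, alpha = lssvr_fit(X[idx], y[idx], C, gamma)
    return idx, b, alpha

# Hypothetical usage: fit sin(x) and prune 60 training points down to 30.
X = np.linspace(-5, 5, 60).reshape(-1, 1)
y = np.sin(X).ravel()
idx, b, alpha = prune_lssvr(X, y, keep=30)
```

A fuller treatment would also check the temporary model on the removed points, as the abstract describes, and stop pruning once accuracy on them degrades.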