Journal Articles
8 articles found
1. Multi-objective evolutionary optimization for hardware-aware neural network pruning
Authors: Wenjing Hong, Guiying Li, Shengcai Liu, Peng Yang, Ke Tang. Fundamental Research (CAS, CSCD), 2024, Issue 4, pp. 941-950 (10 pages)
Neural network pruning is a popular approach to reducing the computational complexity of deep neural networks. In recent years, as growing evidence shows that conventional network pruning methods employ inappropriate proxy metrics, and as new types of hardware become increasingly available, hardware-aware network pruning, which incorporates hardware characteristics in the loop of network pruning, has gained growing attention. Both network accuracy and hardware efficiency (latency, memory consumption, etc.) are critical objectives for the success of network pruning, but the conflict between the multiple objectives makes it impossible to find a single optimal solution. Previous studies mostly convert hardware-aware network pruning into optimization problems with a single objective. In this paper, we propose to solve the hardware-aware network pruning problem with Multi-Objective Evolutionary Algorithms (MOEAs). Specifically, we formulate the problem as a multi-objective optimization problem and propose a novel memetic MOEA, namely HAMP, which combines an efficient portfolio-based selection with a surrogate-assisted local search, to solve it. Empirical studies demonstrate the potential of MOEAs in simultaneously providing a set of alternative solutions, and the superiority of HAMP compared to the state-of-the-art hardware-aware network pruning method.
Keywords: Multi-objective optimization; Evolutionary algorithm; Neural network pruning; Hardware-aware machine learning; Hardware efficiency
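The abstract above casts hardware-aware pruning as a problem with two conflicting objectives. As a sketch only (the notation, the channel-keep mask, and the two particular objectives are assumptions for illustration, not the paper's exact formulation), the problem can be written as

\min_{m \in \{0,1\}^{K}} \; \bigl(\, 1 - \mathrm{Acc}(\mathcal{N}(W \odot m)),\ \ \mathrm{Latency}_{\mathrm{HW}}(\mathcal{N}(W \odot m)) \,\bigr)

where W are the trained weights, the binary mask m zeroes out pruned channels, and the latency is measured on the target hardware. Because the two objectives conflict, an MOEA returns a Pareto front of trade-off solutions rather than a single optimum.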
2. Pruning-aware Sparse Regularization for Network Pruning (Cited: 1)
Authors: Nan-Fei Jiang, Xu Zhao, Chao-Yang Zhao, Yong-Qi An, Ming Tang, Jin-Qiao Wang. Machine Intelligence Research (EI, CSCD), 2023, Issue 1, pp. 109-120 (12 pages)
Structural neural network pruning aims to remove the redundant channels in deep convolutional neural networks (CNNs) by pruning the filters of less importance to the final output accuracy. To reduce the degradation of performance after pruning, many methods utilize a loss with sparse regularization to produce structured sparsity. In this paper, we analyze these sparsity-training-based methods and find that the regularization of unpruned channels is unnecessary. Moreover, it restricts the network's capacity, which leads to under-fitting. To solve this problem, we propose a novel pruning method, named Mask Sparsity, with pruning-aware sparse regularization. Mask Sparsity imposes the fine-grained sparse regularization on the specific filters selected by a pruning mask, rather than on all the filters of the model. Before the fine-grained sparse regularization of Mask Sparsity, many methods can be used to obtain the pruning mask, such as running the global sparse regularization. Mask Sparsity achieves a 63.03% floating point operations (FLOPs) reduction on ResNet-110 by removing 60.34% of the parameters, with no top-1 accuracy loss on CIFAR-10. On ILSVRC-2012, Mask Sparsity reduces more than 51.07% of FLOPs on ResNet-50, with only a loss of 0.76% in top-1 accuracy. The code of this paper is released at https://github.com/CASIA-IVA-Lab/MaskSparsity. We have also integrated the code into a self-developed PyTorch pruning toolkit, named EasyPruner, at https://gitee.com/casia_iva_engineer/easypruner.
Keywords: Deep learning; convolutional neural network (CNN); model compression and acceleration; network pruning; regularization
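A minimal PyTorch sketch of the idea described above: the sparse penalty is applied only to the filters selected by a pruning mask, not to the whole model. The group-lasso form of the penalty, the coefficient, and the helper names are assumptions for illustration, not the released MaskSparsity code.

import torch

def mask_sparsity_penalty(conv_weight, prune_mask, lam=1e-4):
    """Sparse penalty applied only to filters marked for pruning (illustrative).

    conv_weight: (C_out, C_in, kH, kW) weight of one conv layer
    prune_mask:  bool tensor of shape (C_out,), True = filter selected by the pruning mask
    """
    selected = conv_weight[prune_mask]                      # only the to-be-pruned filters
    # Group-lasso style term: push each selected filter toward zero as a group,
    # while leaving the surviving filters completely unregularized.
    return lam * selected.flatten(1).norm(p=2, dim=1).sum()

# Usage inside a training step (task_loss from the usual criterion; `masks` is a
# hypothetical dict mapping layer names to their pruning masks):
# loss = task_loss + sum(mask_sparsity_penalty(m.weight, masks[name])
#                        for name, m in model.named_modules()
#                        if isinstance(m, torch.nn.Conv2d) and name in masks)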
3. An Investigation of Frequency-Domain Pruning Algorithms for Accelerating Human Activity Recognition Tasks Based on Sensor Data
Authors: Jian Su, Haijian Shao, Xing Deng, Yingtao Jiang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 11, pp. 2219-2242 (24 pages)
The rapidly advancing Convolutional Neural Networks (CNNs) have brought about a paradigm shift in various computer vision tasks, while also garnering increasing interest and application in sensor-based Human Activity Recognition (HAR) efforts. However, the significant computational demands and memory requirements hinder the practical deployment of deep networks in resource-constrained systems. This paper introduces a novel network pruning method based on the energy spectral density of data in the frequency domain, which reduces the model's depth and accelerates activity inference. Unlike traditional pruning methods that focus on the spatial domain and the importance of filters, this method converts sensor data, such as HAR data, to the frequency domain for analysis. It emphasizes the low-frequency components by calculating their energy spectral density values. Subsequently, filters that meet the predefined thresholds are retained, and redundant filters are removed, leading to a significant reduction in model size without compromising performance or incurring additional computational costs. Notably, the proposed algorithm's effectiveness is empirically validated on a standard five-layer CNN backbone architecture. The computational feasibility and data sensitivity of the proposed scheme are thoroughly examined. Impressively, the classification accuracy on the three benchmark HAR datasets UCI-HAR, WISDM, and PAMAP2 reaches 96.20%, 98.40%, and 92.38%, respectively. Concurrently, our strategy achieves a reduction in Floating Point Operations (FLOPs) of 90.73%, 93.70%, and 90.74%, respectively, along with a corresponding decrease in memory consumption of 90.53%, 93.43%, and 90.05%.
Keywords: Convolutional neural networks; human activity recognition; network pruning; frequency-domain transformation
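A minimal sketch of the frequency-domain quantity the method above is built on. It only shows how an energy spectral density and a low-frequency energy share could be computed for one sensor channel; the sampling rate, the cutoff, and the rule that maps these values to filter retention are assumptions rather than the paper's exact procedure.

import numpy as np

def low_frequency_energy_ratio(channel, fs=50.0, cutoff_hz=5.0):
    """Share of a sensor channel's spectral energy that lies below cutoff_hz.

    channel: 1-D array of samples; fs: sampling rate in Hz (assumed values).
    """
    channel = np.asarray(channel, dtype=float)
    spectrum = np.fft.rfft(channel - channel.mean())
    esd = np.abs(spectrum) ** 2                          # energy spectral density
    freqs = np.fft.rfftfreq(channel.size, d=1.0 / fs)
    return esd[freqs <= cutoff_hz].sum() / esd.sum()

# A filter whose associated score clears a predefined threshold would be retained;
# the remaining ones are treated as redundant and removed.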
4. Optimizing Deep Neural Networks for Face Recognition to Increase Training Speed and Improve Model Accuracy
Authors: Mostafa Diba, Hossein Khosravi. Intelligent Automation & Soft Computing, 2023, Issue 12, pp. 315-332 (18 pages)
Convolutional neural networks continually evolve to enhance accuracy in addressing various problems, leading to an increase in computational cost and model size. This paper introduces a novel approach for pruning face recognition models based on convolutional neural networks. The proposed method identifies and removes inefficient filters based on the information volume in feature maps. In each layer, some feature maps lack useful information, and there exists a correlation between certain feature maps. Filters associated with these two types of feature maps impose additional computational costs on the model. By eliminating filters related to these categories of feature maps, a reduction of both computational cost and model size can be achieved. The approach employs a combination of correlation analysis and the summation of matrix elements within each feature map to detect and eliminate inefficient filters. The method was applied to two face recognition models utilizing the VGG16 and ResNet50V2 backbone architectures. In the proposed approach, the number of filters removed in each layer varies, and the removal process is independent of the adjacent layers. The convolutional layers of both backbone models were initialized with pre-trained weights from ImageNet. For training, the CASIA-WebFace dataset was utilized, and the Labeled Faces in the Wild (LFW) dataset was employed for benchmarking purposes. In the VGG16-based face recognition model, a 0.74% accuracy improvement was achieved while reducing the number of convolution parameters by 26.85% and decreasing floating-point operations per second (FLOPs) by 47.96%. For the face recognition model based on the ResNet50V2 architecture, the ArcFace method was implemented. The removal of inactive filters in this model led to a slight decrease in accuracy of 0.11%. However, it resulted in enhanced training speed, a reduction of 59.38% in convolution parameters, and a 57.29% decrease in FLOPs.
Keywords: Face recognition; network pruning; FLOPs reduction; deep learning; ArcFace
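A minimal PyTorch sketch of the two signals the abstract above relies on: a per-map element-sum (information volume proxy) and map-to-map correlation. The thresholds and the keep-one-of-each-correlated-pair rule are illustrative assumptions, not the paper's exact selection procedure.

import torch

def find_inefficient_filters(feature_maps, energy_thresh=1e-3, corr_thresh=0.95):
    """Flag filters whose feature maps carry little information or duplicate another map.

    feature_maps: (N, C, H, W) activations collected from one conv layer on a small batch.
    """
    n, c = feature_maps.shape[:2]
    flat = feature_maps.permute(1, 0, 2, 3).reshape(c, -1)   # (C, N*H*W)
    energy = flat.abs().mean(dim=1)                           # element-sum proxy per map
    corr = torch.corrcoef(flat)                               # (C, C) map-to-map correlation
    drop = set((energy < energy_thresh).nonzero().flatten().tolist())
    for i in range(c):
        for j in range(i + 1, c):
            if i not in drop and j not in drop and corr[i, j].abs() > corr_thresh:
                drop.add(j)                                   # keep one map of each correlated pair
    return sorted(drop)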
5. PAL-BERT: An Improved Question Answering Model
Authors: Wenfeng Zheng, Siyu Lu, Zhuohang Cai, Ruiyang Wang, Lei Wang, Lirong Yin. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 2729-2745 (17 pages)
In the field of natural language processing (NLP), various pre-training language models have appeared in recent years, with question answering systems gaining significant attention. However, as algorithms, data, and computing power advance, the issue of increasingly larger models and a growing number of parameters has surfaced. Consequently, model training has become more costly and less efficient. To enhance the efficiency and accuracy of the training process while reducing the model volume, this paper proposes a first-order pruning model, PAL-BERT, based on the ALBERT model according to the characteristics of question-answering (QA) systems and language models. Firstly, a first-order network pruning method based on the ALBERT model is designed, and the PAL-BERT model is formed. Then, the parameter optimization strategy of the PAL-BERT model is formulated, and the Mish function is used as the activation function instead of ReLU to improve performance. Finally, after comparison experiments with the traditional deep learning models TextCNN and BiLSTM, it is confirmed that PAL-BERT is a pruning model compression method that can significantly reduce training time and optimize training efficiency. Compared with traditional models, PAL-BERT significantly improves performance on the NLP task.
Keywords: PAL-BERT; question answering model; pretraining language models; ALBERT; pruning model; network pruning; TextCNN; BiLSTM
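Two ingredients named in the abstract above can be made concrete in a few lines. The Mish definition is standard; the |w * grad| saliency is only one common first-order importance score, offered as an illustration rather than PAL-BERT's exact pruning criterion.

import torch
import torch.nn.functional as F

def mish(x):
    # Mish activation, x * tanh(softplus(x)), used in place of ReLU.
    return x * torch.tanh(F.softplus(x))

def first_order_importance(param):
    # One common first-order saliency: |w * dL/dw|, read after loss.backward().
    # Low-scoring weights (or structures aggregated from these scores) become
    # candidates for pruning.
    return (param * param.grad).abs()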
6. Tour Planning Design for Mobile Robots Using Pruned Adaptive Resonance Theory Networks (Cited: 1)
Authors: S. Palani Murugan, M. Chinnadurai, S. Manikandan. Computers, Materials & Continua (SCIE, EI), 2022, Issue 1, pp. 181-194 (14 pages)
The development of intelligent algorithms for controlling autonomous mobile robots in real-time activities has increased dramatically in recent years. However, conventional intelligent algorithms currently fail to accurately predict unexpected obstacles involved in tour paths and thereby suffer from inefficient tour trajectories. The present study addresses these issues by proposing a potential field integrated pruned adaptive resonance theory (PPART) neural network for effectively managing the touring process of autonomous mobile robots in real time. The proposed system is implemented on the AlphaBot platform, and its performance is evaluated according to the obstacle prediction accuracy, path detection accuracy, time lapse, tour length, and the overall accuracy of the system. The proposed system provides a very high obstacle prediction accuracy of 99.61%. Accordingly, the proposed tour planning design effectively predicts unexpected obstacles in the environment and thereby increases the overall efficiency of tour navigation.
Keywords: Autonomous mobile robots; path exploration; navigation; tour planning; tour process; potential field integrated pruned ART networks; AlphaBot platform
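The PPART system above integrates a potential field with a pruned ART network. For background only, the classical attractive/repulsive potential field is given below in its standard form; the abstract does not specify PPART's exact potentials, so this formulation is an assumption.

U_{\mathrm{att}}(q) = \tfrac{1}{2} k_a \lVert q - q_{\mathrm{goal}} \rVert^2, \qquad
U_{\mathrm{rep}}(q) = \begin{cases} \tfrac{1}{2} k_r \left( \tfrac{1}{\rho(q)} - \tfrac{1}{\rho_0} \right)^2 & \rho(q) \le \rho_0 \\ 0 & \text{otherwise} \end{cases}

with the robot steered along F(q) = -\nabla\bigl(U_{\mathrm{att}}(q) + U_{\mathrm{rep}}(q)\bigr), where \rho(q) is the distance to the nearest obstacle and \rho_0 its influence radius.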
7. UAV Path Planning Based on Multi-Agent Deep Reinforcement Learning (Cited: 4)
Authors: 司鹏搏, 吴兵, 杨睿哲, 李萌, 孙艳华. 《北京工业大学学报》 (Journal of Beijing University of Technology), CAS CSCD, Peking University Core Journals, 2023, Issue 4, pp. 449-458 (10 pages)
To solve the path planning problem of multiple unmanned aerial vehicles (UAVs) in complex environments, a multi-agent deep reinforcement learning framework for UAV path planning is proposed. The framework first models the path planning problem as a partially observable Markov decision process and extends the proximal policy optimization algorithm to multiple agents; by designing the UAVs' state observation space, action space, and reward function, obstacle-free multi-UAV path planning is achieved. Second, to accommodate the limited computational resources carried on board a UAV, a network pruning-based multi-agent proximal policy optimization (NP-MAPPO) algorithm is further proposed to improve training efficiency. Simulation results verify the effectiveness of the proposed multi-UAV path planning framework under various parameter configurations and the superiority of the NP-MAPPO algorithm in training time.
Keywords: unmanned aerial vehicle (UAV); complex environment; path planning; Markov decision process; multi-agent proximal policy optimization (MAPPO); network pruning (NP)
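The framework above extends proximal policy optimization (PPO) to multiple agents and then prunes the policy networks. For background, the clipped surrogate objective that PPO-family methods maximize per agent is shown below; the multi-agent extension, the observation/action/reward design, and the pruning schedule of NP-MAPPO are the paper's and are not reproduced here.

L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t \Bigl[ \min\bigl( r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}_t \bigr) \Bigr], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid o_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid o_t)}

where \hat{A}_t is an advantage estimate and o_t the agent's partial observation of the environment.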
8. A pruning algorithm with L_(1/2) regularizer for extreme learning machine (Cited: 1)
Authors: Ye-tian FAN, Wei WU, Wen-yu YANG, Qin-wei FAN, Jian WANG. Journal of Zhejiang University-Science C (Computers and Electronics) (SCIE, EI), 2014, Issue 2, pp. 119-125 (7 pages)
Compared with traditional learning methods such as the back propagation (BP) method, the extreme learning machine provides much faster learning speed and needs less human intervention, and thus has been widely used. In this paper we combine the L1/2 regularization method with the extreme learning machine to prune the extreme learning machine. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes but provides better performance than both the original network and the network pruned by L2 regularization.
Keywords: Extreme learning machine (ELM); L1/2 regularizer; network pruning
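As an illustration of the combination described above, one common way to write an L1/2-regularized objective over the ELM output weights is shown below; the paper's exact training objective and its variable learning coefficient scheme may differ, so treat this as an assumption.

\min_{\beta}\ \lVert H\beta - T \rVert_2^2 + \lambda \sum_{j=1}^{L} \lvert \beta_j \rvert^{1/2}

where H is the hidden-layer output matrix, T the target matrix, and \beta the output weights; hidden nodes whose weights are driven to (near) zero by the L1/2 term are the ones pruned.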