Abstract: Polyethylene terephthalate (PET) and polyethylene (PE) fibers were surface photo-grafted with acrylic acid (AA) by UV-irradiation photochemical initiation during a continuous winding process within 1-2 minutes. The grafted fibers were characterized by measurements of dye uptake, moisture regain, and pull-out forces of monofilament from a cured matrix, as well as by analysis of ESCA and ATR-FTIR spectra. All these results confirm that the surface behavior of the UV-irradiation-grafted fibers was greatly improved. It was also shown that the original excellent mechanical properties of the fibers were well retained after the surface grafting treatment.
Funding: supported by the National Natural Science Foundation of China (Nos. 62176254, 61976210, 61876086, 62076235, 62002356, 62006230 and 62002357) and the National Key R&D Program of China (No. 2021ZD0110403).
Abstract: Structural neural network pruning aims to remove redundant channels in deep convolutional neural networks (CNNs) by pruning the filters of less importance to the final output accuracy. To reduce the degradation of performance after pruning, many methods utilize a loss with sparse regularization to produce structured sparsity. In this paper, we analyze these sparsity-training-based methods and find that the regularization of unpruned channels is unnecessary. Moreover, it restricts the network's capacity, which leads to under-fitting. To solve this problem, we propose a novel pruning method, named Mask Sparsity, with pruning-aware sparse regularization. Mask Sparsity imposes fine-grained sparse regularization on the specific filters selected by a pruning mask, rather than on all the filters of the model. Before the fine-grained sparse regularization of Mask Sparsity, many methods can be used to obtain the pruning mask, such as running global sparse regularization. Mask Sparsity achieves a 63.03% floating point operations (FLOPs) reduction on ResNet-110 by removing 60.34% of the parameters, with no top-1 accuracy loss on CIFAR-10. On ILSVRC-2012, Mask Sparsity reduces more than 51.07% of FLOPs on ResNet-50, with only a loss of 0.76% in top-1 accuracy. The code of this paper is released at https://github.com/CASIA-IVA-Lab/MaskSparsity. We have also integrated the code into a self-developed PyTorch pruning toolkit, named EasyPruner, at https://gitee.com/casia_iva_engineer/easypruner.
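The core idea of the abstract, imposing sparse regularization only on the channels a pruning mask has already selected while leaving unpruned channels untouched, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name `mask_sparsity_penalty`, the use of an L1 penalty on batch-norm scale factors as the channel-importance proxy, and the penalty strength are all assumptions chosen for clarity.

```python
# Hedged sketch of pruning-aware (masked) sparse regularization.
# Assumption: channel importance is carried by the BatchNorm scale factors
# (gamma), a common convention in channel-pruning work; only channels marked
# True in `prune_mask` are regularized, so unpruned channels keep full capacity.
import torch
import torch.nn as nn


def mask_sparsity_penalty(bn: nn.BatchNorm2d, prune_mask: torch.Tensor,
                          strength: float = 1e-4) -> torch.Tensor:
    """L1 penalty on the BN scale factors of channels selected for pruning.

    bn         : the BatchNorm2d layer whose channels may be pruned
    prune_mask : bool tensor, one entry per channel; True = marked for pruning
    strength   : regularization coefficient (hypothetical default)
    """
    gamma = bn.weight  # one learnable scale factor per output channel
    # Penalize only the masked channels; unpruned channels are unregularized.
    return strength * gamma[prune_mask].abs().sum()


if __name__ == "__main__":
    bn = nn.BatchNorm2d(8)
    mask = torch.zeros(8, dtype=torch.bool)
    mask[:5] = True  # suppose a prior global-sparsity pass marked 5 of 8 channels
    penalty = mask_sparsity_penalty(bn, mask)
    # In training, this term would simply be added to the task loss before
    # loss.backward(), pushing only the masked channels' gammas toward zero.
    print(penalty.item())
```

During training, the penalty is added to the task loss each step; after convergence, the masked channels' gamma values shrink toward zero, so removing those filters causes little accuracy loss.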