Journal Articles — 5 articles found
1. An Optimized Deep Residual Network with a Depth Concatenated Block for Handwritten Characters Classification (Cited by 3)
Authors: Gibrael Abosamra, Hadi Oqaibi. Computers, Materials & Continua (SCIE, EI), 2021, Issue 7, pp. 1-28.
Although many advances have been made in the recognition of handwritten characters, the problem remains difficult, especially with the advent of new datasets such as the Extended Modified National Institute of Standards and Technology dataset (EMNIST). EMNIST challenges both machine-learning and deep-learning techniques because of its inter-class similarity and intra-class variability. Inter-class similarity arises from the similar shapes of certain characters in the dataset, while intra-class variability stems mainly from different writers producing different shapes for the same character. In this research, we optimized a deep residual network to achieve higher accuracy than the published state-of-the-art results. The approach builds on the prebuilt ResNet18 model, whose architecture was enhanced by choosing the optimal number of residual blocks and the optimal receptive-field size of the first convolutional filter, replacing the first max-pooling filter with an average-pooling filter, and adding a drop-out layer before the fully connected layer. A distinctive modification replaces the final addition layer with a depth concatenation layer, yielding a novel deep architecture with higher accuracy than the pure residual architecture. Moreover, the dataset images' sizes were adjusted to optimize their visibility in the network. Finally, by tuning the training hyperparameters and using rotation and shear augmentations, the proposed model outperformed the state-of-the-art models, achieving average accuracies of 95.91% and 90.90% for the Letters and Balanced dataset sections, respectively. Furthermore, the average accuracies improved to 95.9% and 91.06% for the Letters and Balanced sections, respectively, by using a group of 5 instances of the trained models and averaging the output class probabilities.
Keywords: handwritten character classification; deep convolutional neural networks; residual networks; GoogLeNet; ResNet18; DenseNet; drop-out; L2 regularization factor; learning rate
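The distinctive modification above, swapping the residual block's final element-wise addition for a depth concatenation, can be illustrated with a minimal NumPy sketch. The array shapes and merge functions here are hypothetical stand-ins for the paper's actual convolutional layers:

```python
import numpy as np

def residual_merge(x, fx):
    # Standard ResNet merge: element-wise addition of the skip path x
    # and the transformed path fx; the channel count is unchanged.
    return x + fx

def concat_merge(x, fx):
    # The variant described in the abstract: stack the two paths along
    # the channel axis, DenseNet-style, so later layers see both
    # representations instead of their sum.
    return np.concatenate([x, fx], axis=0)  # axis 0 = channels here

# Hypothetical 8-channel 4x4 feature maps standing in for conv outputs.
x = np.random.rand(8, 4, 4)
fx = np.random.rand(8, 4, 4)

print(residual_merge(x, fx).shape)  # (8, 4, 4)
print(concat_merge(x, fx).shape)    # (16, 4, 4)
```

The doubled channel count is why the paper calls the result a different architecture rather than a tuned ResNet: the merge changes the tensor shape that every subsequent layer consumes.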
2. Generating Cartoon Images from Face Photos with Cycle-Consistent Adversarial Networks (Cited by 1)
Authors: Tao Zhang, Zhanjie Zhang, Wenjing Jia, Xiangjian He, Jie Yang. Computers, Materials & Continua (SCIE, EI), 2021, Issue 11, pp. 2733-2747.
The generative adversarial network (GAN) was first proposed in 2014; this kind of model is a machine-learning system that learns to mimic a given distribution of data, and one of its most important applications is style transfer. Style transfer is a class of vision and graphics problems whose goal is to learn the mapping between an input image and an output image. CycleGAN is a classic GAN model that is widely applicable in style-transfer scenarios. Given its unsupervised-learning character, the mapping between an input image and an output image is easy to learn. However, CycleGAN is difficult to train to convergence and to make generate high-quality images. To solve this problem, spectral normalization is introduced into each convolutional kernel of the discriminator. With spectral normalization, every convolutional kernel satisfies a Lipschitz stability constraint and the kernel's norm is limited to [0,1], which promotes the training process of the proposed model. Besides, a pretrained model (VGG16) is used to control the loss of image content at the position of the l1 regularization term. To avoid overfitting, both l1 and l2 regularization terms are used in the objective loss function. In terms of Frechet Inception Distance (FID) score, the proposed model achieves outstanding performance and preserves more discriminative features. Experimental results show that the proposed model converges faster and achieves better FID scores than the state of the art.
Keywords: generative adversarial network; spectral normalization; Lipschitz stability constraint; VGG16; L1 regularization term; L2 regularization term; Frechet Inception Distance
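The spectral normalization described above can be sketched with a power-iteration estimate of a weight matrix's largest singular value. This is a simplified NumPy stand-in, not the paper's implementation; production versions (e.g. in GAN discriminators) reuse the power-iteration vectors across training steps rather than iterating to convergence each time:

```python
import numpy as np

def spectral_normalize(W, n_iter=50):
    # Estimate the largest singular value sigma of W by power iteration,
    # then divide it out so the normalized matrix has spectral norm ~1,
    # making the corresponding linear layer approximately 1-Lipschitz.
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v
    return W / sigma

W = np.random.default_rng(1).standard_normal((16, 8))
W_sn = spectral_normalize(W)
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # ~1.0
```

Bounding every discriminator layer's spectral norm caps the Lipschitz constant of the whole discriminator, which is the stabilizing effect the abstract attributes to the technique.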
3. Towards Securing Machine Learning Models Against Membership Inference Attacks
Authors: Sana Ben Hamida, Hichem Mrabet, Sana Belguith, Adeeb Alhomoud, Abderrazak Jemai. Computers, Materials & Continua (SCIE, EI), 2022, Issue 3, pp. 4897-4919.
From fraud detection to speech recognition to price prediction, Machine Learning (ML) applications are manifold and can significantly improve many areas. Nevertheless, ML models are vulnerable and exposed to various security and privacy attacks, so these issues must be addressed when using ML models to preserve the security and privacy of the data involved. There is a need to secure ML models, especially in the training phase, to preserve the privacy of the training datasets and to minimise information leakage. In this paper, we present an overview of ML threats and vulnerabilities, and we highlight current progress in research proposing defence techniques against ML security and privacy attacks. The relevant background for attacks occurring in both the training and testing/inference phases is introduced before a detailed overview of Membership Inference Attacks (MIA) and the related countermeasures. We then introduce a countermeasure against MIA on Convolutional Neural Networks (CNN) based on dropout and L2 regularization. Through experimental analysis, we demonstrate that this defence technique can mitigate the risks of MIA while maintaining acceptable model accuracy. Indeed, training a CNN model on two datasets, CIFAR-10 and CIFAR-100, we empirically verify the ability of our defence strategy to decrease the impact of MIA, and we compare the results of five different classifiers. Moreover, we present a solution to achieve a trade-off between model performance and MIA mitigation.
Keywords: machine learning; security and privacy; defence techniques; membership inference attacks; dropout; L2 regularization
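Both defences named above reduce overfitting, which is the main signal membership inference exploits. A minimal NumPy sketch of the two mechanisms (the function names, weights, and constants are illustrative, not the paper's setup):

```python
import numpy as np

def l2_penalized_loss(task_loss, weights, lam=1e-3):
    # L2 regularization: add lam * sum ||W||_2^2 to the task loss,
    # discouraging the large weights that let a model memorize
    # (and thus leak membership of) individual training examples.
    return task_loss + lam * sum(float(np.sum(W ** 2)) for W in weights)

def inverted_dropout(h, p=0.5, rng=None):
    # Dropout: zero each activation with probability p at training time
    # and rescale survivors by 1/(1-p) so the expected value is unchanged.
    rng = rng or np.random.default_rng(0)
    mask = (rng.random(h.shape) >= p).astype(h.dtype)
    return h * mask / (1.0 - p)

weights = [np.ones((4, 4)), np.ones((4, 2))]
print(round(l2_penalized_loss(1.0, weights, lam=0.01), 2))  # 1.24
h = np.ones(1000)
print(inverted_dropout(h).mean())  # close to 1.0 on average
```

At inference time dropout is disabled, so the deployed model's confidence gap between members and non-members, the quantity an MIA classifier thresholds on, is narrowed by the noisier training.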
4. A Sharp Nonasymptotic Bound and Phase Diagram of L1/2 Regularization (Cited by 1)
Authors: Hai Zhang, Zong Ben Xu, Yao Wang, Xiang Yu Chang, Yong Liang. Acta Mathematica Sinica, English Series (SCIE, CSCD), 2014, Issue 7, pp. 1242-1258.
We derive a sharp nonasymptotic bound on parameter estimation for L1/2 regularization. The bound shows that the solutions of L1/2 regularization can achieve a loss within a logarithmic factor of the ideal mean squared error, which underlies the feasibility and effectiveness of L1/2 regularization. Interestingly, when applied to compressive sensing, the L1/2 regularization scheme has exhibited a very promising capability of complete recovery from much less sampling information. Compared with the Lp (0 < p < 1) penalties, the L1/2 penalty always yields the sparsest solution among all Lp penalties with 1/2 < p < 1, while for 0 < p < 1/2 the Lp penalty exhibits properties similar to the L1/2 penalty. This suggests that the L1/2 regularization scheme can be accepted as the best, and therefore the representative, of all the Lp (0 < p < 1) regularization schemes.
Keywords: L1/2 regularization; phase diagram; compressive sensing
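The sparsity claim can be seen on a one-dimensional toy problem: minimizing 0.5(w - y)^2 + lam·|w|^p. The brute-force grid search below is a numerical stand-in for the penalty's proximal operator (the values of y and lam are arbitrary illustrations, not from the paper):

```python
import numpy as np

def prox_grid(y, lam, p, grid=None):
    # Numerically minimize 0.5*(w - y)^2 + lam*|w|**p over a dense grid:
    # a brute-force approximation of the Lp penalty's proximal operator.
    if grid is None:
        grid = np.linspace(-3.0, 3.0, 120001)  # step 5e-5
    obj = 0.5 * (grid - y) ** 2 + lam * np.abs(grid) ** p
    return grid[np.argmin(obj)]

# For a small input, the L1/2 penalty snaps the solution to exactly 0
# (sparsity), whereas the L2 penalty only shrinks it toward 0.
print(round(abs(prox_grid(0.5, lam=0.5, p=0.5)), 6))   # 0.0
print(round(prox_grid(0.5, lam=0.5, p=2.0), 6))        # 0.25, i.e. y/(1+2*lam)
```

This thresholding-to-zero behaviour, absent under L2, is what makes the Lp (p < 1) family attractive for sparse recovery in compressive sensing.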
5. A Pruning Algorithm with L1/2 Regularizer for Extreme Learning Machine (Cited by 1)
Authors: Ye-tian Fan, Wei Wu, Wen-yu Yang, Qin-wei Fan, Jian Wang. Journal of Zhejiang University-Science C (Computers and Electronics) (SCIE, EI), 2014, Issue 2, pp. 119-125.
Compared with traditional learning methods such as the back propagation (BP) method, the extreme learning machine provides much faster learning and needs less human intervention, and thus has been widely used. In this paper we combine the L1/2 regularization method with the extreme learning machine to prune it. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes but provides better performance than both the original network and a network pruned by L2 regularization.
Keywords: extreme learning machine (ELM); L1/2 regularizer; network pruning
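A minimal NumPy sketch of an ELM plus magnitude-based pruning. The magnitude rule is a rough stand-in for the paper's L1/2-driven pruning (which drives output weights to zero during training), and the data, layer sizes, and keep-10 rule are all hypothetical:

```python
import numpy as np

def train_elm(X, y, n_hidden=30, rng=None):
    # ELM: input weights W and biases b are random and never trained;
    # only the output weights beta are solved, in closed form, by
    # least squares on the hidden activations H.
    rng = rng or np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def prune(W, b, beta, keep=10):
    # Keep only the hidden nodes with the largest |beta|; nodes whose
    # output weight is (near) zero contribute little and are dropped.
    idx = np.argsort(-np.abs(beta))[:keep]
    return W[:, idx], b[idx], beta[idx]

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
W, b, beta = train_elm(X, y)
Wp, bp, beta_p = prune(W, b, beta, keep=10)
print(len(beta), len(beta_p))  # 30 10
```

In practice the surviving output weights would be re-solved by least squares on the pruned hidden layer; the sketch omits that refit step for brevity.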