
An Optimized Deep Residual Network with a Depth Concatenated Block for Handwritten Characters Classification (Cited by: 3)

Abstract: Even though many advances have been made in the recognition of handwritten characters, researchers still face difficulties with the handwritten character recognition problem, especially with the advent of new datasets such as the Extended Modified National Institute of Standards and Technology dataset (EMNIST). The EMNIST dataset is challenging for both machine-learning and deep-learning techniques because of inter-class similarity and intra-class variability. Inter-class similarity arises from the similarity between the shapes of certain characters in the dataset. Intra-class variability is mainly due to the different shapes written by different writers for the same character. In this research, we have optimized a deep residual network to achieve higher accuracy than the published state-of-the-art results. The approach is mainly based on the prebuilt deep residual network model ResNet18, whose architecture has been enhanced by using the optimal number of residual blocks and the optimal size of the receptive field of the first convolutional filter, replacing the first max-pooling filter with an average-pooling filter, and adding a dropout layer before the fully connected layer. A distinctive modification has been introduced by replacing the final addition layer with a depth concatenation layer, which results in a novel deep architecture with higher accuracy than the pure residual architecture. Moreover, the sizes of the dataset images have been adjusted to optimize their visibility in the network. Finally, by tuning the training hyperparameters and using rotation and shear augmentations, the proposed model outperformed the state-of-the-art models, achieving average accuracies of 95.91% and 90.90% for the Letters and Balanced dataset sections, respectively. Furthermore, the average accuracies were improved to 95.9% and 91.06% for the Letters and Balanced sections, respectively, by using a group of 5 instances of the trained models and averaging the output class probabilities.
Source: Computers, Materials & Continua (SCIE, EI), 2021, Issue 7, pp. 1-28 (28 pages)
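
The architectural changes described in the abstract can be sketched compactly. The code below is a minimal PyTorch approximation, not the authors' implementation (the paper builds on the prebuilt ResNet18 model; the class names, filter sizes, channel counts, and dropout rate here are illustrative assumptions): a residual-style block whose skip connection is merged by depth concatenation instead of element-wise addition, average pooling in place of the first max pooling, a dropout layer before the fully connected classifier, and averaging of output class probabilities over several trained instances.

```python
# Minimal PyTorch sketch of the reported modifications; not the authors' code.
# Names, filter sizes, channel counts, and the dropout rate are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConcatBlock(nn.Module):
    """Residual-style block whose skip connection is merged by depth
    (channel) concatenation instead of element-wise addition."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Depth concatenation replaces the usual `out + x` addition,
        # so the block output carries 2 * channels feature maps.
        return F.relu(torch.cat([out, x], dim=1))

class SmallConcatNet(nn.Module):
    """Toy network illustrating the abstract's changes: average pooling after
    the first convolution instead of max pooling, a depth-concatenated block,
    and dropout before the fully connected classifier."""
    def __init__(self, num_classes=47):  # EMNIST Balanced has 47 classes
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, stride=1, padding=2, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.AvgPool2d(kernel_size=2),   # average pooling replaces max pooling
        )
        self.block = ConcatBlock(64)       # outputs 128 feature maps
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Dropout(p=0.5),             # dropout before the fully connected layer
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.head(self.block(self.stem(x)))

def ensemble_predict(models, x):
    """Average the softmax class probabilities of several trained instances
    (models are assumed to be in eval() mode) and return the predicted class."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)
```

Note that, unlike addition, depth concatenation doubles the number of feature maps leaving the block, so the layer that follows it must be sized for the combined channel count.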