Image captioning is an emerging research topic in the domain of artificial intelligence (AI). It integrates computer vision (CV) and natural language processing (NLP) to generate image descriptions, and finds use in several application areas, such as recommendation in editing applications and virtual assistants. Developments in NLP and deep learning (DL) models help bridge visual details and textual semantics. In this view, this paper introduces an Oppositional Harris Hawks Optimization with Deep Learning based Image Captioning (OHHO-DLIC) technique. The OHHO-DLIC technique involves several distinct levels of pre-processing. Feature extraction from the images is carried out with the EfficientNet model, and image captioning is performed by a bidirectional long short-term memory (BiLSTM) model comprising an encoder and a decoder. Finally, an oppositional Harris Hawks optimization (OHHO) based hyperparameter tuning process effectively adjusts the hyperparameters of the EfficientNet and BiLSTM models. The experimental analysis of the OHHO-DLIC technique is carried out on the Flickr8k dataset, and a comprehensive comparative analysis highlights its better performance over recent approaches.
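The opposition-based step that distinguishes OHHO from plain Harris Hawks Optimization can be sketched as follows. This is a minimal illustration of opposition-based population initialization only, not the authors' implementation: the function names, the toy sphere objective, and the search bounds are assumptions for the example. For each candidate x in [lb, ub], the opposite point is lb + ub - x; the candidate pool and its opposite pool are merged, and the fitter half is kept.

```python
import random

def opposition_init(objective, lb, ub, pop_size, dim, rng):
    """Opposition-based initialization (a common OHHO-style first step):
    generate a random population, compute each candidate's opposite point,
    and keep the pop_size fittest individuals from the merged pool."""
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    opp = [[lb + ub - x for x in ind] for ind in pop]  # opposite: x' = lb + ub - x
    merged = pop + opp
    merged.sort(key=objective)          # ascending fitness (minimization)
    return merged[:pop_size]            # fitter half survives

def sphere(x):
    """Toy objective (minimize); stands in for validation loss here."""
    return sum(v * v for v in x)

rng = random.Random(42)                 # seeded for reproducibility
pop = opposition_init(sphere, lb=-5.0, ub=5.0, pop_size=10, dim=3, rng=rng)
```

In the paper's setting, `objective` would instead score a hyperparameter vector (e.g. learning rate, batch size) by training the EfficientNet/BiLSTM pipeline and measuring caption quality; the opposition step simply widens the initial search before the usual Harris Hawks exploration and exploitation phases.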
Funding: supported by the Soonchunhyang University Research Fund and the University Innovation Support Project.