Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R161), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4310373DSR33).
Abstract: Recent developments in Multimedia Internet of Things (MIoT) devices, empowered with Natural Language Processing (NLP) models, point to a promising future for smart devices. NLP plays an important role in industrial applications such as speech understanding, emotion detection, home automation, and so on. If an image needs to be captioned, then the objects in that image, their actions and connections, and any salient feature that remains under-projected or missing from the image should be identified. The aim of the image captioning process is to generate a caption for an image. In the next step, the image should be provided with one of the most significant and detailed descriptions, one that is both syntactically and semantically correct. In this scenario, a computer vision model is used to identify the objects, and NLP approaches are followed to describe the image. The current study develops a Natural Language Processing with Optimal Deep Learning Enabled Intelligent Image Captioning System (NLPODL-IICS). The aim of the presented NLPODL-IICS model is to produce a proper description for an input image. To attain this, the proposed NLPODL-IICS follows two stages: encoding and decoding. Initially, on the encoding side, the proposed NLPODL-IICS model makes use of Hunger Games Search (HGS) with the Neural Search Architecture Network (NASNet) model, which represents the input data appropriately by embedding it into a fixed-length vector. During the decoding phase, the Chimp Optimization Algorithm (COA) with a deeper Long Short-Term Memory (LSTM) approach is followed to concatenate the description sentences produced by the method. The application of the HGS and COA algorithms helps accomplish proper parameter tuning for the NASNet and LSTM models, respectively. The proposed NLPODL-IICS model was experimentally validated with the help of two benchmark datasets. A widespread comparative analysis confirmed the superior performance of the NLPODL-IICS model over other models. (CMC, 2023, vol. 74, no. 2)
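The encode-then-decode pipeline this abstract describes can be sketched in a few lines: a convolutional encoder compresses the image into a fixed-length vector, and an LSTM decoder emits one token per step. This is a minimal illustrative sketch with untrained random weights and toy dimensions; it is not the authors' NASNet/LSTM implementation, and the HGS/COA hyperparameter tuning is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT, HID, VOCAB = 16, 32, 10  # toy sizes; the paper does not specify dimensions

def encode(image, dim=FEAT):
    """Stand-in for the NASNet encoder: project any image to a fixed-length vector."""
    flat = image.reshape(-1).astype(float)
    W = rng.standard_normal((dim, flat.size)) * 0.01
    return np.tanh(W @ flat)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate pre-activations stacked as [input, forget, cell, output]."""
    i, f, g, o = np.split(W @ x + U @ h + b, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def greedy_decode(feat, steps=5):
    """Feed the image vector at t=0, then the embedding of the previous token."""
    W = rng.standard_normal((4 * HID, FEAT)) * 0.1
    U = rng.standard_normal((4 * HID, HID)) * 0.1
    b = np.zeros(4 * HID)
    W_out = rng.standard_normal((VOCAB, HID)) * 0.1  # hidden state -> vocab logits
    embed = rng.standard_normal((VOCAB, FEAT)) * 0.1
    h, c = np.zeros(HID), np.zeros(HID)
    x, tokens = feat, []
    for _ in range(steps):
        h, c = lstm_step(x, h, c, W, U, b)
        tok = int(np.argmax(W_out @ h))  # greedy choice over the toy vocabulary
        tokens.append(tok)
        x = embed[tok]
    return tokens

caption = greedy_decode(encode(rng.standard_normal((8, 8, 3))))
print(caption)  # a list of five token ids from the toy vocabulary
```

In the paper's setting, HGS and COA would search over the encoder's and decoder's hyperparameters rather than leaving the weights random as done here.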
Abstract: Cross-modal image-text retrieval is the task of retrieving items of one modality (e.g., images) given a query in another modality (e.g., text). The key issue of this task is how to accurately measure the similarity between the image and text modalities, which plays a vital role in reducing the visual-semantic discrepancy between the heterogeneous modalities of vision and language. The traditional retrieval paradigm relies on deep learning to extract feature representations of images and texts and maps them into a common representation space for matching. However, this approach depends largely on surface-level correlations in the data and cannot mine the true causal relations behind them, and thus faces challenges in the representation and interpretability of high-level semantic information. To this end, this paper introduces causal inference and embedded consensus knowledge on top of deep learning and proposes a causal image-text retrieval method with embedded consensus knowledge. Specifically, causal intervention is introduced into the visual feature extraction module: commonsense causal visual features are learned by replacing correlations with causal relations and are concatenated with the original visual features to obtain the final visual representation. To address the insufficient text representation of this method, the more powerful text feature extraction model BERT (Bidirectional Encoder Representations from Transformers) is adopted, and consensus knowledge shared between the two modalities is embedded to perform consensus-level representation learning on the image and text features. Experiments on the MS-COCO dataset, as well as cross-dataset experiments from MS-COCO to Flickr30k, demonstrate that the proposed method achieves consistent improvements in recall and mean recall on bidirectional image-text retrieval tasks.
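The common-space matching step that underlies this retrieval paradigm can be sketched as follows: embed both modalities, compute a cosine-similarity matrix, and score retrieval with recall@K. Everything here is an illustrative stand-in; the fake "embeddings" are perturbed shared latents, not outputs of the paper's causal visual branch or of BERT.

```python
import numpy as np

rng = np.random.default_rng(1)

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Toy embeddings: in the actual method these would come from the causal visual
# branch (concatenated with the original visual features) and from BERT; here we
# fake five matched image-text pairs by perturbing shared latent vectors.
latent = rng.standard_normal((5, 64))
img_emb = l2norm(latent + 0.1 * rng.standard_normal((5, 64)))
txt_emb = l2norm(latent + 0.1 * rng.standard_normal((5, 64)))

sim = img_emb @ txt_emb.T  # cosine similarity matrix, rows = images, cols = texts

def recall_at_k(sim, k=1):
    """Fraction of queries whose ground-truth match (the diagonal) is in the top k."""
    ranks = np.argsort(-sim, axis=1)
    return float(np.mean([i in ranks[i, :k] for i in range(sim.shape[0])]))

r1_i2t = recall_at_k(sim, k=1)    # image -> text retrieval
r1_t2i = recall_at_k(sim.T, k=1)  # text -> image retrieval
print(r1_i2t, r1_t2i)
```

The paper's contribution sits upstream of this matching step: the causal intervention and consensus-knowledge embedding change how `img_emb` and `txt_emb` are produced, while the bidirectional recall evaluation stays as sketched.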
Funding: State Key Program of the National Natural Science Foundation of China, Grant/Award Number: 61533018; National Natural Science Foundation of China, Grant/Award Number: 61402220; Philosophy and Social Science Foundation of Hunan Province, Grant/Award Number: 16YBA323; Natural Science Foundation of Hunan Province, Grant/Award Number: 2020JJ4525; Scientific Research Fund of Hunan Provincial Education Department, Grant/Award Numbers: 18B279, 19A439.
Abstract: At present, the entity and relation joint extraction task has attracted more and more scholars' attention in the field of natural language processing (NLP). However, most methods rely on NLP tools to construct dependency trees to obtain sentence structure information. The adjacency matrix constructed from the dependency tree can convey syntactic information, but dependency trees obtained through NLP tools are too dependent on those tools and may not describe contextual semantics accurately. At the same time, a large amount of irrelevant information causes redundancy. This paper presents a novel end-to-end entity and relation joint extraction model based on a multi-head attention graph convolutional network (MAGCN), which does not rely on external tools. MAGCN generates an adjacency matrix through a multi-head attention mechanism to form an attention graph convolutional network, uses head selection to identify multiple relations, and effectively improves the prediction of overlapping relations. The authors experiment extensively and prove the method's effectiveness on three public datasets: NYT, WebNLG, and CoNLL04. The results show that the authors' method outperforms state-of-the-art results on the entity and relation extraction task.
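The core idea, replacing a parser-produced dependency tree with an attention-derived soft adjacency matrix fed to a GCN, can be sketched as below. All dimensions, the number of heads, and the random untrained weights are illustrative assumptions, not MAGCN's actual architecture or its head-selection relation decoder.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, HEADS = 6, 16, 4  # illustrative: 6 tokens, 16-dim representations, 4 heads

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

X = rng.standard_normal((N, D))  # token representations (e.g. from a sentence encoder)

def attention_adjacency(X, heads=HEADS):
    """Build a soft adjacency matrix: each head scores every token pair, heads are averaged."""
    d_k = D // heads
    A = np.zeros((N, N))
    for _ in range(heads):
        Wq = rng.standard_normal((D, d_k)) * 0.1
        Wk = rng.standard_normal((D, d_k)) * 0.1
        scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d_k)  # scaled dot-product attention
        A += softmax(scores, axis=1)
    return A / heads  # each row sums to 1: a soft dependency graph, no parser needed

def gcn_layer(X, A):
    """One graph-convolution layer over the attention-derived adjacency: ReLU(A X W)."""
    Wg = rng.standard_normal((D, D)) * 0.1
    return np.maximum(A @ X @ Wg, 0.0)

A = attention_adjacency(X)
H = gcn_layer(X, A)
print(A.shape, H.shape)  # (6, 6) (6, 16)
```

Because the adjacency is learned end-to-end rather than fixed by an external parser, errors in tool-produced dependency trees cannot propagate into the graph convolution, which is the motivation the abstract gives for MAGCN.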