Funding: This work was supported by the National Natural Science Foundation of China (No. 62302540, awarded to F.F.S.; https://www.nsfc.gov.cn/), the Open Foundation of the Henan Key Laboratory of Cyberspace Situation Awareness (No. HNTS2022020, awarded to F.F.S.; http://xt.hnkjt.gov.cn/data/pingtai/), the Natural Science Foundation of Henan Province Youth Science Fund Project (No. 232300420422; https://kjt.henan.gov.cn/2022/09-02/2599082.html), and the Natural Science Foundation of Zhongyuan University of Technology (No. K2023QN018, awarded to F.F.S.; https://www.zut.edu.cn/).
Abstract: As social networks become increasingly complex, contemporary fake news often includes textual descriptions of events accompanied by corresponding images or videos. Fake news spanning multiple modalities is more likely to create a misleading perception among users. While early research primarily focused on text-based features for fake news detection, there has been relatively limited exploration of learning shared representations in multimodal (text and visual) contexts. To address these limitations, this paper introduces a multimodal model for detecting fake news that relies on similarity reasoning and adversarial networks. The model employs Bidirectional Encoder Representations from Transformers (BERT) and a Text Convolutional Neural Network (Text-CNN) to extract textual features, and the pre-trained Visual Geometry Group 19-layer network (VGG-19) to extract visual features. The model then establishes similarity representations between the textual features extracted by Text-CNN and the visual features through similarity learning and reasoning. Finally, these features are fused to enhance detection accuracy, and adversarial networks are employed to investigate the relationship between fake news and events. We validate the proposed model on publicly available multimodal datasets from Weibo and Twitter. Experimental results demonstrate that the proposed approach achieves superior performance on Twitter, with an accuracy of 86%, surpassing traditional unimodal models and existing multimodal models. On the Weibo dataset, our model likewise surpasses the benchmark models across multiple metrics. The application of similarity reasoning and adversarial networks thus significantly enhances multimodal fake news detection. However, the current work is limited to fusing only the text and image modalities; future research should integrate features from additional modalities to represent the multifaceted information of fake news more comprehensively.
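The sketch below shows one way the described pipeline could be wired together in PyTorch: BERT token embeddings feed a Text-CNN, a frozen pre-trained VGG-19 supplies visual features, a cosine-similarity score links the two modalities before fusion, and a gradient-reversal branch stands in for the adversarial event discriminator. All layer sizes, the fusion scheme, and the reversal-based adversary follow common practice and are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch, assuming illustrative dimensions throughout:
# BERT + Text-CNN for text, frozen VGG-19 for images, cosine similarity
# linking the modalities, and a gradient-reversal adversarial event head.
import torch
import torch.nn as nn
from torchvision.models import vgg19
from transformers import BertModel


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates and scales gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lambd * grad, None


class MultimodalFakeNewsDetector(nn.Module):
    def __init__(self, hidden=256, n_events=10):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        # Text-CNN: parallel 1-D convolutions over BERT token embeddings.
        self.convs = nn.ModuleList([nn.Conv1d(768, hidden, k) for k in (3, 4, 5)])
        vgg = vgg19(weights="IMAGENET1K_V1")
        vgg.classifier = vgg.classifier[:-1]          # keep the 4096-d fc7 output
        self.vgg = vgg
        for p in self.vgg.parameters():               # VGG-19 stays frozen
            p.requires_grad_(False)
        feat_dim = 3 * hidden
        self.visual_proj = nn.Linear(4096, feat_dim)
        self.news_head = nn.Linear(2 * feat_dim + 1, 2)          # real vs. fake
        self.event_head = nn.Linear(2 * feat_dim + 1, n_events)

    def forward(self, input_ids, attention_mask, image, lambd=1.0):
        tokens = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        t = tokens.transpose(1, 2)                    # (B, 768, seq_len)
        text_feat = torch.cat(
            [torch.relu(c(t)).max(dim=2).values for c in self.convs], dim=1
        )
        with torch.no_grad():
            vis = self.vgg(image)                     # (B, 4096)
        vis_feat = self.visual_proj(vis)
        # Similarity reasoning stands in here as a single cosine score.
        sim = nn.functional.cosine_similarity(text_feat, vis_feat, dim=1)
        fused = torch.cat([text_feat, vis_feat, sim.unsqueeze(1)], dim=1)
        news_logits = self.news_head(fused)
        # Adversarial branch: event classification on reversed gradients.
        event_logits = self.event_head(GradientReversal.apply(fused, lambd))
        return news_logits, event_logits
```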
Funding: This work was supported by the Smart Manufacturing New Model Application Project of the Ministry of Industry and Information Technology (No. ZH-XZ-18004), the Future Research Projects Funds of the Science and Technology Department of Jiangsu Province (No. BY2013015-23), the Fundamental Research Funds for the Ministry of Education (No. JUSRP211A 41), the Fundamental Research Funds for the Central Universities (No. JUSRP42003), and the 111 Project (No. B2018).
Abstract: Existing identity-recognition technologies require assistive equipment, yet they offer poor recognition accuracy and are expensive. To overcome these deficiencies, this paper proposes several gait-based identification algorithms. First, gait information collected from individuals via the triaxial accelerometers in smartphones is preprocessed and fused with existing standard datasets to yield a multimodal synthetic dataset. Then, drawing on the multimodal characteristics of the collected gait information, a Convolutional Neural Network based Gait Recognition (CNN-GR) model and a corresponding scheme for the multimodal features are developed. Finally, based on the proposed CNN-GR model and scheme, a unimodal single-gait-feature identification algorithm and a multimodal gait-feature-fusion identification algorithm are proposed. Experimental results show that the proposed algorithms perform well in recognition accuracy, the confusion matrix, and the kappa statistic, with better recognition scores and robustness than the compared algorithms; the proposed approach therefore shows prominent promise in practice.
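As an illustration of the kind of model the abstract describes, here is a minimal PyTorch sketch of a CNN classifying fixed-length windows of triaxial accelerometer data by subject; the window length, sampling rate, layer sizes, and subject count are assumptions, not the paper's CNN-GR configuration.

```python
# A minimal sketch of a CNN over fixed-length triaxial accelerometer windows;
# all hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


class CNNGR(nn.Module):
    def __init__(self, n_subjects=20, window=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2),   # 3 = x/y/z axes
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(64 * (window // 4), 128),
            nn.ReLU(),
            nn.Linear(128, n_subjects),                   # one class per person
        )

    def forward(self, x):                                 # x: (batch, 3, window)
        return self.net(x)


# Dummy batch of 2-second windows at an assumed 64 Hz sampling rate.
logits = CNNGR()(torch.randn(8, 3, 128))                  # -> (8, 20) subject scores
```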
Abstract: Because feature data in multimodal remote sensing images belong to multiple modes and are complementary to each other, traditional single-mode analysis and processing cannot effectively fuse data from different modes or express the correlations between them. To solve this problem and better fuse the different modal data and the relationships among their features, this paper proposes a method that fuses multimodal spectral characteristics and radar remote sensing images along the spatial dimension, in the form of vectors or matrices, for effective integration, and trains an SVM model on the fused features. Experimental results show that the method, based on band selection and multi-mode feature fusion, effectively improves the robustness of remote sensing image features. Compared with other methods, the fusion method achieves higher classification accuracy and a better classification effect.
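A minimal scikit-learn sketch of the described idea follows: per-pixel spectral-band features and radar features are concatenated along the feature dimension (vector-form fusion) and an SVM is trained on the result. The band counts, class count, and synthetic data are placeholders, not the paper's datasets.

```python
# A minimal sketch of vector-form multimodal fusion followed by SVM training;
# the feature counts and random data are placeholder assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_spectral, n_radar = 1000, 10, 4
spectral = rng.normal(size=(n_pixels, n_spectral))   # selected spectral bands
radar = rng.normal(size=(n_pixels, n_radar))         # radar backscatter features
labels = rng.integers(0, 3, size=n_pixels)           # 3 land-cover classes

fused = np.hstack([spectral, radar])                 # per-pixel vector fusion
X_train, X_test, y_train, y_test = train_test_split(fused, labels, test_size=0.3)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```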
Funding: This work was supported by the National Natural Science Foundation of China under Grant No. 62072173, the Natural Science Foundation of Hunan Province of China under Grant No. 2020JJ4432, the Key Scientific Research Projects of the Department of Education of Hunan Province under Grant No. 20A296, the Degree and Postgraduate Education Reform Project of Hunan Province under Grant No. 2019JGYB091, the Hunan Provincial Science and Technology Project Foundation under Grant No. 2018TP1018, and the Innovation and Entrepreneurship Training Program of Hunan Xiangjiang Artificial Intelligence Academy.
Abstract: As an emerging research field of brain science, multimodal data fusion analysis has attracted broad attention in the study of complex brain diseases such as Parkinson's disease (PD). However, current studies primarily focus on detecting associations among different modal data and reducing data attributes, while the data mining method applied after fusion and the overall analysis framework are neglected. In this study, we propose a weighted random forest (WRF) model as the feature-screening classifier. Interactions between genes and brain regions, detected by correlation analysis, serve as the input multimodal fusion features. We implement sample classification and optimal feature selection based on WRF and construct a multimodal analysis framework for exploring the pathogenic factors of PD. Experimental results on the Parkinson's Progression Markers Initiative (PPMI) database show that WRF performs better than several advanced methods and detects the brain regions and genes related to PD. Fusing multimodal data improves the classification of PD patients and detects pathogenic factors more comprehensively, providing a novel perspective for the diagnosis and research of PD. We also show the great potential of WRF for multimodal data fusion analysis of other brain diseases.
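One common reading of a weighted random forest, sketched below with scikit-learn, is a class-weighted forest used as a feature screen: fused gene-brain-region interaction features go in, and impurity importances rank candidate pathogenic factors. The weighting scheme, data, and top-k threshold here are assumptions; the paper's WRF may weight trees or samples differently.

```python
# A minimal sketch of a class-weighted random forest as a feature screen;
# the weighting choice, synthetic data, and top-k cutoff are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_subjects, n_features = 200, 50
X = rng.normal(size=(n_subjects, n_features))   # fused interaction features
y = rng.integers(0, 2, size=n_subjects)         # PD patient vs. healthy control

forest = RandomForestClassifier(
    n_estimators=500, class_weight="balanced", random_state=0
).fit(X, y)

# Keep the top-k features as the screened pathogenic-factor candidates.
top_k = 10
selected = np.argsort(forest.feature_importances_)[::-1][:top_k]
print("selected feature indices:", selected)
```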