Journal Articles
5 articles found
1. TGNet: Intelligent Identification of Thunderstorm Wind Gusts Using Multimodal Fusion
Authors: Xiaowen ZHANG, Yongguang ZHENG, Hengde ZHANG, Jie SHENG, Bingjian LU, Shuo FENG. Advances in Atmospheric Sciences, 2025, Issue 1, pp. 146-164 (19 pages).
Thunderstorm wind gusts are small in scale, typically occurring within a range of a few kilometers. It is extremely challenging to monitor and forecast thunderstorm wind gusts using only automatic weather stations. Therefore, it is necessary to establish thunderstorm wind gust identification techniques based on multisource high-resolution observations. This paper introduces a new algorithm, the thunderstorm wind gust identification network (TGNet), which leverages multimodal feature fusion to combine the temporal and spatial features of thunderstorm wind gust events. The shapelet transform is first used to extract temporal features of wind speed from automatic weather stations, with the aim of distinguishing thunderstorm wind gusts from gusts caused by synoptic-scale systems or typhoons. Then, an encoder built on the U-shaped network (U-Net) with recurrent residual convolutional blocks (R2U-Net) is employed to extract the corresponding spatial convective characteristics from satellite, radar, and lightning observations. Finally, a multimodal deep fusion module based on multi-head cross-attention incorporates the temporal wind-speed features at each automatic weather station into the spatial features to produce classifications of thunderstorm wind gusts every 10 minutes. TGNet products have high accuracy, with a critical success index of 0.77. Compared with U-Net and R2U-Net, the false alarm rate of TGNet products decreases by 31.28% and 24.15%, respectively. The new algorithm provides gridded thunderstorm wind gust products with a spatial resolution of 0.01°, updated every 10 minutes. The results are finer and more accurate, helping to improve the accuracy of operational warnings for thunderstorm wind gusts.
Keywords: thunderstorm wind gusts; shapelet transform; multimodal deep feature fusion
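The fusion step described in the abstract, in which per-station temporal features are injected into gridded spatial features via multi-head cross-attention, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the feature dimensions, the number of heads, the use of spatial features as queries and station features as keys/values, and the random matrices standing in for learned projection weights are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_cross_attention(spatial, temporal, num_heads=4, rng=None):
    """Fuse station temporal features into spatial grid features.

    spatial : (N, d) queries -- grid-cell features (e.g. from a CNN encoder)
    temporal: (M, d) keys/values -- per-station wind-speed features
    """
    rng = np.random.default_rng(0) if rng is None else rng
    N, d = spatial.shape
    assert d % num_heads == 0
    dh = d // num_heads
    # Random projections stand in for learned parameters.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q = (spatial @ Wq).reshape(N, num_heads, dh).transpose(1, 0, 2)
    K = (temporal @ Wk).reshape(-1, num_heads, dh).transpose(1, 0, 2)
    V = (temporal @ Wv).reshape(-1, num_heads, dh).transpose(1, 0, 2)
    attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(dh))  # (heads, N, M)
    out = (attn @ V).transpose(1, 0, 2).reshape(N, d)
    return spatial + out  # residual connection keeps the spatial signal

grid = np.random.default_rng(1).standard_normal((100, 32))    # 100 grid cells
stations = np.random.default_rng(2).standard_normal((8, 32))  # 8 stations
fused = multi_head_cross_attention(grid, stations)
print(fused.shape)  # (100, 32)
```

Each grid cell attends over all stations, so a cell near a station with a gust-like wind-speed signature can weight that station's temporal evidence heavily.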
2. Multimodal Social Media Fake News Detection Based on Similarity Inference and Adversarial Networks (cited by 1)
Authors: Fangfang Shan, Huifang Sun, Mengyi Wang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 581-605 (25 pages).
As social networks become increasingly complex, contemporary fake news often includes textual descriptions of events accompanied by corresponding images or videos. Fake news in multiple modalities is more likely to create a misleading perception among users. While early research primarily focused on text-based features for fake news detection, there has been relatively limited exploration of learning shared representations in multimodal (text and visual) contexts. To address these limitations, this paper introduces a multimodal model for detecting fake news that relies on similarity reasoning and adversarial networks. The model employs Bidirectional Encoder Representations from Transformers (BERT) and a Text Convolutional Neural Network (Text-CNN) to extract textual features, while utilizing the pre-trained 19-layer Visual Geometry Group network (VGG-19) to extract visual features. The model then establishes similarity representations between the textual features extracted by Text-CNN and the visual features through similarity learning and reasoning. Finally, these features are fused to enhance the accuracy of fake news detection, and adversarial networks are employed to investigate the relationship between fake news and events. The proposed model is validated on publicly available multimodal datasets from Weibo and Twitter. Experimental results demonstrate that the approach achieves superior performance on Twitter, with an accuracy of 86%, surpassing traditional unimodal models and existing multimodal models; on the Weibo dataset, the model likewise surpasses the benchmark models across multiple metrics. The application of similarity reasoning and adversarial networks significantly enhances detection effectiveness. However, the current work is limited to fusing only the text and image modalities; future research should integrate features from additional modalities to represent the multifaceted information of fake news more comprehensively.
Keywords: fake news detection; attention mechanism; image-text similarity; multimodal feature fusion
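The image-text similarity idea underlying this model can be illustrated with a small sketch: cosine similarity between text and image feature vectors, and a simple similarity-gated fusion of the two. This is a generic illustration under assumed feature shapes, not the paper's similarity-reasoning module; the gating scheme is a hypothetical stand-in.

```python
import numpy as np

def cosine_similarity_matrix(text_feats, img_feats):
    """Pairwise cosine similarity between text and image feature vectors."""
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    v = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    return t @ v.T

def similarity_gated_fusion(text_feats, img_feats):
    """Weight concatenated features by per-sample text-image agreement."""
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    v = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    sim = np.sum(t * v, axis=1, keepdims=True)   # (B, 1), in [-1, 1]
    gate = 1.0 / (1.0 + np.exp(-sim))            # squash to (0, 1)
    return np.concatenate([text_feats * gate, img_feats * gate], axis=1)

rng = np.random.default_rng(0)
text = rng.standard_normal((4, 128))   # e.g. BERT/Text-CNN features
imgs = rng.standard_normal((4, 128))   # e.g. VGG-19 features
fused = similarity_gated_fusion(text, imgs)
print(fused.shape)  # (4, 256)
```

A low text-image agreement, as often seen when a real image is paired with an unrelated fabricated caption, down-weights the fused representation, which is one intuition for why cross-modal similarity helps fake-news detection.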
3. A Disentangled Representation-Based Multimodal Fusion Framework Integrating Pathomics and Radiomics for KRAS Mutation Detection in Colorectal Cancer
Authors: Zhilong Lv, Rui Yan, Yuexiao Lin, Lin Gao, Fa Zhang, Ying Wang. Big Data Mining and Analytics (EI, CSCD), 2024, Issue 3, pp. 590-602 (13 pages).
Kirsten rat sarcoma viral oncogene homolog (KRAS) is a key biomarker for prognostic analysis and targeted therapy of colorectal cancer. Recently, the advancement of machine learning, especially deep learning, has greatly promoted the development of KRAS mutation detection from tumor phenotype data such as pathology slides or radiology images. However, two major problems remain in existing studies: inadequate single-modal feature learning and a lack of multimodal phenotypic feature fusion. In this paper, we propose a Disentangled Representation-based Multimodal Fusion framework integrating Pathomics and Radiomics (DRMF-PaRa) for KRAS mutation detection. The DRMF-PaRa model consists of three parts: (1) a pathomics learning module, which introduces a tissue-guided Transformer to extract more comprehensive and targeted pathological features; (2) a radiomics learning module, which captures both generic hand-crafted radiomics features and task-specific deep radiomics features; and (3) a disentangled representation-based multimodal fusion module, which learns factorized subspaces for each modality and provides a holistic view of the two heterogeneous phenotypic features. The model is developed and evaluated on a multimodality dataset of 111 colorectal cancer patients with whole slide images and contrast-enhanced CT. Experimental results demonstrate the superiority of DRMF-PaRa, with an accuracy of 0.876 and an AUC of 0.865 for KRAS mutation detection.
Keywords: KRAS mutation detection; multimodal feature fusion; pathomics; radiomics
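The "factorized subspaces" idea in the fusion module can be sketched in a few lines: each modality is projected into a shared subspace and a modality-specific subspace, with a penalty encouraging the two to carry non-redundant information. This is a schematic NumPy sketch under assumed dimensions, with random matrices standing in for learned projections; the actual DRMF-PaRa disentanglement objective is not specified in the abstract.

```python
import numpy as np

def factorize(features, W_shared, W_specific):
    """Project one modality into a shared and a modality-specific subspace."""
    return features @ W_shared, features @ W_specific

def orthogonality_penalty(shared, specific):
    """Squared Frobenius norm of the cross-correlation between subspaces;
    minimizing it pushes the two projections toward disentanglement."""
    return np.sum((shared.T @ specific) ** 2)

rng = np.random.default_rng(0)
patho = rng.standard_normal((111, 64))   # pathomics features (WSI)
radio = rng.standard_normal((111, 64))   # radiomics features (CT)
Wp_s, Wp_m = rng.standard_normal((64, 16)), rng.standard_normal((64, 16))
Wr_s, Wr_m = rng.standard_normal((64, 16)), rng.standard_normal((64, 16))
p_shared, p_spec = factorize(patho, Wp_s, Wp_m)
r_shared, r_spec = factorize(radio, Wr_s, Wr_m)
# Holistic view: average the shared parts, concatenate the specific parts.
fused = np.concatenate([(p_shared + r_shared) / 2, p_spec, r_spec], axis=1)
print(fused.shape)  # (111, 48)
```

The appeal of this factorization is that the classifier sees both what the modalities agree on (the shared subspace) and what each contributes uniquely.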
4. Multimodal Adaptive Identity-Recognition Algorithm Fused with Gait Perception (cited by 2)
Authors: Changjie Wang, Zhihua Li, Benjamin Sarpong. Big Data Mining and Analytics (EI), 2021, Issue 4, pp. 223-232 (10 pages).
Existing identity-recognition technologies require assistive equipment, yet they are poor in recognition accuracy and expensive. To overcome these deficiencies, this paper proposes several gait-feature identification algorithms. First, gait information collected from triaxial accelerometers on smartphones is preprocessed and fused with existing standard datasets to yield a multimodal synthetic dataset. Then, based on the multimodal characteristics of the collected biological gait information, a Convolutional Neural Network based Gait Recognition (CNN-GR) model and a related scheme for the multimodal features are developed. Finally, building on the CNN-GR model and scheme, a single-gait-feature identification algorithm based on unimodal gait features and an identification algorithm based on fused multimodal gait information are proposed. Experimental results show that the proposed algorithms perform well in terms of recognition accuracy, the confusion matrix, and the kappa statistic; they achieve better recognition scores and robustness than the compared algorithms, and thus hold considerable promise in practice.
Keywords: gait recognition; person identification; deep learning; multimodal feature fusion
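The preprocessing step, turning a raw triaxial accelerometer stream into CNN-ready inputs, typically involves sliding-window segmentation and per-window normalization. The sketch below shows one common recipe; the window length, stride, and normalization scheme are assumptions, as the abstract does not specify them.

```python
import numpy as np

def segment_windows(accel, window=128, stride=64):
    """Slice a triaxial accelerometer stream of shape (T, 3) into
    overlapping windows of shape (num_windows, window, 3)."""
    starts = range(0, accel.shape[0] - window + 1, stride)
    return np.stack([accel[s:s + window] for s in starts])

def normalize(windows):
    """Zero-mean, unit-variance normalization per window and per axis."""
    mu = windows.mean(axis=1, keepdims=True)
    sd = windows.std(axis=1, keepdims=True) + 1e-8
    return (windows - mu) / sd

stream = np.random.default_rng(0).standard_normal((1000, 3))  # fake stream
x = normalize(segment_windows(stream))
print(x.shape)  # (14, 128, 3)
```

Each normalized window can then be fed to a CNN such as the CNN-GR model as a fixed-size "image" of the gait cycle; the 50% overlap between windows is a standard trick to multiply training samples.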
5. An Efficient WRF Framework for Discovering Risk Genes and Abnormal Brain Regions in Parkinson's Disease Based on Imaging Genetics Data
Authors: Xia-An Bi, Zhao-Xu Xing, Rui-Hui Xu, Xi Hu. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2021, Issue 2, pp. 361-374 (14 pages).
As an emerging research field of brain science, multimodal data-fusion analysis has attracted broad attention in the study of complex brain diseases such as Parkinson's disease (PD). However, current studies primarily focus on detecting associations among different modal data and reducing data attributes; the data-mining method applied after fusion and the overall analysis framework are neglected. In this study, we propose a weighted random forest (WRF) model as the feature-screening classifier. Interactions between genes and brain regions, detected by correlation analysis, serve as the input multimodal fusion features. We implement sample classification and optimal feature selection based on WRF, and construct a multimodal analysis framework for exploring the pathogenic factors of PD. Experimental results on the Parkinson's Progression Markers Initiative (PPMI) database show that WRF outperforms several advanced methods, and brain regions and genes related to PD are detected. The fusion of multimodal data improves the classification of PD patients and detects pathogenic factors more comprehensively, providing a novel perspective for the diagnosis and study of PD. We also show the great potential of WRF for multimodal data-fusion analysis of other brain diseases.
Keywords: multimodal fusion feature; Parkinson's disease; pathogenic factor detection; sample classification; weighted random forest model
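The "weighted" part of a weighted random forest can be illustrated by the aggregation step: instead of a plain majority vote, each tree's vote is scaled by a per-tree weight (for instance its out-of-bag accuracy). The abstract does not specify the paper's weighting scheme, so the example below is a generic weighted-vote sketch with made-up predictions and weights.

```python
import numpy as np

def weighted_forest_vote(tree_preds, tree_weights, num_classes):
    """Aggregate per-tree class predictions with per-tree weights.

    tree_preds  : (n_trees, n_samples) integer class labels
    tree_weights: (n_trees,) e.g. each tree's out-of-bag accuracy
    """
    n_trees, n_samples = tree_preds.shape
    scores = np.zeros((n_samples, num_classes))
    for t in range(n_trees):
        scores[np.arange(n_samples), tree_preds[t]] += tree_weights[t]
    return scores.argmax(axis=1)

# Three trees, four samples, binary PD-vs-control classification.
preds = np.array([[0, 1, 1, 0],
                  [1, 1, 0, 0],
                  [1, 1, 1, 0]])
weights = np.array([0.9, 0.6, 0.7])   # hypothetical per-tree accuracies
print(weighted_forest_vote(preds, weights, 2))  # [1 1 1 0]
```

On the first sample the two weaker trees (combined weight 1.3) outvote the strongest tree (0.9), showing how the weighting changes the outcome relative to a tree count alone only when weights are unequal across the disagreeing groups.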