Journal Articles
9 articles found
1. Clinical and multimodal imaging features of acute macular neuroretinopathy lesions following recent SARS-CoV-2 infection (Cited by 2)
Authors: Yang-Chen Liu, Bin Wu, Yan Wang, Song Chen. International Journal of Ophthalmology (English Edition), SCIE, CAS, 2023, No. 5, pp. 755-761 (7 pages)
AIM: To describe the clinical characteristics and multimodal imaging features of eyes with acute macular neuroretinopathy (AMN) lesions following severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. METHODS: Retrospective case series study. From December 18, 2022 to February 14, 2023, previously healthy patients within 1wk of SARS-CoV-2 infection who were examined at Tianjin Eye Hospital and diagnosed with AMN were included in the study. In total, 5 males and 9 females [mean age: 29.93±10.32 (16-49)y] presented with reduced vision, with or without blurred vision. All patients underwent best corrected visual acuity (BCVA), intraocular pressure, slit lamp microscopy, and indirect fundoscopy examinations. Multimodal imaging was performed simultaneously: fundus photography (45° or 200° field of view) in 7 cases (14 eyes), near-infrared (NIR) fundus photography in 9 cases (18 eyes), optical coherence tomography (OCT) in 5 cases (10 eyes), optical coherence tomography angiography (OCTA) in 9 cases (18 eyes), and fundus fluorescence angiography (FFA) in 3 cases (6 eyes). Visual field testing was performed in 1 case (2 eyes). RESULTS: Multimodal imaging findings from 14 patients with AMN were reviewed. All eyes demonstrated hyperreflective lesions of varying extent at the level of the inner nuclear layer and/or outer plexiform layer on OCT or OCTA. Fundus photography (45° or 200° field of view) showed irregular hypo-reflective lesions around the fovea in 7 cases (14 eyes). OCTA demonstrated reduced vascular density of the superficial retinal capillary plexus (SCP), deep capillary plexus (DCP), and choriocapillaris (CC) in 9 cases (18 eyes). Among the 2 follow-up cases, vascular density increased in 1 case with improved BCVA; in the other case, vascular density decreased in one eye and was essentially unchanged in the other eye. En face images of ellipsoid zone and interdigitation zone injury showed a low wedge-shaped reflection contour. NIR images mainly showed absence of the outer retinal interdigitation zone in AMN. No abnormal fluorescence was observed on FFA. A corresponding partial visual field defect was visualized via perimetry in one case. CONCLUSION: The incidence of AMN following SARS-CoV-2 infection has increased. Ophthalmologists should be aware of the possible, albeit rare, AMN after SARS-CoV-2 infection and focus on its multimodal imaging features. OCT, OCTA, and NIR fundus photography proved to be valuable tools for the detection of AMN in patients with SARS-CoV-2.
Keywords: SARS-CoV-2 infection; optical coherence tomography; acute macular neuroretinopathy; multimodal imaging features
2. Multimodal Social Media Fake News Detection Based on Similarity Inference and Adversarial Networks (Cited by 1)
Authors: Fangfang Shan, Huifang Sun, Mengyi Wang. Computers, Materials & Continua, SCIE, EI, 2024, No. 4, pp. 581-605 (25 pages)
As social networks become increasingly complex, contemporary fake news often includes textual descriptions of events accompanied by corresponding images or videos. Fake news in multiple modalities is more likely to create a misleading perception among users. While early research primarily focused on text-based features for fake news detection mechanisms, there has been relatively limited exploration of learning shared representations in multimodal (text and visual) contexts. To address these limitations, this paper introduces a multimodal model for detecting fake news, which relies on similarity reasoning and adversarial networks. The model employs Bidirectional Encoder Representations from Transformers (BERT) and a Text Convolutional Neural Network (Text-CNN) to extract textual features, while utilizing the pre-trained Visual Geometry Group 19-layer network (VGG-19) to extract visual features. Subsequently, the model establishes similarity representations between the textual features extracted by Text-CNN and the visual features through similarity learning and reasoning. Finally, these features are fused to enhance the accuracy of fake news detection, and adversarial networks are employed to investigate the relationship between fake news and events. This paper validates the proposed model using publicly available multimodal datasets from Weibo and Twitter. Experimental results demonstrate that the proposed approach achieves superior performance on Twitter, with an accuracy of 86%, surpassing traditional unimodal models and existing multimodal models. On the Weibo dataset, the model likewise surpasses the benchmark models across multiple metrics. The application of similarity reasoning and adversarial networks significantly enhances multimodal fake news detection effectiveness. However, the current research is limited to the fusion of text and image modalities; future work should integrate features from additional modalities to represent the multifaceted information of fake news more comprehensively.
Keywords: fake news detection; attention mechanism; image-text similarity; multimodal feature fusion
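The abstract does not detail the similarity-reasoning computation. As a rough, illustrative numpy sketch of the general idea — scoring cross-modal agreement between a text embedding and an image embedding and letting that score weight the fused feature — one might write the following; all function names, dimensions, and the weighting scheme are assumptions, not the authors' implementation:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fuse_features(text_feat, visual_feat):
    """Weight each modality by the cross-modal similarity before
    concatenating, so that consistent text/image pairs dominate the
    fused representation (inconsistency being one cue for fake news)."""
    sim = cosine_similarity(text_feat, visual_feat)
    return np.concatenate([sim * text_feat, sim * visual_feat]), sim

rng = np.random.default_rng(0)
t = rng.normal(size=128)   # stand-in for a BERT/Text-CNN embedding
v = rng.normal(size=128)   # stand-in for a VGG-19 embedding
fused, sim = fuse_features(t, v)
print(fused.shape)
```

In practice the fused vector would feed a classifier head, and the adversarial component described in the abstract would be trained on top of these shared features.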
3. Solving Geometry Problems via Feature Learning and Contrastive Learning of Multimodal Data
Authors: Pengpeng Jian, Fucheng Guo, Yanli Wang, Yang Li. Computer Modeling in Engineering & Sciences, SCIE, EI, 2023, No. 8, pp. 1707-1728 (22 pages)
This paper presents an end-to-end deep learning method for solving geometry problems via feature learning and contrastive learning of multimodal data. A key challenge in solving geometry problems using deep learning is to automatically adapt to both single-modal and multimodal problem understanding. Existing methods focus on either single-modal or multimodal problems and cannot handle both, whereas a general geometry problem solver should be able to process problems of various modalities at the same time. In this paper, a shared feature-learning model of multimodal data is adopted to learn a unified feature representation of text and image, which resolves the heterogeneity between multimodal geometry problems. A contrastive learning model of multimodal data enhances the semantic relevance between multimodal features and maps them into a unified semantic space, which can effectively adapt to both single-modal and multimodal downstream tasks. Based on this feature extraction and fusion of multimodal data, the proposed geometry problem solver uses relation extraction, theorem reasoning, and problem solving to present solutions in a readable way. Experimental results show the effectiveness of the method.
Keywords: geometry problems; multimodal feature learning; multimodal contrastive learning; automatic solver
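Contrastive models that map text and image features into a unified semantic space are commonly trained with an InfoNCE-style objective. The plain-numpy sketch below shows that objective under the assumption (not stated in the abstract) that matched text-image pairs form the positives; it is a generic illustration, not the paper's loss:

```python
import numpy as np

def info_nce_loss(text_emb, image_emb, temperature=0.1):
    """InfoNCE-style contrastive loss: matching text/image pairs
    (the diagonal of the similarity matrix) are pulled together,
    mismatched pairs pushed apart."""
    # L2-normalize so dot products are cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature          # (N, N) similarity matrix
    # row-wise log-softmax; targets are the diagonal entries
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(1)
e = rng.normal(size=(8, 64))
# identical embeddings in both modalities -> near-minimal loss,
# unrelated embeddings -> much higher loss
aligned = info_nce_loss(e, e)
shuffled = info_nce_loss(e, rng.normal(size=(8, 64)))
print(aligned < shuffled)
```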
4. Deep Multimodal Learning and Fusion Based Intelligent Fault Diagnosis Approach
Authors: Huifang Li, Jianghang Huang, Jingwei Huang, Senchun Chai, Leilei Zhao, Yuanqing Xia. Journal of Beijing Institute of Technology, EI, CAS, 2021, No. 2, pp. 172-185 (14 pages)
The Industrial Internet of Things (IoT), connecting society and industrial systems, represents a tremendous and promising paradigm shift. With IoT, multimodal and heterogeneous data from industrial devices can be easily collected and further analyzed to discover the device maintenance and health related knowledge behind them. IoT data-based fault diagnosis for industrial devices is very helpful to the sustainability and applicability of an IoT ecosystem, but how to efficiently use and fuse this multimodal heterogeneous data to realize intelligent fault diagnosis is still a challenge. In this paper, a novel Deep Multimodal Learning and Fusion (DMLF) based fault diagnosis method is proposed for addressing heterogeneous data from IoT environments where industrial devices coexist. First, a DMLF model is designed by combining a Convolutional Neural Network (CNN) and a Stacked Denoising Autoencoder (SDAE) to capture more comprehensive fault knowledge and extract features from different modal data. Second, these multimodal features are seamlessly integrated at a fusion layer, and the resulting fused features are used to train a classifier for recognizing potential faults. Third, a two-stage training algorithm combining supervised pre-training and fine-tuning is proposed to simplify the training of deep structure models. A series of experiments are conducted on multimodal heterogeneous data from a gear device to verify the proposed fault diagnosis method. The experimental results show that the method outperforms the benchmark methods in fault diagnosis accuracy.
Keywords: fault diagnosis; deep learning; multimodal heterogeneous data; multimodal fused features
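The abstract describes a fusion layer that seamlessly integrates CNN and SDAE features before classification. A minimal sketch of one plausible fusion-layer behavior — per-modality normalization followed by concatenation, which is an assumption since the paper's exact layer is not given here — is:

```python
import numpy as np

def fuse_modalities(vib_feat, temp_feat):
    """Fusion-layer sketch: z-score each modality separately (so neither
    modality's scale dominates), then concatenate into one fused vector
    that a downstream classifier would consume."""
    def zscore(x):
        return (x - x.mean()) / (x.std() + 1e-8)
    return np.concatenate([zscore(vib_feat), zscore(temp_feat)])

rng = np.random.default_rng(2)
vib = rng.normal(5.0, 2.0, size=64)    # stand-in for CNN features (e.g. vibration)
temp = rng.normal(40.0, 1.0, size=32)  # stand-in for SDAE features (e.g. temperature)
fused = fuse_modalities(vib, temp)
print(fused.shape)
```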
5. Hypo-Driver: A Multiview Driver Fatigue and Distraction Level Detection System
Authors: Qaisar Abbas, Mostafa E. A. Ibrahim, Shakir Khan, Abdul Rauf Baig. Computers, Materials & Continua, SCIE, EI, 2022, No. 4, pp. 1999-2017 (19 pages)
Traffic accidents are often caused by driver fatigue or distraction. To prevent accidents, several low-cost hypovigilance (hypo-V) systems were developed in the past based on a multimodal hybrid (physiological and behavioral) feature set. In this paper, a real-time driver inattention and fatigue detection system (Hypo-Driver) is likewise proposed, using multi-view cameras and biosignal sensors to extract hybrid features. The considered features are derived from non-intrusive sensors and relate to changes in driving behavior and visual facial expressions. To obtain robust visual facial features in uncontrolled environments, three cameras are deployed at multiview points (0°, 45°, and 90°) relative to the driver. To develop the Hypo-Driver system, physiological signals (electroencephalography (EEG), electrocardiography (ECG), surface electromyography (sEMG), and electrooculography (EOG)) and behavioral information (PERCLOS70-80-90%, mouth aspect ratio (MAR), eye aspect ratio (EAR), blinking frequency (BF), and head-tilt ratio (HT-R)) are collected and pre-processed, followed by feature selection and fusion. Driver behavior is classified into five stages: normal, fatigue, visual inattention, cognitive inattention, and drowsy. The Hypo-Driver system extracts behavioral features with convolutional neural networks (CNNs) and physiological features with a recurrent neural network with long short-term memory (RNN-LSTM). After fusion of these features, the system classifies hypo-V into the five stages using trained layers and a dropout layer in a deep residual neural network (DRNN) model. To test performance, data from 20 drivers are acquired and the results are compared to state-of-the-art methods. On average, the Hypo-Driver system achieved a detection accuracy (AC) of 96.5%. The obtained results indicate that the Hypo-Driver system, based on multimodal and multiview features, outperforms other state-of-the-art driver hypo-V systems by handling many anomalies.
Keywords: Internet of Things (IoT); intelligent transportation; sensors; multiview points; transfer learning; convolutional neural network; recurrent neural network; residual neural network; multimodal features
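Among the behavioral features listed, the eye aspect ratio (EAR) has a widely used landmark-based definition. Assuming the paper follows the common Soukupová-Čech formulation (the abstract does not specify), EAR over six eye landmarks can be computed as:

```python
import numpy as np

def eye_aspect_ratio(p):
    """EAR from six eye landmarks p[0..5], ordered around the eye:
    (|p1-p5| + |p2-p4|) / (2 |p0-p3|). The ratio drops toward 0 as
    the eyelids close, which is why it works as a drowsiness cue."""
    a = np.linalg.norm(p[1] - p[5])   # vertical distance 1
    b = np.linalg.norm(p[2] - p[4])   # vertical distance 2
    c = np.linalg.norm(p[0] - p[3])   # horizontal distance
    return (a + b) / (2.0 * c)

# toy landmark sets: a wide-open eye vs. a nearly closed one
open_eye = np.array([[0, 0], [2, 2], [4, 2], [6, 0], [4, -2], [2, -2]], float)
closed_eye = np.array([[0, 0], [2, 0.3], [4, 0.3], [6, 0], [4, -0.3], [2, -0.3]], float)
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```

A per-frame EAR below a threshold, sustained over consecutive frames, is the usual trigger for a blink/drowsiness event; PERCLOS is then the fraction of time the eye stays sufficiently closed.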
6. Multimodal Adaptive Identity-Recognition Algorithm Fused with Gait Perception (Cited by 2)
Authors: Changjie Wang, Zhihua Li, Benjamin Sarpong. Big Data Mining and Analytics, EI, 2021, No. 4, pp. 223-232 (10 pages)
Identity-recognition technologies that require assistive equipment tend to be poor in recognition accuracy and expensive. To overcome this deficiency, this paper proposes several gait feature identification algorithms. First, gait information of individuals is collected from triaxial accelerometers on smartphones and preprocessed, and multimodal fusion with existing standard datasets yields a multimodal synthetic dataset. Then, based on the multimodal characteristics of the collected biological gait information, a Convolutional Neural Network based Gait Recognition (CNN-GR) model and a related scheme for the multimodal features are developed. Finally, building on the CNN-GR model and scheme, a single-gait-feature identification algorithm based on unimodal gait features and a multimodal identification algorithm based on gait feature fusion are proposed. Experimental results show that the proposed algorithms perform well in recognition accuracy, the confusion matrix, and the kappa statistic, and they have better recognition scores and robustness than the compared algorithms; thus, the proposed approach has prominent promise in practice.
Keywords: gait recognition; person identification; deep learning; multimodal feature fusion
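Preprocessing a triaxial smartphone accelerometer stream for a CNN such as CNN-GR typically involves segmenting it into fixed-length, overlapping windows. A generic sketch follows; the window and step sizes are illustrative choices, not values from the paper:

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Segment a triaxial accelerometer stream of shape (N, 3) into
    overlapping windows of shape (num_windows, win, 3) — a typical
    preprocessing step before feeding samples to a CNN."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

rng = np.random.default_rng(3)
acc = rng.normal(size=(500, 3))          # ~5 s at 100 Hz, x/y/z axes
w = sliding_windows(acc, win=128, step=64)
print(w.shape)
```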
7. Decoding pilot behavior consciousness of EEG, ECG, eye movements via an SVM machine learning model (Cited by 2)
Authors: Xiashuang Wang, Guanghong Gong, Ni Li, Li Ding, Yaofei Ma. International Journal of Modeling, Simulation, and Scientific Computing, EI, 2020, No. 4, pp. 78-96 (19 pages)
To decode pilots' behavioral awareness, an experiment was designed in which an aircraft simulator was used to obtain pilots' physiological behavior data. Existing pilot behavior studies, such as behavior modeling based on domain experts or on knowledge discovery, do not proceed from the characteristics of the pilots themselves. This experiment starts directly from multimodal physiological characteristics to explore pilot behavior. Electroencephalography (EEG), electrocardiography (ECG), and eye movements were recorded simultaneously. Multimodal features extracted from ground missions, air missions, and cruise missions were used to train a support vector machine (SVM) behavior model based on supervised learning. The results showed that different behaviors affect multiple rhythm features: the power spectra of the θ waves of EEG, the standard deviation of normal-to-normal intervals, the root mean square of successive differences, and the average gaze duration. The different physiological characteristics of the pilots could also be distinguished using an SVM model. Therefore, multimodal physiological data can contribute to future research on the behavioral activities of pilots. The results can be used to design and improve pilot training programs and automation interfaces.
Keywords: pilot behavior; decision making; aircraft simulator; multimodal physiological features; SVM model
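Two of the heart-rate-variability features named above — the standard deviation of normal-to-normal intervals (SDNN) and the root mean square of successive differences (RMSSD) — have standard definitions over a series of RR intervals, sketched here (the RR values are made-up illustrative data):

```python
import numpy as np

def hrv_features(rr_ms):
    """SDNN and RMSSD from RR intervals in milliseconds:
    SDNN  = (sample) standard deviation of normal-to-normal intervals,
    RMSSD = root mean square of successive interval differences."""
    rr = np.asarray(rr_ms, float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

rr = [812, 798, 830, 805, 790, 820]   # hypothetical RR series
sdnn, rmssd = hrv_features(rr)
print(round(sdnn, 2), round(rmssd, 2))
```

SDNN captures overall variability over the recording, while RMSSD is dominated by beat-to-beat (parasympathetic) changes, which is why the two are often reported together in workload studies.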
8. An Efficient WRF Framework for Discovering Risk Genes and Abnormal Brain Regions in Parkinson's Disease Based on Imaging Genetics Data
Authors: Xia-An Bi, Zhao-Xu Xing, Rui-Hui Xu, Xi Hu. Journal of Computer Science & Technology, SCIE, EI, CSCD, 2021, No. 2, pp. 361-374 (14 pages)
As an emerging research field of brain science, multimodal data fusion analysis has attracted broad attention in the study of complex brain diseases such as Parkinson's disease (PD). However, current studies primarily focus on detecting associations among different modal data and reducing data attributes; the data mining method after fusion and the overall analysis framework are neglected. In this study, we propose a weighted random forest (WRF) model as the feature screening classifier. The interactions between genes and brain regions are detected as input multimodal fusion features by a correlation analysis method. We implement sample classification and optimal feature selection based on WRF, and construct a multimodal analysis framework for exploring the pathogenic factors of PD. Experimental results on the Parkinson's Progression Markers Initiative (PPMI) database show that WRF performs better than several advanced methods, and brain regions and genes related to PD are detected. The fusion of multimodal data improves the classification of PD patients and detects pathogenic factors more comprehensively, providing a novel perspective for the diagnosis and research of PD. We also show the great potential of WRF for multimodal data fusion analysis of other brain diseases.
Keywords: multimodal fusion feature; Parkinson's disease; pathogenic factor detection; sample classification; weighted random forest model
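The abstract does not give the WRF weighting scheme. As an illustrative sketch of the general idea behind weighted forests — letting each tree's vote count in proportion to a per-tree weight, such as its out-of-bag accuracy — one might write (all names and the example weights are hypothetical):

```python
import numpy as np

def weighted_vote(tree_preds, tree_weights):
    """Weighted-forest voting sketch: sum each tree's weight into the
    bucket of the class it predicts, return the heaviest class."""
    tree_preds = np.asarray(tree_preds)
    weights = np.asarray(tree_weights, float)
    classes = np.unique(tree_preds)
    scores = [weights[tree_preds == c].sum() for c in classes]
    return int(classes[int(np.argmax(scores))])

# 3 of 5 trees vote class 0, but the two class-1 trees carry more weight,
# so the weighted ensemble flips the unweighted majority decision
preds = [0, 0, 0, 1, 1]
weights = [0.5, 0.5, 0.5, 0.9, 0.9]
print(weighted_vote(preds, weights))
```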
9. Classification of Remote Sensing Images Based on Band Selection and Multi-mode Feature Fusion
Authors: Xiaodong Yu, Hongbin Dong, Zihe Mu, Yu Sun. 《国际计算机前沿大会会议论文集》, 2020, No. 1, pp. 612-620 (9 pages)
Feature data in multimodal remote sensing images belong to multiple modes and are complementary to each other, and traditional single-mode data analysis and processing cannot effectively fuse the data of different modes or express the correlations between them. To solve this problem and better fuse the different modal data and the relationships among their features, this paper proposes a method that fuses multimodal spectral characteristics and radar remote sensing images, integrating them along the spatial dimension in the form of vectors or matrices and training an SVM model on the fused features. Experimental results show that the method based on band selection and multi-mode feature fusion can effectively improve the robustness of remote sensing image features. Compared with other methods, the fusion method achieves higher classification accuracy and a better classification effect.
Keywords: remote sensing classification; classification of features; band selection; multimodal feature fusion; SVM
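The abstract does not state the band-selection criterion. A minimal illustrative sketch using per-band variance as the score — an assumption for illustration, not necessarily the paper's criterion — is:

```python
import numpy as np

def select_bands(cube, k):
    """Pick the k highest-variance spectral bands of a (H, W, B) cube.
    Variance is one simple informativeness proxy; the selected band
    indices are returned in ascending order."""
    bands = cube.reshape(-1, cube.shape[-1])        # (pixels, bands)
    order = np.argsort(bands.var(axis=0))[::-1][:k]
    return np.sort(order)

rng = np.random.default_rng(4)
# synthetic 10-band cube whose later bands carry larger variance
cube = rng.normal(size=(32, 32, 10)) * np.linspace(0.1, 2.0, 10)
print(select_bands(cube, 3))
```

The selected bands would then be stacked with the radar features, per pixel, into the vector/matrix form the abstract describes before SVM training.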