Journal Articles: 33 articles found
1. A Hand Features Based Fusion Recognition Network with Enhancing Multi-Modal Correlation
Authors: Wei Wu, Yuan Zhang, Yunpeng Li, Chuanyang Li, Yan Hao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 7, pp. 537-555 (19 pages).
Fusing hand-based features in multi-modal biometric recognition enhances anti-spoofing capabilities. Additionally, it leverages inter-modal correlation to enhance recognition performance. Concurrently, the robustness and recognition performance of the system can be enhanced through judiciously leveraging the correlation among multimodal features. Nevertheless, two issues persist in multi-modal feature fusion recognition. Firstly, the enhancement of recognition performance in fusion recognition has not comprehensively considered the inter-modality correlations among distinct modalities. Secondly, during modal fusion, improper weight selection diminishes the salience of crucial modal features, thereby diminishing the overall recognition performance. To address these two issues, we introduce an enhanced DenseNet multimodal recognition network founded on feature-level fusion. The information from the three modalities is fused akin to RGB, and the input network augments the correlation between modes through channel correlation. Within the enhanced DenseNet network, the Efficient Channel Attention Network (ECA-Net) dynamically adjusts the weight of each channel to amplify the salience of crucial information in each modal feature. Depthwise separable convolution markedly reduces the training parameters and further enhances the feature correlation. Experimental evaluations were conducted on four multimodal databases, comprising six unimodal databases, including the multispectral palmprint and palm vein databases from the Chinese Academy of Sciences. The Equal Error Rate (EER) values were 0.0149%, 0.0150%, 0.0099%, and 0.0050%, respectively. In comparison to other network methods for palmprint, palm vein, and finger vein fusion recognition, this approach substantially enhances recognition performance, rendering it suitable for high-security environments with practical applicability. The experiments in this article utilized a modest sample database comprising 200 individuals. The subsequent phase involves preparing for the extension of the method to larger databases.
Keywords: biometrics, multi-modal correlation, deep learning, feature-level fusion
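The two building blocks named in this abstract are standard and compact. Below is a minimal PyTorch sketch of ECA-style channel attention and a depthwise separable convolution applied to three modalities stacked like RGB channels; layer sizes and input shapes are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: re-weight channels via a cheap 1-D
    convolution over globally pooled channel descriptors."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                              # x: (B, C, H, W)
        y = x.mean(dim=(2, 3)).unsqueeze(1)            # global avg pool -> (B, 1, C)
        w = torch.sigmoid(self.conv(y))                # per-channel weights (B, 1, C)
        return x * w.transpose(1, 2).unsqueeze(-1)     # broadcast as (B, C, 1, 1)

def depthwise_separable(c_in: int, c_out: int) -> nn.Sequential:
    """Depthwise conv (per-channel) followed by a 1x1 pointwise conv,
    cutting parameters versus a full 3x3 convolution."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in),
        nn.Conv2d(c_in, c_out, 1),
    )

# Three grayscale modalities stacked like an RGB image, as the abstract describes.
palmprint, palmvein, fingervein = (torch.randn(2, 1, 64, 64) for _ in range(3))
fused = torch.cat([palmprint, palmvein, fingervein], dim=1)   # (2, 3, 64, 64)
out = ECA()(depthwise_separable(3, 32)(fused))                # (2, 32, 64, 64)
```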
2. A Comprehensive Survey on Deep Learning Multi-Modal Fusion: Methods, Technologies and Applications
Authors: Tianzhe Jiao, Chaopeng Guo, Xiaoyue Feng, Yuming Chen, Jie Song. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 1-35 (35 pages).
Multi-modal fusion technology has gradually become a fundamental task in many fields, such as autonomous driving, smart healthcare, sentiment analysis, and human-computer interaction. It is rapidly becoming a dominant research direction due to its powerful perception and judgment capabilities. In complex scenes, multi-modal fusion technology utilizes the complementary characteristics of multiple data streams to fuse different data types and achieve more accurate predictions. However, achieving outstanding performance is challenging because of equipment performance limitations, missing information, and data noise. This paper comprehensively reviews existing methods based on multi-modal fusion techniques and completes a detailed and in-depth analysis. According to the data fusion stage, multi-modal fusion has four primary methods: early fusion, deep fusion, late fusion, and hybrid fusion. The paper surveys the three major multi-modal fusion technologies that can significantly enhance the effect of data fusion and further explores the applications of multi-modal fusion technology in various fields. Finally, it discusses the challenges and explores potential research opportunities. Multi-modal tasks still need intensive study because of data heterogeneity and quality. Preserving complementary information and eliminating redundant information between modalities is critical in multi-modal technology. Invalid data fusion methods may introduce extra noise and lead to worse results. This paper provides a comprehensive and detailed summary in response to these challenges.
Keywords: multi-modal fusion, representation, translation, alignment, deep learning, comparative analysis
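To make the survey's fusion-stage taxonomy concrete, here is a minimal PyTorch sketch contrasting its two extremes, early (feature-level) versus late (decision-level) fusion; the dimensions, layer choices, and toy modalities are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Early fusion: concatenate modality features before a shared encoder."""
    def __init__(self, dim_a, dim_b, hidden, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a + dim_b, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )
    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=-1))

class LateFusion(nn.Module):
    """Late fusion: independent per-modality heads, predictions averaged
    only at the very end."""
    def __init__(self, dim_a, dim_b, n_classes):
        super().__init__()
        self.head_a = nn.Linear(dim_a, n_classes)
        self.head_b = nn.Linear(dim_b, n_classes)
    def forward(self, a, b):
        return 0.5 * (self.head_a(a) + self.head_b(b))

camera, radar = torch.randn(8, 256), torch.randn(8, 64)   # toy modality features
early = EarlyFusion(256, 64, 128, 10)(camera, radar)      # (8, 10) logits
late = LateFusion(256, 64, 10)(camera, radar)             # (8, 10) logits
```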
3. Fake News Detection Based on Text-Modal Dominance and Fusing Multiple Multi-Model Clues
Authors: Lifang Fu, Huanxin Peng, Changjin Ma, Yuhan Liu. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 4399-4416 (18 pages).
In recent years, how to efficiently and accurately identify multi-model fake news has become more challenging. First, multi-model data provides more evidence, but not all of it is equally important. Second, social structure information has proven to be effective in fake news detection, and how to combine it while reducing noise is critical. Unfortunately, existing approaches fail to handle these problems. This paper proposes a multi-model fake news detection framework based on Text-modal Dominance and fusing Multiple Multi-model Cues (TD-MMC), which utilizes three valuable multi-model clues: text-modal importance, text-image complementarity, and text-image inconsistency. TD-MMC is dominated by textual content and assisted by image information, while using social network information to enhance text representation. To reduce interference from irrelevant social structure information, we use a unidirectional cross-modal attention mechanism to selectively learn the social structure's features. A cross-modal attention mechanism is adopted to obtain text-image cross-modal features while retaining textual features to reduce the loss of important information. In addition, TD-MMC employs a new multi-model loss to improve the model's generalization ability. Extensive experiments have been conducted on two public real-world English and Chinese datasets, and the results show that our proposed model outperforms the state-of-the-art methods on classification evaluation metrics.
Keywords: fake news detection, cross-modal attention mechanism, multi-modal fusion, social network, transfer learning
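A minimal sketch of the one-way cross-modal attention described above, assuming PyTorch's built-in multi-head attention; dimensions are illustrative. Only the text side is updated, which is what keeps irrelevant social-structure noise from overwriting the dominant modality.

```python
import torch
import torch.nn as nn

class OneWayCrossAttention(nn.Module):
    """Text tokens act as queries over another modality's features; only
    the text representation is updated, hence 'unidirectional'."""
    def __init__(self, dim: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
    def forward(self, text, other):                 # (B, T, d), (B, S, d)
        msg, _ = self.attn(query=text, key=other, value=other)
        return text + msg                           # residual keeps original text

text = torch.randn(2, 16, 128)                      # token features (e.g., from BERT)
social = torch.randn(2, 32, 128)                    # social-structure features
fused_text = OneWayCrossAttention(128)(text, social)   # (2, 16, 128)
```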
4. Method of Multi-Mode Sensor Data Fusion with an Adaptive Deep Coupling Convolutional Auto-Encoder
Authors: Xiaoxiong Feng, Jianhua Liu. Journal of Sensor Technology, 2023, No. 4, pp. 69-85 (17 pages).
To address the difficulties in fusing multi-mode sensor data for complex industrial machinery, an adaptive deep coupling convolutional auto-encoder (ADCCAE) fusion method was proposed. First, the multi-mode features extracted synchronously by the CCAE were stacked and fed to the multi-channel convolution layers for fusion. Then, the fused data was passed to fully connected layers for compression and fed to the Softmax module for classification. Finally, the coupling loss function coefficients and the network parameters were optimized adaptively using the gray wolf optimization (GWO) algorithm. Experimental comparisons showed that the proposed ADCCAE fusion model was superior to existing models for multi-mode data fusion.
Keywords: multi-mode data fusion, coupling convolutional auto-encoder, adaptive optimization, deep learning
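The GWO step is the most self-contained piece here. Below is a minimal NumPy sketch of the gray-wolf update rule applied to tuning two loss coefficients against a toy objective; the population size, bounds, and objective are my own illustrative assumptions, not the paper's setup.

```python
import numpy as np

def gwo(objective, dim, n_wolves=10, iters=100, lb=0.0, ub=1.0, seed=0):
    """Minimal gray wolf optimizer: each wolf moves toward the three best
    solutions (alpha, beta, delta) with a linearly shrinking step factor."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(objective, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]
        a = 2.0 - 2.0 * t / iters                   # decays 2 -> 0 over iterations
        for i in range(n_wolves):
            moves = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                moves.append(leader - A * np.abs(C * leader - X[i]))
            X[i] = np.clip(np.mean(moves, axis=0), lb, ub)
    return X[np.argmin(np.apply_along_axis(objective, 1, X))]

# Toy stand-in for tuning two coupling-loss coefficients on a validation score.
best = gwo(lambda w: (w[0] - 0.3) ** 2 + (w[1] - 0.7) ** 2, dim=2)
```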
5. PowerDetector: Malicious PowerShell Script Family Classification Based on Multi-Modal Semantic Fusion and Deep Learning (Cited by: 1)
Authors: Xiuzhang Yang, Guojun Peng, Dongni Zhang, Yuhang Gao, Chenguang Li. China Communications (SCIE, CSCD), 2023, No. 11, pp. 202-224 (23 pages).
PowerShell has been widely deployed in fileless malware and advanced persistent threat (APT) attacks due to its high stealthiness and living-off-the-land technique. However, existing works mainly focus on deobfuscation and malicious detection, lacking malicious PowerShell family classification and behavior analysis. Moreover, the state-of-the-art methods fail to capture fine-grained features and semantic relationships, resulting in low robustness and accuracy. To this end, we propose PowerDetector, a novel malicious PowerShell script detector based on multi-modal semantic fusion and deep learning. Specifically, we design four feature extraction methods to extract key features from the character, token, abstract syntax tree (AST), and semantic knowledge graph views. Then, we design four embeddings (i.e., Char2Vec, Token2Vec, AST2Vec, and Rela2Vec) and construct a multi-modal fusion algorithm to concatenate feature vectors from the different views. Finally, we propose a combined model based on a transformer and CNN-BiLSTM to implement PowerShell family detection. Our experiments with five types of PowerShell attacks show that PowerDetector can accurately detect various obfuscated and stealthy PowerShell scripts, with a 0.9402 precision, a 0.9358 recall, and a 0.9374 F1-score. Furthermore, through single-modal and multi-modal comparison experiments, we demonstrate that PowerDetector's multi-modal embedding and deep learning model can achieve better accuracy and even identify more unknown attacks.
Keywords: deep learning, malicious family detection, multi-modal semantic fusion, PowerShell
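A minimal sketch of the concatenate-then-classify pattern this abstract describes, assuming PyTorch; the view dimension, layer sizes, and family count are illustrative, and the head is a heavily reduced stand-in for the paper's transformer + CNN-BiLSTM model.

```python
import torch
import torch.nn as nn

class MultiViewFusion(nn.Module):
    """Concatenate the four per-view embeddings (Char2Vec, Token2Vec,
    AST2Vec, Rela2Vec in the paper's terms), then classify with a small
    CNN + BiLSTM head."""
    def __init__(self, view_dim: int = 64, n_families: int = 5):
        super().__init__()
        self.conv = nn.Conv1d(1, 16, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(16, 32, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(64, n_families)
    def forward(self, char_v, token_v, ast_v, rela_v):   # each (B, view_dim)
        x = torch.cat([char_v, token_v, ast_v, rela_v], dim=-1)  # (B, 4*view_dim)
        x = self.conv(x.unsqueeze(1)).transpose(1, 2)    # (B, 4*view_dim, 16)
        _, (h, _) = self.lstm(x)                         # h: (2, B, 32)
        return self.fc(torch.cat([h[0], h[1]], dim=-1))  # (B, n_families)

views = [torch.randn(4, 64) for _ in range(4)]
logits = MultiViewFusion()(*views)                       # (4, 5) family scores
```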
6. Multi-Modal Military Event Extraction Based on Knowledge Fusion
Authors: Yuyuan Xiang, Yangli Jia, Xiangliang Zhang, Zhenling Zhang. Computers, Materials & Continua (SCIE, EI), 2023, No. 10, pp. 97-114 (18 pages).
Event extraction stands as a significant endeavor within the realm of information extraction, aspiring to automatically extract structured event information from vast volumes of unstructured text. Extracting event elements from multi-modal data remains a challenging task due to the presence of a large number of images and overlapping event elements in the data. Although researchers have proposed various methods to accomplish this task, most existing event extraction models cannot address these challenges because they are only applicable to text scenarios. To solve the above issues, this paper proposes a multi-modal event extraction method based on knowledge fusion. Specifically, for event-type recognition, we use a meticulous pipeline approach that integrates multiple pre-trained models. This approach enables a more comprehensive capture of the multidimensional event semantic features present in military texts, thereby enhancing the interconnectedness of information between trigger words and events. For event element extraction, we propose a method for constructing a priori templates that combine event types with corresponding trigger words. This approach facilitates the acquisition of fine-grained input samples containing event trigger words, thus enabling the model to understand the semantic relationships between elements in greater depth. Furthermore, a fusion method for spatial mapping of textual event elements and image elements is proposed to reduce category number overload and effectively achieve multi-modal knowledge fusion. The experimental results based on the CCKS 2022 dataset show that our method achieves competitive results, with a comprehensive F1-score of 53.4%. These results validate the effectiveness of our method in extracting event elements from multi-modal data.
Keywords: event extraction, multi-modal, knowledge fusion, pre-trained models
7. Robust Symmetry Prediction with Multi-Modal Feature Fusion for Partial Shapes
Authors: Junhua Xi, Kouquan Zheng, Yifan Zhong, Longjiang Li, Zhiping Cai, Jinjing Chen. Intelligent Automation & Soft Computing (SCIE), 2023, No. 3, pp. 3099-3111 (13 pages).
In geometry processing, symmetry research benefits from global geometric features of complete shapes, but the shape of an object captured in real-world applications is often incomplete due to limited sensor resolution, a single viewpoint, and occlusion. Different from existing works that predict symmetry from the complete shape, we propose a learning approach for symmetry prediction based on a single RGB-D image. Instead of directly predicting the symmetry from incomplete shapes, our method consists of two modules, i.e., the multi-modal feature fusion module and the detection-by-reconstruction module. First, we build a channel-transformer network (CTN) to extract cross-fusion features from the RGB-D input as the multi-modal feature fusion module, which helps us aggregate features from the color and the depth separately. Then, our self-reconstruction network based on a 3D variational auto-encoder (3D-VAE) takes the global geometric features as input, followed by a symmetry prediction network to detect the symmetry. Our experiments are conducted on three public datasets: ShapeNet, YCB, and ScanNet. We demonstrate that our method can produce reliable and accurate results.
Keywords: symmetry prediction, multi-modal feature fusion, partial shapes
8. Fake News Detection Based on Cross-Modal Message Aggregation and Gated Fusion Network
Authors: Fangfang Shan, Mengyao Liu, Menghan Zhang, Zhenyu Wang. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 1521-1542 (22 pages).
Social media has become increasingly significant in modern society, but it has also turned into a breeding ground for the propagation of misleading information, potentially causing a detrimental impact on public opinion and daily life. Compared to pure text content, multimodal content significantly increases the visibility and shareability of posts. This has made the search for efficient modality representations and cross-modal information interaction methods a key focus in the field of multimodal fake news detection. To effectively address the critical challenge of accurately detecting fake news on social media, this paper proposes a fake news detection model based on cross-modal message aggregation and a gated fusion network (MAGF). MAGF first uses BERT to extract cumulative textual feature representations and word-level features, applies Faster Region-based Convolutional Neural Network (Faster R-CNN) to obtain image objects, and leverages ResNet-50 and Visual Geometry Group-19 (VGG-19) to obtain image region features and global features. The image region features and word-level text features are then projected into a low-dimensional space to calculate a text-image affinity matrix for cross-modal message aggregation. The gated fusion network combines text and image region features to obtain adaptively aggregated features. The interaction matrix is derived through an attention mechanism and further integrated with global image features using a co-attention mechanism to produce multimodal representations. Finally, these fused features are fed into a classifier for news categorization. Experiments were conducted on two public datasets, Twitter and Weibo. Results show that the proposed model achieves accuracy rates of 91.8% and 88.7% on the two datasets, respectively, significantly outperforming traditional unimodal and existing multimodal models.
Keywords: fake news detection, cross-modal message aggregation, gated fusion network, co-attention mechanism, multi-modal representation
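Two of MAGF's ingredients, the text-image affinity matrix and the gate itself, reduce to a few lines. Here is a minimal PyTorch sketch under assumed shapes (T word features and R region features in a shared dimension d); this is my reading of the general pattern, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def affinity(text_tokens, image_regions):
    """Text-image affinity matrix: cosine similarity between every word and
    every image region, the basis for cross-modal message aggregation."""
    t = F.normalize(text_tokens, dim=-1)        # (B, T, d)
    v = F.normalize(image_regions, dim=-1)      # (B, R, d)
    return t @ v.transpose(1, 2)                # (B, T, R)

class GatedFusion(nn.Module):
    """A sigmoid gate learned from both modalities decides, per feature
    dimension, how much image signal to mix into the text signal."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)
    def forward(self, text, image):             # both (B, d)
        g = torch.sigmoid(self.gate(torch.cat([text, image], dim=-1)))
        return g * text + (1 - g) * image

words, regions = torch.randn(2, 20, 256), torch.randn(2, 36, 256)
A = affinity(words, regions)                                  # (2, 20, 36)
fused = GatedFusion(256)(words.mean(1), regions.mean(1))      # (2, 256)
```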
9. An intelligent navigation experimental system based on multi-mode fusion
Authors: Rui HAN, Zhiquan FENG, Jinglan TIAN, Xue FAN, Xiaohui YANG, Qingbei GUO. Virtual Reality & Intelligent Hardware, 2020, No. 4, pp. 345-353 (9 pages).
At present, most experimental teaching systems lack the guidance of an operator, and thus users often do not know what to do during an experiment. The user load is therefore increased, and the learning efficiency of the students is decreased. To solve the problem of insufficient system interactivity and guidance, an experimental navigation system based on multi-mode fusion is proposed in this paper. The system first obtains user information by sensing the hardware devices, intelligently perceives the user's intention and the progress of the experiment according to the information acquired, and finally carries out a multi-modal intelligent navigation process for users. As an innovative aspect of this study, an intelligent multi-mode navigation system is used to guide users in conducting experiments, thereby reducing the user load and enabling the users to effectively complete their experiments. The results prove that this system can guide users in completing their experiments, effectively reduce the user load during the interaction process, and improve efficiency.
Keywords: navigation interaction, chemical experiment system, multi-mode fusion
10. Adaptive Multi-modal Fusion Instance Segmentation for CAEVs in Complex Conditions: Dataset, Framework and Verifications (Cited by: 2)
Authors: Pai Peng, Keke Geng, Guodong Yin, Yanbo Lu, Weichao Zhuang, Shuaipeng Liu. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2021, No. 5, pp. 96-106 (11 pages).
Current works on environmental perception for connected autonomous electrified vehicles (CAEVs) mainly focus on the object detection task in good weather and illumination conditions; they often perform poorly in adverse scenarios and have a vague scene parsing ability. This paper aims to develop an end-to-end sharpening mixture of experts (SMoE) fusion framework to improve the robustness and accuracy of the perception systems for CAEVs in complex illumination and weather conditions. Three original contributions make our work distinctive from the existing relevant literature. The Complex KITTI dataset is introduced, which consists of 7481 pairs of modified KITTI RGB images and the generated LiDAR dense depth maps; this dataset is finely annotated at the instance level with the proposed semi-automatic annotation method. The SMoE fusion approach is devised to adaptively learn robust kernels from complementary modalities. Comprehensive comparative experiments are implemented, and the results show that the proposed SMoE framework yields significant improvements over the other fusion techniques in adverse environmental conditions. This research proposes a SMoE fusion framework to improve the scene parsing ability of the perception systems for CAEVs in adverse conditions.
Keywords: connected autonomous electrified vehicles, multi-modal fusion, semi-automatic annotation, sharpening mixture of experts, comparative experiments
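A minimal sketch of mixture-of-experts fusion over two modality streams (RGB and LiDAR depth features), assuming PyTorch; the gating design and dimensions are illustrative stand-ins, not the paper's sharpening SMoE kernels.

```python
import torch
import torch.nn as nn

class MoEFusion(nn.Module):
    """Mixture-of-experts fusion: a gating network weights per-modality
    expert outputs, so the model can lean on depth when RGB degrades."""
    def __init__(self, dim: int, n_experts: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.gating = nn.Linear(n_experts * dim, n_experts)
    def forward(self, feats):                   # list of (B, dim) tensors
        w = torch.softmax(self.gating(torch.cat(feats, dim=-1)), dim=-1)  # (B, E)
        out = torch.stack([e(f) for e, f in zip(self.experts, feats)], dim=1)
        return (w.unsqueeze(-1) * out).sum(dim=1)                         # (B, dim)

rgb, depth = torch.randn(4, 128), torch.randn(4, 128)
fused = MoEFusion(128)([rgb, depth])            # (4, 128)
```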
11. Test method of laser paint removal based on multi-modal feature fusion
Authors: HUANG Hai-peng, HAO Ben-tian, YE De-jun, GAO Hao, LI Liang. Journal of Central South University (SCIE, EI, CAS, CSCD), 2022, No. 10, pp. 3385-3398 (14 pages).
Laser cleaning is a highly nonlinear physical process. To address the poor detection performance of single-modal (e.g., acoustic or vision) monitoring and the low utilization of inter-modal information, a multi-modal feature fusion network model was constructed based on a laser paint removal experiment. The alignment of heterogeneous data under different modalities was solved by combining piecewise aggregate approximation and the Gramian angular field. Moreover, an attention mechanism was introduced to optimize the dual-path network and dense connection network, enabling the sampled characteristics to be extracted and integrated. Consequently, multi-modal discriminant detection of laser paint removal was realized. According to the experimental results, the verification accuracy of the constructed model on the experimental dataset was 99.17%, which is 5.77% higher than the best single-modal detection result for laser paint removal. The feature extraction network was optimized by the attention mechanism, and model accuracy increased by 3.3%. The results verify the improved classification performance of the constructed multi-modal feature fusion model in detecting laser paint removal, the effective integration of acoustic data and visual image data, and the accurate detection of laser paint removal.
Keywords: laser cleaning, multi-modal fusion, image processing, deep learning
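The alignment trick named here, piecewise aggregate approximation followed by a Gramian angular field, turns a 1-D acoustic trace into an image that can be fused with the visual modality. A minimal NumPy sketch (summation-GAF variant; the segment count and toy signal are assumptions):

```python
import numpy as np

def paa(signal, n_segments):
    """Piecewise aggregate approximation: mean-pool the series into a fixed
    number of segments so its length matches the target image size."""
    return np.array([seg.mean() for seg in np.array_split(signal, n_segments)])

def gramian_angular_field(signal):
    """Summation GAF: rescale to [-1, 1], map samples to angles, and take
    the cosine of pairwise angle sums, yielding a 2-D pseudo-image."""
    span = signal.max() - signal.min()
    x = 2 * (signal - signal.min()) / (span + 1e-12) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

acoustic = np.sin(np.linspace(0, 20, 4096))          # toy acoustic trace
image = gramian_angular_field(paa(acoustic, 64))     # (64, 64), fusable with vision
```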
12. Adaptive multi-modal feature fusion for far and hard object detection
Authors: LI Yang, GE Hongwei. Journal of Measurement Science and Instrumentation (CAS, CSCD), 2021, No. 2, pp. 232-241 (10 pages).
In order to solve the difficult detection of far and hard objects caused by the sparseness and insufficient semantic information of LiDAR point clouds, a 3D object detection network with multi-modal data adaptive fusion is proposed, which makes use of multi-neighborhood information of voxels and image information. First, we design an improved ResNet that maintains the structure information of far and hard objects in low-resolution feature maps, which is more suitable for the detection task. Meanwhile, the semantics of each image feature map are enhanced by semantic information from all subsequent feature maps. Second, we extract multi-neighborhood context information with different receptive field sizes to compensate for the sparseness of the point cloud, which improves the ability of voxel features to represent the spatial structure and semantic information of objects. Finally, we propose a multi-modal feature adaptive fusion strategy that uses learnable weights to express the contribution of different modal features to the detection task, and voxel attention further enhances the fused feature expression of effective target objects. The experimental results on the KITTI benchmark show that this method outperforms VoxelNet by remarkable margins, i.e., increasing the AP by 8.78% and 5.49% on the medium and hard difficulty levels, respectively. Meanwhile, our method achieves greater detection performance than many mainstream multi-modal methods, e.g., exceeding the AP of MVX-Net by 1% on the medium and hard difficulty levels.
Keywords: 3D object detection, adaptive fusion, multi-modal data fusion, attention mechanism, multi-neighborhood features
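The adaptive fusion strategy, learnable per-modality weights applied before summation, is compact enough to sketch. Minimal PyTorch, assuming the modality features are already projected to a common shape; the softmax normalization is my assumption about how the learnable weights would be kept well-behaved.

```python
import torch
import torch.nn as nn

class AdaptiveModalWeights(nn.Module):
    """Learnable scalar weights, softmax-normalized so they stay positive
    and sum to one, express each modality's contribution before summation."""
    def __init__(self, n_modalities: int = 2):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_modalities))
    def forward(self, feats):                   # same-shape tensors, one per modality
        w = torch.softmax(self.logits, dim=0)
        return sum(wi * f for wi, f in zip(w, feats))

voxel_feat, image_feat = torch.randn(4, 64), torch.randn(4, 64)
fused = AdaptiveModalWeights(2)([voxel_feat, image_feat])   # (4, 64)
```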
13. Adaptive cross-fusion learning for multi-modal gesture recognition
Authors: Benjia ZHOU, Jun WAN, Yanyan LIANG, Guodong GUO. Virtual Reality & Intelligent Hardware, 2021, No. 3, pp. 235-247 (13 pages).
Background: Gesture recognition has attracted significant attention because of its wide range of potential applications. Although multi-modal gesture recognition has made significant progress in recent years, a popular method is still to simply fuse prediction scores at the end of each branch, which often ignores complementary features among different modalities in the early stage and does not fuse the complementary features into a more discriminative representation. Methods: This paper proposes an Adaptive Cross-modal Weighting (ACmW) scheme to exploit complementary features from RGB-D data. The scheme learns relations among different modalities by combining the features of different data streams. The proposed ACmW module contains two key functions: (1) fusing complementary features from multiple streams through an adaptive one-dimensional convolution; and (2) modeling the correlation of multi-stream complementary features in the time dimension. Through the effective combination of these two functional modules, the proposed ACmW can automatically analyze the relationship between the complementary features from different streams, and can fuse them in the spatial and temporal dimensions. Results: Extensive experiments validate the effectiveness of the proposed method and show that our method outperforms state-of-the-art methods on IsoGD and NVGesture.
Keywords: gesture recognition, multi-modal fusion, RGB-D
14. Research on Micro Flexible Flat Cable Assembly Technology Based on Deep Reinforcement Learning
Authors: Lin Jie, Chu Zhongyi, Ren Yundan. Machine Tool & Hydraulics (Peking University Core), 2024, No. 14, pp. 89-93 (5 pages).
Traditional robot control methods are limited to fixed part types and relatively regular incoming materials, completing assembly through positional relationships alone. Because flexible flat cables vary greatly in shape, they are difficult to grasp and assemble automatically, and assembly success and yield rates are low. To address the assembly of tiny flat cables less than 2 mm wide, an intelligent control algorithm for micro flexible flat cable assembly based on deep reinforcement learning was designed, using multi-modal fusion of 3D machine vision, force, tactile, and proprioceptive sensing. On this basis, an experimental setup consisting of a collaborative robot, a six-axis force sensor, and a 3D machine vision system was built, and the feasibility of the assembly method was verified across multiple environments and under uncertainty. For the assembly requirements of high-precision tiny flat cables, the deep reinforcement learning multi-modal control method greatly improves reliability and assembly success, raising assembly efficiency by more than 15% over traditional control methods. The assembly accuracy of the test system reaches ±0.1 mm, and the assembly success rate exceeds 98%.
Keywords: robot, deep reinforcement learning, multi-modal fusion technology, intelligent control algorithm
15. Multi-modality hierarchical fusion network for lumbar spine segmentation with magnetic resonance images
Authors: Han Yan, Guangtao Zhang, Wei Cui, Zhuliang Yu. Control Theory and Technology (EI, CSCD), 2024, No. 4, pp. 612-622 (11 pages).
For the analysis of spinal and disc diseases, automated tissue segmentation of the lumbar spine is vital. Due to the continuous and concentrated location of the target, the abundance of edge features, and individual differences, conventional automatic segmentation methods perform poorly. Since the success of deep learning in the segmentation of medical images has been shown in the past few years, it has been applied to this task in a number of ways. The multi-scale and multi-modal features of lumbar tissues, however, are rarely explored by deep learning methodologies. Because of the limited availability of medical images, it is crucial to effectively fuse the various modes of data collection for model training to alleviate the problem of insufficient samples. In this paper, we propose a novel multi-modality hierarchical fusion network (MHFN) for improving lumbar spine segmentation by learning robust feature representations from multi-modality magnetic resonance images. An adaptive group fusion module (AGFM) is introduced to fuse features from various modes and extract cross-modality features that could be valuable. Furthermore, to combine features from low to high levels across modalities, we design a hierarchical fusion structure based on AGFM. Compared to other feature fusion methods, AGFM is more effective based on experimental results on multi-modality MR images of the lumbar spine. To further validate segmentation accuracy, we compare our network with baseline fusion structures. Compared to the baseline fusion structures (input-level: 76.27%, layer-level: 78.10%, decision-level: 79.14%), our network segments fractured vertebrae more accurately (85.05%).
Keywords: lumbar spine segmentation, deep learning, multi-modality fusion, feature fusion
16. Explainable Conformer Network for Detection of COVID-19 Pneumonia from Chest CT Scan: From Concepts toward Clinical Explainability
Authors: Mohamed Abdel-Basset, Hossam Hawash, Mohamed Abouhawwash, S.S. Askar, Alshaimaa A. Tantawy. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1171-1187 (17 pages).
The early implementation of treatment therapies necessitates the swift and precise identification of COVID-19 pneumonia through the analysis of chest CT scans. This study investigates the indispensable need for precise and interpretable diagnostic tools to improve clinical decision-making for COVID-19 diagnosis. This paper proposes a novel deep learning approach, called Conformer Network, for explainable discrimination of viral pneumonia depending on the lung Region of Infection (ROI) within a single-modality radiographic CT scan. First, an efficient U-shaped transformer network is integrated for lung image segmentation; then, a robust transfer learning technique is introduced to design a robust feature extractor based on the pre-trained lightweight Big Transfer (BiT-L) model, fine-tuned on medical data to effectively learn the patterns of infection in the input image. Second, this work presents a visual explanation method to guarantee clinical explainability for decisions made by Conformer Network. Experimental evaluation on real-world CT data demonstrated that the diagnostic accuracy of our model outperforms cutting-edge studies with statistical significance. Conformer Network achieves a detection accuracy of 97.40% under cross-validation settings. Our model not only achieves high sensitivity and specificity but also affords visualizations of the salient features contributing to each classification decision, enhancing the overall transparency and trustworthiness of our model. The findings provide obvious implications for the ability of our model to empower clinical staff by generating transparent intuitions about the features driving diagnostic decisions.
Keywords: deep learning, COVID-19, multi-modal medical image fusion, diagnostic image fusion
17. Research on Fine-Grained Recognition Method for Sensitive Information in Social Networks Based on CLIP
Authors: Menghan Zhang, Fangfang Shan, Mengyao Liu, Zhenyu Wang. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 1565-1580 (16 pages).
With the emergence and development of social networks, people can stay in touch with friends, family, and colleagues more quickly and conveniently, regardless of their location. This ubiquitous digital internet environment has also led to large-scale disclosure of personal privacy. Due to the complexity and subtlety of sensitive information, traditional sensitive information identification technologies cannot thoroughly address the characteristics of each piece of data, thus weakening the deep connections between text and images. In this context, this paper adopts the CLIP model as a modality discriminator. By using contrastive learning between sensitive image descriptions and images, the similarity between the images and the sensitive descriptions is obtained to determine whether the images contain sensitive information. This provides the basis for identifying sensitive information using different modalities. Specifically, if the original data does not contain sensitive information, only single-modality text-sensitive information identification is performed; if the original data contains sensitive information, multi-modality sensitive information identification is conducted. This approach allows for differentiated processing of each piece of data, thereby achieving more accurate sensitive information identification. The aforementioned modality discriminator can address the limitations of existing sensitive information identification technologies, making the identification of sensitive information from the original data more appropriate and precise.
Keywords: deep learning, social networks, sensitive information recognition, multi-modal fusion
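A sketch of the CLIP-as-modality-discriminator step described above, assuming the Hugging Face transformers API; the checkpoint name, prompts, file name, and threshold are illustrative assumptions, not taken from the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical sensitive-category descriptions and input post image.
sensitive_prompts = ["a photo of an ID card", "a photo of a bank card"]
image = Image.open("post.jpg")

inputs = processor(text=sensitive_prompts, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    sims = model(**inputs).logits_per_image.softmax(dim=-1)   # (1, n_prompts)

# Route to multi-modal identification only when the image looks sensitive;
# otherwise fall back to text-only identification. Threshold is an assumption.
use_multimodal = sims.max().item() > 0.5
```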
18. Improving VQA via Dual-Level Feature Embedding Network
Authors: Yaru Song, Huahu Xu, Dikai Fang. Intelligent Automation & Soft Computing, 2024, No. 3, pp. 397-416 (20 pages).
Visual Question Answering (VQA) has sparked widespread interest as a crucial task in integrating vision and language. VQA primarily uses attention mechanisms to associate relevant visual regions with input questions and answer them effectively. The detection-based features extracted by an object detection network capture the visual attention distribution over predetermined detection frames and provide object-level insights, answering questions about foreground objects more effectively. However, they cannot answer questions about background forms outside the detection boxes due to the lack of fine-grained details, which is the advantage of grid-based features. In this paper, we propose a Dual-Level Feature Embedding (DLFE) network, which effectively integrates grid-based and detection-based image features in a unified architecture to realize the complementary advantages of both. Specifically, in DLFE, a novel Dual-Level Self-Attention (DLSA) module is first proposed to mine the intrinsic properties of the two features, where Positional Relation Attention (PRA) is designed to model the position information. Then, we propose a Feature Fusion Attention (FFA) module to address the semantic noise caused by the fusion of the two features and construct an alignment graph to enhance and align the grid and detection features. Finally, we use co-attention to learn the interactive features of the image and question and answer questions more accurately. Our method improves significantly over the baseline, increasing accuracy from 66.01% to 70.63% on the test-std set of VQA 1.0 and from 66.24% to 70.91% on the test-std set of VQA 2.0.
Keywords: visual question answering, multi-modal feature processing, attention mechanisms, cross-model fusion
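The co-attention step at the end of DLFE is the symmetric counterpart of the one-way cross-modal attention sketched earlier in this listing. A minimal PyTorch rendering, with dimensions as assumptions:

```python
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    """Symmetric co-attention: question tokens attend over image features
    and vice versa, so both representations are updated."""
    def __init__(self, dim: int, n_heads: int = 4):
        super().__init__()
        self.q2v = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.v2q = nn.MultiheadAttention(dim, n_heads, batch_first=True)
    def forward(self, question, vision):        # (B, T, d), (B, R, d)
        q_msg, _ = self.q2v(question, vision, vision)
        v_msg, _ = self.v2q(vision, question, question)
        return question + q_msg, vision + v_msg  # residuals on both sides

q, v = torch.randn(2, 14, 512), torch.randn(2, 49, 512)
q_fused, v_fused = CoAttention(512)(q, v)        # (2, 14, 512), (2, 49, 512)
```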
19. Intelligent Breast Cancer Prediction Empowered with Fusion and Deep Learning (Cited by: 6)
Authors: Shahan Yamin Siddiqui, Iftikhar Naseer, Muhammad Adnan Khan, Muhammad Faheem Mushtaq, Rizwan Ali Naqvi, Dildar Hussain, Amir Haider. Computers, Materials & Continua (SCIE, EI), 2021, No. 4, pp. 1033-1049 (17 pages).
Breast cancer is the most frequently detected tumor and could eventually result in a significant increase in female mortality globally. According to clinical statistics, one woman out of eight is under the threat of breast cancer. Lifestyle and inheritance patterns may be a reason behind its spread among women. However, some preventive measures, such as tests and periodic clinical checks, can mitigate its risk, thereby improving its survival chances substantially. Early diagnosis and initial-stage treatment can help increase the survival rate. For that purpose, pathologists can gather support from nondestructive and efficient computer-aided diagnosis (CAD) systems. This study explores a breast cancer CAD method relying on multimodal medical imaging and decision-based fusion. In multimodal medical imaging fusion, a deep learning approach is applied, obtaining 97.5% accuracy with a 2.5% miss rate for breast cancer prediction. A deep extreme learning machine technique applied to feature-based data provided 97.41% accuracy. Finally, decision-based fusion was applied to both breast cancer prediction models to diagnose its stages, resulting in an overall accuracy of 97.97%. The proposed system model provides more accurate results compared with other state-of-the-art approaches, rapidly diagnosing breast cancer to decrease its mortality rate.
Keywords: fusion, feature, breast cancer prediction, deep learning, convolutional neural network, multi-modal medical image fusion, decision-based fusion
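Decision-based fusion as used here just combines the two models' output probabilities. A minimal NumPy sketch, where the fusion weight and the toy probability vectors are assumptions:

```python
import numpy as np

def decision_fusion(p_imaging, p_features, w=0.5):
    """Decision-level fusion: a weighted average of two models'
    class-probability vectors; w would be tuned on validation data."""
    return w * np.asarray(p_imaging) + (1 - w) * np.asarray(p_features)

# Two predictors with different confidence; fusion pools their evidence.
fused = decision_fusion([0.9, 0.1], [0.6, 0.4])   # -> [0.75, 0.25]
```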
20. Visualization of Head and Neck Cancer Models with a Triple Fusion Reporter Gene
Authors: Ying Zheng, Qiaoya Lin, Honglin Jin, Juan Chen, Zhihong Zhang. Journal of Innovative Optical Health Sciences (SCIE, EI, CAS), 2012, No. 4, pp. 48-56 (9 pages).
The development of experimental animal models for head and neck tumors generally relies on bioluminescence imaging to achieve dynamic monitoring of tumor growth and metastasis due to the complicated anatomical structures. Since bioluminescence imaging is largely affected by the intracellular luciferase expression level and external D-luciferin concentrations, its imaging accuracy requires further confirmation. Here, a new triple fusion reporter gene, which consists of a herpes simplex virus type 1 thymidine kinase (TK) gene for radioactive imaging, a far-red fluorescent protein (mLumin) gene for fluorescent imaging, and a firefly luciferase gene for bioluminescence imaging, was introduced for in vivo observation of head and neck tumors through multi-modality imaging. Results show that fluorescence and bioluminescence signals from mLumin and luciferase, respectively, were clearly observed in tumor cells, and TK could activate the suicide pathway of the cells in the presence of the nucleotide analog ganciclovir (GCV), demonstrating the effectiveness of the individual functions of each gene. Moreover, subcutaneous and metastasis animal models for head and neck tumors using the fusion reporter gene-expressing cell lines were established, allowing multi-modality imaging in vivo. Together, the established tumor models of head and neck cancer based on the newly developed triple fusion reporter gene are ideal for monitoring tumor growth, assessing drug therapeutic efficacy, and verifying the effectiveness of new treatments.
Keywords: head and neck cancer, tumor metastasis model, triple fusion reporter gene, far-red fluorescent protein, firefly luciferase, multi-modality imaging