Journal Articles
149 articles found
1. Multi-Modality Medical Image Fusion Based on Wavelet Analysis and Quality Evaluation (Cited: 3)
Authors: Yu Lifeng & Zu Donglin (Institute of Heavy Ion Physics, Peking University, 100871, P. R. China); Wang Weidong (General Hospital of PLA, Beijing 100853, P. R. China); Bao Shanglian (Institute of Heavy Ion Physics, Peking University, 100871, P. R. China). Journal of Systems Engineering and Electronics (SCIE EI CSCD), 2001, Issue 1, pp. 42-48.
Abstract: Multi-modality medical image fusion has increasingly important applications in medical image analysis and understanding. In this paper, we develop and apply a multi-resolution method based on a wavelet pyramid to fuse medical images from different modalities such as PET-MRI and CT-MRI. In particular, we evaluate the fusion results obtained under different selection rules and derive an optimal combination of fusion parameters.
Keywords: multi-modality, medical image fusion, multi-resolution analysis, wavelet
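The fusion scheme the abstract describes — decompose each registered modality with a wavelet pyramid, merge coefficients under a selection rule, and reconstruct — can be sketched with PyWavelets. This is a minimal illustration of the general technique, not the authors' exact pipeline; the max-absolute rule on detail sub-bands and averaging on the approximation band are common default selection rules, assumed here for illustration.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    """Fuse two registered single-channel images of equal size."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    # Average the coarse approximation band.
    fused = [(ca[0] + cb[0]) / 2.0]
    # Max-absolute selection rule on each detail sub-band.
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, wavelet)

# Random stand-ins for registered PET/MRI slices.
a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
f = wavelet_fuse(a, b)
```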
2. Robust triboelectric information-mat enhanced by multi-modality deep learning for smart home
Authors: Yanqin Yang, Qiongfeng Shi, Zixuan Zhang, Xuechuan Shan, Budiman Salam, Chengkuo Lee. InfoMat (SCIE CAS CSCD), 2023, Issue 1, pp. 139-160.
Abstract: In the metaverse, a digital-twin smart home is a vital platform for immersive communication between the physical and virtual worlds. Triboelectric nanogenerator (TENG) sensors contribute substantially to smart-home monitoring. However, TENG deployment is hindered by unstable output under environmental changes. Herein, we develop a digital-twin smart home using a robust all-TENG-based information mat (InfoMat), which consists of an in-home mat array and an entry mat. The interdigital electrode design allows environment-insensitive ratiometric readout from the mat array to cancel the commonly experienced environmental variations. Arbitrary position sensing is also achieved thanks to the interval arrangement of the mat pixels. Concurrently, the two-channel entry mat generates multi-modality information that raises 10-user identification accuracy from 93% to 99% compared with the one-channel case. Furthermore, the digital-twin smart home is visualized by projecting smart-home information into virtual reality in real time, including access authorization, position, walking trajectory, and dynamic activities/sports.
Keywords: digital twin, environment-insensitive, multi-modality deep learning, scalability, smart home, triboelectric information-mat
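The environment-insensitive ratiometric readout works because a humidity- or temperature-driven gain change scales both interdigital-electrode channels by the same factor, which cancels in the ratio. A toy numpy illustration of that cancellation; the signal model and channel names are assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
position = rng.uniform(0.2, 0.8, size=100)   # true contact position per step
env_gain = rng.uniform(0.5, 1.5, size=100)   # environmental drift (humidity etc.)

# Two complementary interdigital-electrode channels share the same drift.
v1 = env_gain * position
v2 = env_gain * (1.0 - position)

# Ratiometric readout cancels the common environmental factor.
ratio = v1 / (v1 + v2)
print(np.allclose(ratio, position))  # True: drift removed
```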
3. Emma: An accurate, efficient, and multi-modality strategy for autonomous vehicle angle prediction
Authors: Keqi Song, Tao Ni, Linqi Song, Weitao Xu. Intelligent and Converged Networks (EI), 2023, Issue 1, pp. 41-49.
Abstract: Autonomous driving and self-driving vehicles have become popular with customers because of their convenience. Real-time vehicle angle prediction is one of the most prevalent topics in the autonomous driving industry. However, existing methods of vehicle angle prediction utilize only single-modal data, such as images captured by the camera, which limits the performance and efficiency of the prediction system. In this paper, we present Emma, a novel vehicle angle prediction strategy that achieves multi-modal prediction and is more efficient. Specifically, Emma exploits both images and inertial measurement unit (IMU) signals with a fusion network for multi-modal data fusion and vehicle angle prediction. Moreover, we design and implement a few-shot learning module in Emma for fast domain adaptation to varied scenarios (e.g., different vehicle models). Evaluation results demonstrate that Emma achieves 97.5% overall accuracy in predicting three vehicle angle parameters (yaw, pitch, and roll), outperforming traditional single-modality methods by approximately 16.7%-36.8%. Additionally, the few-shot learning module shows promising adaptive ability, with 79.8% and 88.3% overall accuracy in 5-shot and 10-shot settings, respectively. Finally, empirical results show that Emma reduces energy consumption by 39.7% when running on the Arduino UNO board.
Keywords: multi-modality, autonomous driving, vehicle angle prediction, few-shot learning
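A minimal PyTorch sketch of the kind of two-branch fusion network the abstract describes: a small CNN branch for the camera frame, an MLP branch for the IMU vector, concatenated and regressed to (yaw, pitch, roll). Layer sizes and names are illustrative assumptions, not Emma's actual architecture.

```python
import torch
import torch.nn as nn

class AngleFusionNet(nn.Module):
    def __init__(self, imu_dim=6):
        super().__init__()
        self.img_branch = nn.Sequential(      # image -> feature vector
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.imu_branch = nn.Sequential(      # IMU -> feature vector
            nn.Linear(imu_dim, 32), nn.ReLU())
        self.head = nn.Linear(32 + 32, 3)     # -> yaw, pitch, roll

    def forward(self, image, imu):
        z = torch.cat([self.img_branch(image), self.imu_branch(imu)], dim=1)
        return self.head(z)

net = AngleFusionNet()
angles = net(torch.randn(4, 3, 64, 64), torch.randn(4, 6))  # shape (4, 3)
```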
4. Fake News Detection Based on Text-Modal Dominance and Fusing Multiple Multi-Model Clues
Authors: Lifang Fu, Huanxin Peng, Changjin Ma, Yuhan Liu. Computers, Materials & Continua (SCIE EI), 2024, Issue 3, pp. 4399-4416.
Abstract: In recent years, efficiently and accurately identifying multi-modal fake news has become increasingly challenging. First, multi-modal data provides more evidence, but not all of it is equally important. Second, social structure information has proven effective in fake news detection, and combining it while reducing noise is critical. Unfortunately, existing approaches fail to handle these problems. This paper proposes a multi-modal fake news detection framework based on Text-modal Dominance and fusing Multiple Multi-model Cues (TD-MMC), which utilizes three valuable multi-modal clues: text-modal importance, text-image complementarity, and text-image inconsistency. TD-MMC is dominated by textual content and assisted by image information, while using social network information to enhance text representation. To reduce interference from irrelevant social-structure information, we use a unidirectional cross-modal attention mechanism to selectively learn the social structure's features. A cross-modal attention mechanism is adopted to obtain text-image cross-modal features while retaining textual features to reduce the loss of important information. In addition, TD-MMC employs a new multi-modal loss to improve the model's generalization ability. Extensive experiments on two public real-world English and Chinese datasets show that our proposed model outperforms state-of-the-art methods on classification evaluation metrics.
Keywords: fake news detection, cross-modal attention mechanism, multi-modal fusion, social network, transfer learning
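The unidirectional cross-modal attention used to keep text dominant can be sketched with a standard attention layer: text features act as queries, and image (or social-graph) features act only as keys/values, so information flows into the text representation but not the other way. A hedged PyTorch sketch with assumed dimensions, not the paper's implementation:

```python
import torch
import torch.nn as nn

d_model = 256
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

text_feats = torch.randn(8, 32, d_model)    # queries: text token features
image_feats = torch.randn(8, 49, d_model)   # keys/values: image patch features

# Unidirectional: text attends to image; image features are never updated.
enhanced_text, _ = attn(query=text_feats, key=image_feats, value=image_feats)

# A residual keeps the original textual information, as the abstract emphasizes.
fused = text_feats + enhanced_text          # shape (8, 32, 256)
```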
5. Multi-modal knowledge graph inference via media convergence and logic rule
Authors: Feng Lin, Dongmei Li, Wenbin Zhang, Dongsheng Shi, Yuanzhou Jiao, Qianzhong Chen, Yiying Lin, Wentao Zhu. CAAI Transactions on Intelligence Technology (SCIE EI), 2024, Issue 1, pp. 211-221.
Abstract: Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for a conventional knowledge graph to utilize multi-media features, because introducing a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address this issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) is proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention of different media features of entities during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on entity representations that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features and knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
Keywords: logic rule, media convergence, multi-modal knowledge graph inference, representation learning
6. Generative Multi-Modal Mutual Enhancement Video Semantic Communications
Authors: Yuanle Chen, Haobo Wang, Chunyu Liu, Linyi Wang, Jiaxin Liu, Wei Wu. Computer Modeling in Engineering & Sciences (SCIE EI), 2024, Issue 6, pp. 2985-3009.
Abstract: Recently, there have been significant advancements in the study of semantic communication in single-modal scenarios. However, the ability to process information in multi-modal environments remains limited. Inspired by research and applications of natural language processing across different modalities, our goal is to accurately extract frame-level semantic information from videos and ultimately transmit high-quality video. Specifically, we propose a deep learning-based Multi-Modal Mutual Enhancement Video Semantic Communication system, called M3E-VSC. Built upon a Vector-Quantized Generative Adversarial Network (VQGAN), our system aims to leverage mutual enhancement among different modalities by using text as the main carrier of transmission. With it, semantic information can be extracted from key-frame images and audio of the video, and a differential operation is performed to ensure that the extracted text conveys accurate semantic information with fewer bits, thus improving the capacity of the system. Furthermore, a multi-frame semantic detection module is designed to facilitate semantic transitions during video generation. Simulation results demonstrate that our proposed model maintains high robustness in complex noise environments, particularly under low signal-to-noise ratio conditions, improving the accuracy and speed of semantic transmission in video communication by approximately 50 percent.
Keywords: generative adversarial networks, multi-modal mutual enhancement, video semantic transmission, deep learning
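Robustness at low SNR is typically evaluated by pushing the transmitted symbol stream through an additive-noise channel and measuring errors across SNR levels. A generic numpy illustration of that evaluation loop (BPSK over AWGN), offered only as a sketch of the channel-simulation idea and not the M3E-VSC system itself:

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=100_000)
symbols = 2.0 * bits - 1.0                       # BPSK mapping: 0 -> -1, 1 -> +1

for snr_db in [0, 5, 10]:
    # Unit symbol energy; noise std per real dimension from the linear SNR.
    noise_std = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
    received = symbols + rng.normal(0.0, noise_std, symbols.shape)
    ber = np.mean((received > 0).astype(int) != bits)
    print(f"SNR {snr_db:2d} dB -> bit error rate {ber:.4f}")
```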
7. Unsupervised multi-modal image translation based on the squeeze-and-excitation mechanism and feature attention module
Authors: HU Zhentao, HU Chonghao, YANG Haoran, SHUAI Weiwei. High Technology Letters (EI CAS), 2024, Issue 1, pp. 23-30.
Abstract: Unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain. However, the advanced approaches available employ a multi-generator mechanism to model the different domain mappings, which results in inefficient training of the neural networks and mode collapse, limiting the diversity of the generated images. To address this issue, this paper introduces a multi-modal unsupervised image translation framework that uses a single generator to perform multi-modal image translation. Specifically, a domain code is first introduced to explicitly control the different generation tasks. Second, the paper brings in the squeeze-and-excitation (SE) mechanism and a feature attention (FA) module. Finally, the model integrates multiple optimization objectives to ensure efficient multi-modal translation. Qualitative and quantitative experiments on multiple unpaired benchmark image translation datasets demonstrate the benefits of the proposed method over existing technologies. Overall, the experimental results show that the proposed method is versatile and scalable.
Keywords: multi-modal image translation, generative adversarial network (GAN), squeeze-and-excitation (SE) mechanism, feature attention (FA) module
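The squeeze-and-excitation mechanism itself is a compact, well-defined module: global-average-pool the feature map ("squeeze"), pass the result through a small bottleneck MLP ending in a sigmoid ("excitation"), and rescale the channels. A standard PyTorch rendering of the generic SE block, not the paper's full generator:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                 # squeeze: global average pool
        w = self.fc(w)                         # excitation: per-channel weights
        return x * w[:, :, None, None]         # rescale each channel

out = SEBlock(64)(torch.randn(2, 64, 32, 32))  # same shape as the input
```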
8. Multifunctional microcapsules: A theranostic agent for US/MR/PAT multi-modality imaging and synergistic chemo-photothermal osteosarcoma therapy (Cited: 2)
Authors: Hufei Wang, Sijia Xu, Daoyang Fan, Xiaowen Geng, Guang Zhi, Decheng Wu, Hong Shen, Fei Yang, Xiao Zhou, Xing Wang. Bioactive Materials (SCIE), 2022, Issue 1, pp. 453-465.
Abstract: The development of versatile theranostic agents that simultaneously integrate therapeutic and diagnostic features remains a clinical urgency. Herein, we aimed to prepare uniform PEGylated poly(lactic-co-glycolic acid) (PLGA) microcapsules (PB@(Fe3O4@PEG-PLGA) MCs) with superparamagnetic Fe3O4 nanoparticles embedded in the shell and Prussian blue (PB) NPs built into the cavity via a premix membrane emulsification (PME) method. On account of their eligible geometry and multiple load capacity, these MCs could be used as efficient multi-modality contrast agents to simultaneously enhance the contrast of US, MR, and PAT imaging. The in-built PB NPs furnished the MCs with excellent photothermal conversion properties, and the embedded Fe3O4 NPs enabled magnetic location for the fabrication of a targeted drug delivery system. Notably, after further in-situ encapsulation of the antitumor drug DOX, the (PB+DOX)@(Fe3O4@PEG-PLGA) MCs gained further advantages, achieving near-infrared (NIR)-responsive drug delivery and magnetically guided chemo-photothermal synergistic osteosarcoma therapy. In vitro and in vivo studies revealed that these biocompatible (PB+DOX)@(Fe3O4@PEG-PLGA) MCs could effectively target tumor tissue, with a superior therapeutic effect against the invasion of osteosarcoma and alleviation of osteolytic lesions, and will be developed as a smart platform integrating multi-modality imaging capabilities with highly efficacious synergistic therapy.
Keywords: multi-modality imaging, microcapsule, photothermal therapy, drug delivery, osteosarcoma
9. Blind identification of occurrence of multi-modality in laser-feedback-based self-mixing sensor
Authors: Muhammad Usman, Usman Zabit, Olivier D. Bernal, Gulistan Raja. Chinese Optics Letters (SCIE EI CAS CSCD), 2020, Issue 1, pp. 29-33.
Abstract: Self-mixing interferometry (SMI) is an attractive sensing scheme that typically relies on mono-modal operation of the employed laser diode. However, a change in laser modality can occur due to changing operating conditions, so detecting the occurrence of multi-modality in SMI signals is necessary to avoid erroneous metric measurements. Typically, processing multi-modal SMI signals is a difficult task due to the diverse and complex nature of such signals. The proposed techniques, however, can significantly ease this task by identifying the modal state of SMI signals with a 100% success rate so that interferometric fringes can be correctly interpreted for metric sensing applications.
Keywords: self-mixing interferometry, laser diode, multi-modality, optical feedback
10. M3SC: A Generic Dataset for Mixed Multi-Modal (MMM) Sensing and Communication Integration (Cited: 3)
Authors: Xiang Cheng, Ziwei Huang, Lu Bai, Haotian Zhang, Mingran Sun, Boxun Liu, Sijiang Li, Jianan Zhang, Minson Lee. China Communications (SCIE CSCD), 2023, Issue 11, pp. 13-29.
Abstract: The sixth generation (6G) of mobile communication systems is witnessing a new paradigm shift, i.e., the integrated sensing-communication system. A comprehensive dataset is a prerequisite for 6G integrated sensing-communication research. This paper develops a novel simulation dataset, named M3SC, for mixed multi-modal (MMM) sensing-communication integration, and further presents the generation framework of the M3SC dataset. To obtain multi-modal sensory data in physical space and communication data in electromagnetic space, we utilize AirSim and WaveFarer to collect multi-modal sensory data and exploit Wireless InSite to collect communication data. Furthermore, in-depth integration and precise alignment of AirSim, WaveFarer, and Wireless InSite are achieved. The M3SC dataset covers various weather conditions, multiple frequency bands, and different times of day. Currently, the M3SC dataset contains 1500 snapshots, each including 80 RGB images, 160 depth maps, 80 LiDAR point clouds, 256 sets of mmWave waveforms with 8 radar point clouds, and 72 channel impulse response (CIR) matrices, thus totaling 120,000 RGB images, 240,000 depth maps, 120,000 LiDAR point clouds, 384,000 sets of mmWave waveforms with 12,000 radar point clouds, and 108,000 CIR matrices. The data processing results present the multi-modal sensory information and the statistical properties of the communication channel. Finally, the MMM sensing-communication applications that the M3SC dataset can support are discussed.
Keywords: multi-modal sensing, ray-tracing, sensing-communication integration, simulation dataset
11. Multi-task Learning of Semantic Segmentation and Height Estimation for Multi-modal Remote Sensing Images (Cited: 1)
Authors: Mengyu WANG, Zhiyuan YAN, Yingchao FENG, Wenhui DIAO, Xian SUN. Journal of Geodesy and Geoinformation Science (CSCD), 2023, Issue 4, pp. 27-39.
Abstract: Deep learning based methods have been successfully applied to semantic segmentation of optical remote sensing images. However, as more and more remote sensing data becomes available, comprehensively utilizing multi-modal remote sensing data to break through the performance bottleneck of single-modal interpretation is a new challenge. In addition, semantic segmentation and height estimation in remote sensing data are two strongly correlated tasks, but existing methods usually study each task separately, which leads to high computational resource overhead. To this end, we propose a Multi-Task learning framework for Multi-Modal remote sensing images (MM_MT). Specifically, we design a Cross-Modal Feature Fusion (CMFF) method, which aggregates complementary information from different modalities to improve the accuracy of semantic segmentation and height estimation. Besides, a dual-stream multi-task learning method is introduced for Joint Semantic Segmentation and Height Estimation (JSSHE), extracting common features in a shared network to save time and resources, and then learning task-specific features in two task branches. Experimental results on the public multi-modal remote sensing image dataset Potsdam show that, compared to training the two tasks independently, multi-task learning saves 20% of training time and achieves competitive performance, with an mIoU of 83.02% for semantic segmentation and an accuracy of 95.26% for height estimation.
Keywords: multi-modal, multi-task, semantic segmentation, height estimation, convolutional neural network
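The dual-stream idea — one shared encoder, two task-specific heads trained with a joint loss — can be sketched briefly in PyTorch. The toy encoder, shapes, and loss weighting are assumptions for illustration; the paper's CMFF fusion and backbone are more elaborate.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, classes=6):
        super().__init__()
        self.encoder = nn.Sequential(            # shared features for both tasks
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(32, classes, 1)    # semantic segmentation logits
        self.height_head = nn.Conv2d(32, 1, 1)       # height regression

    def forward(self, x):
        f = self.encoder(x)
        return self.seg_head(f), self.height_head(f)

net = MultiTaskNet()
img = torch.randn(2, 3, 64, 64)
seg_logits, height = net(img)
# Joint loss: cross-entropy for segmentation plus L1 for height.
loss = nn.CrossEntropyLoss()(seg_logits, torch.randint(0, 6, (2, 64, 64))) \
     + nn.L1Loss()(height, torch.randn(2, 1, 64, 64))
loss.backward()
```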
12. A Multi-mode Electronic Load Sensing Control Scheme with Power Limitation and Pressure Cut-off for Mobile Machinery
Authors: Min Cheng, Bolin Sun, Ruqi Ding, Bing Xu. Chinese Journal of Mechanical Engineering (SCIE EI CAS CSCD), 2023, Issue 1, pp. 157-170.
Abstract: In mobile machinery, hydro-mechanical pumps are increasingly replaced by electronically controlled pumps to improve the automation level, but diversified control functions (e.g., power limitation and pressure cut-off) are integrated into the electronic controller only at the pump level, leading to potential instability of the overall system. To solve this problem, a multi-mode electrohydraulic load sensing (MELS) control scheme is proposed, with particular attention to switching stability at the system level; it includes four working modes: flow control, load sensing, power limitation, and pressure control. Depending on the actual working requirements, the switching rules for the different modes and the switching directions (i.e., whether modes can be switched bilaterally or unilaterally) are defined. The priority of the modes is also defined, from high to low: pressure control, power limitation, load sensing, and flow control. When multiple switching rules are satisfied at the same time, the system switches to the control mode with the highest priority. In addition, the switching stability between the flow control and pressure control modes is analyzed, and controller parameters that guarantee switching stability are obtained. A comparative study is carried out on a test rig with a 2-ton hydraulic excavator. The results show that the MELS controller achieves proper flow supplement, power limitation, and pressure cut-off, with good stability when switching between the control modes. This research proposes a MELS control method that stabilizes multi-mode switching of the hydraulic system of mobile machinery under different working conditions.
Keywords: hydraulic control, load sensing, multi-mode, power limitation, mobile machinery
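The mode arbitration the abstract describes — evaluate each mode's trigger condition and switch to the highest-priority mode whose rule is satisfied — reduces to a priority-ordered scan. A schematic Python sketch; the thresholds and signal names are placeholders, not the controller's actual rules:

```python
# Priority, high to low: pressure control > power limitation > load sensing > flow control.
P_CUTOFF = 200e5      # pressure cut-off threshold [Pa]  (placeholder value)
POWER_MAX = 50e3      # power limit [W]                  (placeholder value)

def select_mode(pressure, flow_demand, actuator_cmd):
    power = pressure * flow_demand            # hydraulic power estimate
    if pressure >= P_CUTOFF:
        return "pressure_control"             # highest priority: protect the system
    if power >= POWER_MAX:
        return "power_limitation"             # keep the prime mover from stalling
    if actuator_cmd > 0:                      # an actuator is being commanded
        return "load_sensing"
    return "flow_control"                     # default mode

print(select_mode(pressure=210e5, flow_demand=1e-3, actuator_cmd=0.5))
# -> pressure_control (its rule fires and it outranks the others)
```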
13. A survey of multi-modal learning theory
Authors: HUANG Yu, HUANG Longbo. 中山大学学报(自然科学版)(中英文) [Journal of Sun Yat-sen University (Natural Science Edition)] (CAS CSCD, PKU Core), 2023, Issue 5, pp. 38-49.
Abstract: Deep multi-modal learning, a rapidly growing field with a wide range of practical applications, aims to effectively utilize and integrate information from multiple sources, known as modalities. Despite its impressive empirical performance, the theoretical foundations of deep multi-modal learning have yet to be fully explored. In this paper, we undertake a comprehensive survey of recent developments in multi-modal learning theory, focusing on the fundamental properties that govern this field. Our goal is to provide a thorough collection of current theoretical tools for analyzing multi-modal learning, to clarify their implications for practitioners, and to suggest future directions for establishing a solid theoretical foundation for deep multi-modal learning.
Keywords: multi-modal learning, machine learning theory, optimization, generalization
14. PowerDetector: Malicious PowerShell Script Family Classification Based on Multi-Modal Semantic Fusion and Deep Learning
Authors: Xiuzhang Yang, Guojun Peng, Dongni Zhang, Yuhang Gao, Chenguang Li. China Communications (SCIE CSCD), 2023, Issue 11, pp. 202-224.
Abstract: PowerShell has been widely deployed in fileless malware and advanced persistent threat (APT) attacks due to its high stealthiness and living-off-the-land technique. However, existing works mainly focus on deobfuscation and malicious detection, lacking malicious PowerShell family classification and behavior analysis. Moreover, state-of-the-art methods fail to capture fine-grained features and semantic relationships, resulting in low robustness and accuracy. To this end, we propose PowerDetector, a novel malicious PowerShell script detector based on multi-modal semantic fusion and deep learning. Specifically, we design four feature extraction methods to extract key features from characters, tokens, the abstract syntax tree (AST), and a semantic knowledge graph. Then, we design four embeddings (i.e., Char2Vec, Token2Vec, AST2Vec, and Rela2Vec) and construct a multi-modal fusion algorithm to concatenate the feature vectors from the different views. Finally, we propose a combined model based on a transformer and CNN-BiLSTM to implement PowerShell family detection. Our experiments with five types of PowerShell attacks show that PowerDetector can accurately detect various obfuscated and stealthy PowerShell scripts, with a precision of 0.9402, a recall of 0.9358, and an F1-score of 0.9374. Furthermore, through single-modal and multi-modal comparison experiments, we demonstrate that PowerDetector's multi-modal embedding and deep learning model achieve better accuracy and even identify more unknown attacks.
Keywords: deep learning, malicious family detection, multi-modal semantic fusion, PowerShell
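The multi-modal fusion step described — one embedding per view, concatenated into a single feature vector for the classifier — looks roughly like the sketch below. Random tensors stand in for the Char2Vec/Token2Vec/AST2Vec/Rela2Vec outputs, and the dimensions are assumptions; the real embeddings are learned.

```python
import torch
import torch.nn as nn

# Stand-ins for the four per-script embeddings (character, token, AST, relation).
char_vec, token_vec = torch.randn(16, 128), torch.randn(16, 128)
ast_vec, rela_vec = torch.randn(16, 64), torch.randn(16, 64)

# Multi-modal fusion by concatenating the per-view feature vectors.
fused = torch.cat([char_vec, token_vec, ast_vec, rela_vec], dim=1)  # (16, 384)

classifier = nn.Linear(fused.shape[1], 5)   # five PowerShell attack families
family_logits = classifier(fused)           # (16, 5)
```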
15. Multi-Modal Military Event Extraction Based on Knowledge Fusion
Authors: Yuyuan Xiang, Yangli Jia, Xiangliang Zhang, Zhenling Zhang. Computers, Materials & Continua (SCIE EI), 2023, Issue 10, pp. 97-114.
Abstract: Event extraction is a significant endeavor within the realm of information extraction, aspiring to automatically extract structured event information from vast volumes of unstructured text. Extracting event elements from multi-modal data remains challenging due to the presence of a large number of images and overlapping event elements in the data. Although researchers have proposed various methods to accomplish this task, most existing event extraction models cannot address these challenges because they are only applicable to text scenarios. To solve the above issues, this paper proposes a multi-modal event extraction method based on knowledge fusion. Specifically, for event-type recognition, we use a meticulous pipeline approach that integrates multiple pre-trained models. This approach enables a more comprehensive capture of the multidimensional event semantic features present in military texts, thereby enhancing the interconnectedness of information between trigger words and events. For event element extraction, we propose a method for constructing a priori templates that combine event types with corresponding trigger words. This approach facilitates the acquisition of fine-grained input samples containing event trigger words, thus enabling the model to understand the semantic relationships between elements in greater depth. Furthermore, a fusion method for spatial mapping of textual event elements and image elements is proposed to reduce category-number overload and effectively achieve multi-modal knowledge fusion. Experimental results on the CCKS 2022 dataset show that our method achieves competitive results, with a comprehensive F1-score of 53.4%. These results validate the effectiveness of our method in extracting event elements from multi-modal data.
Keywords: event extraction, multi-modal, knowledge fusion, pre-trained models
16. A multi-modal clustering method for traditional Chinese medicine clinical data via media convergence
Authors: Jingna Si, Ziwei Tian, Dongmei Li, Lei Zhang, Lei Yao, Wenjuan Jiang, Jia Liu, Runshun Zhang, Xiaoping Zhang. CAAI Transactions on Intelligence Technology (SCIE EI), 2023, Issue 2, pp. 390-400.
Abstract: Media convergence is a media change led by technological innovation. Applying media convergence technology to the study of clustering in Chinese medicine can significantly exploit the advantages of media fusion. Obtaining consistent and complementary information among multiple modalities through media convergence can provide technical support for clustering. This article presents an approach based on Media Convergence and Graph convolution Encoder Clustering (MCGEC) for traditional Chinese medicine (TCM) clinical data. It feeds modal information and the graph structure from media information into a multi-modal graph convolution encoder to obtain a media feature representation learnt from multiple modalities. MCGEC captures latent information from various modalities by fusion and optimizes the feature representations and network architecture with the learnt clustering labels. Experiments are conducted on real-world multi-modal TCM clinical data, including images and text. MCGEC improves clustering results compared with both generic single-modal clustering methods and more advanced multi-modal clustering methods. Integrating multimedia features into clustering algorithms offers significant benefits over single-modal approaches that simply concatenate features from different modalities, and provides practical technical support for multi-modal clustering in the TCM field.
Keywords: graph convolutional encoder, media convergence, multi-modal clustering, traditional Chinese medicine
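A single graph-convolution step of the kind such an encoder builds on propagates node features over the normalized adjacency: H' = σ(D^{-1/2}(A+I)D^{-1/2} H W). A small self-contained PyTorch sketch with a toy graph and assumed sizes, not the MCGEC encoder itself:

```python
import torch
import torch.nn as nn

A = torch.tensor([[0., 1., 1.],              # toy 3-node graph (e.g., patient records)
                  [1., 0., 0.],
                  [1., 0., 0.]])
X = torch.randn(3, 8)                        # per-node multi-modal features

A_hat = A + torch.eye(3)                     # add self-loops
d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)      # D^{-1/2} from the node degrees
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

W = nn.Linear(8, 4, bias=False)              # learnable projection
H = torch.relu(A_norm @ W(X))                # one graph-convolution layer: (3, 4)
```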
17. Multi-Modal Scene Matching Location Algorithm Based on M2Det
Authors: Jiwei Fan, Xiaogang Yang, Ruitao Lu, Qingge Li, Siyu Wang. Computers, Materials & Continua (SCIE EI), 2023, Issue 10, pp. 1031-1052.
Abstract: In recent years, many visual positioning algorithms based on computer vision have been proposed and have achieved good results. However, these algorithms have a single function, cannot perceive the environment, and have poor versatility, and a certain degree of mismatching degrades positioning accuracy. Therefore, this paper proposes a location algorithm that combines a target recognition algorithm with a depth feature matching algorithm to solve the problems of unmanned aerial vehicle (UAV) environment perception and multi-modal image-matching fusion location. The algorithm is based on the single-shot object detector with a multi-level feature pyramid network (M2Det) and replaces the original visual geometry group (VGG) feature extraction network with the ResNet-101 network to improve the feature extraction capability of the network model. By introducing a depth feature matching algorithm, the algorithm shares neural network weights and realizes a combined design for UAV target recognition and multi-modal image-matching fusion positioning. When the reference image and the real-time image are mismatched, a dynamic adaptive proportional constraint and the random sample consensus algorithm (DAPC-RANSAC) are used to optimize the matching results and improve the correct matching efficiency of the target. Using a multi-modal registration dataset, the proposed algorithm was compared and analyzed to verify its superiority and feasibility. The results show that the proposed algorithm can effectively handle matching between multi-modal images (visible-infrared, infrared-satellite, visible-satellite), with good stability and robustness to changes in contrast, scale, brightness, blur, and deformation. Finally, the effectiveness and practicability of the algorithm were verified in an aerial test scene with an S1000 six-rotor UAV.
Keywords: visual positioning, multi-modal scene matching, unmanned aerial vehicle
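The match-then-filter step — find tentative feature correspondences, then reject outliers with a RANSAC-fitted geometric model — can be illustrated with standard OpenCV calls. This sketch uses plain ORB features and classical RANSAC as stand-ins for the paper's deep features and DAPC-RANSAC, on synthetic images so it runs anywhere:

```python
import cv2
import numpy as np

# Synthetic stand-ins: a textured reference and a shifted "real-time" view.
rng = np.random.default_rng(3)
img1 = (rng.random((256, 256)) * 255).astype(np.uint8)
img2 = np.roll(img1, (10, 15), axis=(0, 1))      # translated copy

orb = cv2.ORB_create(1000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC fits a homography and flags the geometrically consistent (inlier) matches.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{int(inlier_mask.sum())} / {len(matches)} matches kept")
```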
18. Robust Symmetry Prediction with Multi-Modal Feature Fusion for Partial Shapes
Authors: Junhua Xi, Kouquan Zheng, Yifan Zhong, Longjiang Li, Zhiping Cai, Jinjing Chen. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 3, pp. 3099-3111.
Abstract: In geometry processing, symmetry research benefits from the global geometric features of complete shapes, but the shape of an object captured in real-world applications is often incomplete due to limited sensor resolution, single viewpoints, and occlusion. Unlike existing works that predict symmetry from a complete shape, we propose a learning approach for symmetry prediction based on a single RGB-D image. Instead of directly predicting symmetry from incomplete shapes, our method consists of two modules, i.e., a multi-modal feature fusion module and a detection-by-reconstruction module. First, we build a channel-transformer network (CTN) to extract cross-fusion features from the RGB-D input as the multi-modal feature fusion module, which helps us aggregate features from the color and depth channels separately. Then, a self-reconstruction network based on a 3D variational auto-encoder (3D-VAE) takes the global geometric features as input, followed by a symmetry prediction network that detects the symmetry. Experiments on three public datasets (ShapeNet, YCB, and ScanNet) demonstrate that our method produces reliable and accurate results.
Keywords: symmetry prediction, multi-modal feature fusion, partial shapes
19. Multi-mode Multi-frequency GNSS-IR Combination System for Sea Level Retrieval
Authors: Wenyue CHE, Xiaolei WANG, Xiufeng HE, Jin LIU. Journal of Geodesy and Geoinformation Science (CSCD), 2023, Issue 2, pp. 32-39.
Abstract: With the development of Global Navigation Satellite Systems (GNSS), geodetic GNSS receivers have been utilized to monitor sea levels using GNSS Interferometric Reflectometry (GNSS-IR) technology. The multi-mode, multi-frequency signals of GPS, GLONASS, Galileo, and BeiDou can all be used for GNSS-IR sea level retrieval, but combining these retrievals remains problematic. To address this issue, a GNSS-IR sea level retrieval combination system has been developed, which begins by analyzing the error sources in GNSS-IR sea level retrieval and then establishes and solves the GNSS-IR retrieval equation. This paper focuses on two key points: time-window selection and equation stability. The stability of the retrieval combination equations is determined by the condition number of the coefficient matrix within the time window. The impact of ill-conditioned coefficient matrices on the retrieval results is demonstrated using an extreme case of SNR data with only ascending or only descending trajectories. After determining the time window and removing ill-conditioned equations, the multi-mode, multi-frequency GNSS-IR retrieval is performed. Results from three International GNSS Service (IGS) stations show that the combination method produces high-precision, high-resolution, and high-reliability sea level retrieval sequences.
Keywords: GNSS-IR, sea level retrieval, multi-mode multi-frequency combination, equation stability
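The stability test described — accept a time window only if the coefficient matrix of the combination equations is well conditioned, and solve by least squares otherwise skip the window — is a short check with numpy. A hedged sketch with a made-up design matrix and an assumed acceptance threshold:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 5))        # stand-in coefficient matrix for one time window
y = rng.normal(size=40)             # stacked multi-mode, multi-frequency retrievals

COND_MAX = 1e3                      # assumed acceptance threshold

if np.linalg.cond(A) < COND_MAX:
    # Well-conditioned window: solve the combination equation by least squares.
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
else:
    # Ill-conditioned (e.g., only ascending or only descending arcs): skip it.
    x = None
```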
20. DCRL-KG: Distributed Multi-Modal Knowledge Graph Retrieval Platform Based on Collaborative Representation Learning
Authors: Leilei Li, Yansheng Fu, Dongjie Zhu, Xiaofang Li, Yundong Sun, Jianrui Ding, Mingrui Wu, Ning Cao, Russell Higgs. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 6, pp. 3295-3307.
Abstract: The knowledge graph, with its abundant relational information, has been widely used as the basic data support for retrieval platforms. Image and text descriptions added to the knowledge graph enrich the node information, which accounts for the advantage of the multi-modal knowledge graph. In the field of cross-modal retrieval platforms, multi-modal knowledge graphs can help improve retrieval accuracy and efficiency because of the abundant relational information they provide. Representation learning methods are significant to the application of multi-modal knowledge graphs. This paper proposes a distributed collaborative vector retrieval platform (DCRL-KG) that uses the multi-modal knowledge graph VisualSem as its foundation to achieve efficient and high-precision multi-modal data retrieval. First, distributed technology is used to classify and store the data in the knowledge graph to improve retrieval efficiency. Second, BabelNet is used to expand the knowledge graph through multiple filtering processes and to increase the diversity of information. Finally, a variety of retrieval models are built, and their results are fused through linear combination to achieve high-precision language retrieval and image retrieval. Sentence retrieval and image retrieval experiments prove that the platform can optimize the storage structure of the multi-modal knowledge graph and performs well in multi-modal space.
Keywords: multi-modal retrieval, distributed storage, knowledge graph
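Fusing several retrieval models "through linear combination" means putting each model's relevance scores on a comparable scale and taking a weighted sum before ranking. A minimal numpy sketch; the weights and scores below are made up for illustration:

```python
import numpy as np

# Per-candidate relevance from two retrieval models (e.g., text-based and image-based).
text_scores = np.array([0.9, 0.2, 0.6, 0.4])
image_scores = np.array([0.5, 0.8, 0.7, 0.1])

def minmax(s):                       # put both models on a comparable [0, 1] scale
    return (s - s.min()) / (s.max() - s.min())

w = 0.6                              # assumed weight favouring the text model
combined = w * minmax(text_scores) + (1 - w) * minmax(image_scores)
ranking = np.argsort(-combined)      # best candidates first
print(ranking)
```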