Diagnosis and prediction of satellite faults are more difficult than those of other equipment due to the complex structure of satellites and the presence of multiple excitation sources of satellite faults. Generally, one kind of reasoning model can only diagnose and predict one kind of satellite fault. In this paper the author introduces an application of a new method using multi-modal reasoning to diagnose and predict satellite faults. The method has been used successfully in the development of a knowledge-based satellite fault diagnosis and recovery system (KSFDRS). It is shown that the method is effective.
Fusing hand-based features in multi-modal biometric recognition enhances anti-spoofing capabilities, and judiciously leveraging the correlation among multi-modal features improves both robustness and recognition performance. Nevertheless, two issues persist in multi-modal feature fusion recognition. First, efforts to improve recognition performance have not comprehensively considered the inter-modality correlations among distinct modalities. Second, during modal fusion, improper weight selection diminishes the salience of crucial modal features, thereby degrading overall recognition performance. To address these two issues, we introduce an enhanced DenseNet multi-modal recognition network founded on feature-level fusion. The information from the three modalities is fused akin to RGB channels, and the input network augments the correlation between modalities through channel correlation. Within the enhanced DenseNet network, the Efficient Channel Attention Network (ECA-Net) dynamically adjusts the weight of each channel to amplify the salience of crucial information in each modal feature, while depthwise separable convolution markedly reduces the training parameters and further enhances feature correlation. Experimental evaluations were conducted on four multi-modal databases built from six unimodal databases, including the multispectral palmprint and palm vein databases from the Chinese Academy of Sciences. The Equal Error Rate (EER) values were 0.0149%, 0.0150%, 0.0099%, and 0.0050%, respectively. In comparison to other network methods for palmprint, palm vein, and finger vein fusion recognition, this approach substantially enhances recognition performance, rendering it suitable for high-security environments with practical applicability. The experiments in this article utilized a modest sample database comprising 200 individuals; the subsequent phase involves extending the method to larger databases.
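The channel-weighting step contributed by ECA-Net can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the uniform 1-D kernel stands in for learned convolution weights, and the function name is ours.

```python
import numpy as np

def eca_weights(feature_map, k=3):
    """ECA-style attention: global-average-pool a (C, H, W) feature map,
    run a 1-D convolution of kernel size k across the channel axis, and
    squash with a sigmoid to get per-channel weights."""
    c = feature_map.shape[0]
    pooled = feature_map.mean(axis=(1, 2))      # (C,) global average pool
    padded = np.pad(pooled, k // 2, mode="edge")
    kernel = np.full(k, 1.0 / k)                # stand-in for learned weights
    conv = np.array([padded[i:i + k] @ kernel for i in range(c)])
    return 1.0 / (1.0 + np.exp(-conv))          # sigmoid -> channel weights

fmap = np.random.rand(8, 4, 4)                  # toy 8-channel feature map
w = eca_weights(fmap)
rescaled = fmap * w[:, None, None]              # amplify salient channels
```

The per-channel rescaling is what lets crucial modal features dominate the fused representation.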
Multi-modal fusion technology has gradually become a fundamental task in many fields, such as autonomous driving, smart healthcare, sentiment analysis, and human-computer interaction. It is rapidly becoming a dominant research direction due to its powerful perception and judgment capabilities. In complex scenes, multi-modal fusion technology utilizes the complementary characteristics of multiple data streams to fuse different data types and achieve more accurate predictions. However, achieving outstanding performance is challenging because of equipment performance limitations, missing information, and data noise. This paper comprehensively reviews existing methods based on multi-modal fusion techniques and provides a detailed and in-depth analysis. According to the data fusion stage, multi-modal fusion has four primary methods: early fusion, deep fusion, late fusion, and hybrid fusion. The paper surveys the three major multi-modal fusion technologies that can significantly enhance the effect of data fusion and further explores the applications of multi-modal fusion technology in various fields. Finally, it discusses the challenges and explores potential research opportunities. Multi-modal tasks still need intensive study because of data heterogeneity and quality. Preserving complementary information and eliminating redundant information between modalities is critical in multi-modal technology. Invalid data fusion methods may introduce extra noise and lead to worse results. This paper provides a comprehensive and detailed summary in response to these challenges.
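The distinction between fusion stages can be made concrete with a toy two-modality sketch (the function names and weighting are illustrative, not from the survey): early fusion combines features before any prediction, while late fusion combines per-modality decision scores.

```python
import numpy as np

def early_fusion(feat_a, feat_b):
    """Early fusion: concatenate raw modality features into one vector
    that a single downstream model then consumes."""
    return np.concatenate([feat_a, feat_b])

def late_fusion(score_a, score_b, w=0.5):
    """Late fusion: each modality is scored by its own model first;
    only the decisions are combined (here, a weighted mean)."""
    return w * score_a + (1 - w) * score_b
```

Deep and hybrid fusion sit between these extremes, mixing intermediate representations inside the network.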
Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches proposed to address multi-modal motion prediction are based on complex machine learning systems that have limited interpretability. Moreover, the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. The authors aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs. First, the evaluation metrics are comprehensively analysed, the main gaps of current benchmarks are identified, and a new holistic evaluation framework is proposed. Then, a method for the assessment of spatial and temporal robustness is introduced by simulating noise in the perception system. To enhance the interpretability of the outputs and generate more balanced results in the proposed evaluation framework, an intent prediction layer that can be attached to multi-modal motion prediction models is proposed. The effectiveness of this approach is assessed through a survey that explores different elements in the visualisation of the multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability.
Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for a conventional knowledge graph to utilise multi-media features because the introduction of a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address the issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) has been proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention of different media features of entities during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on entity representations that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features and knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
In recent years, with the continuous development of deep learning and knowledge graph reasoning methods, more and more researchers have shown great interest in improving knowledge graph reasoning methods by inferring missing facts through reasoning. By searching paths on the knowledge graph and making fact and link predictions based on these paths, deep learning-based Reinforcement Learning (RL) agents can demonstrate good performance and interpretability. Therefore, deep reinforcement learning-based knowledge reasoning methods have rapidly emerged in recent years and have become a hot research topic. However, even in a small and fixed knowledge graph reasoning action space, there are still a large number of invalid actions. Selecting an invalid action often interrupts the RL agent's walk, resulting in a significant decrease in the success rate of path mining. In order to improve the success rate of RL agents in the early stages of path search, this article proposes a knowledge reasoning method based on a Deep Transfer Reinforcement Learning path (DTRLpath). Before supervised pre-training and retraining, a pre-task of searching for effective actions in a single step is added. The RL agent is first trained on the pre-task to improve its ability to search for effective actions. Then, the trained agent is transferred to the target reasoning task for path search training, which improves its success rate in searching for target task paths. Finally, based on comparative experimental results on the FB15K-237 and NELL-995 datasets, it can be concluded that the proposed method significantly improves the success rate of path search and outperforms similar methods in most reasoning tasks.
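A related, simpler guard against invalid actions is masking: before the agent samples, restrict the action set to relations that actually correspond to outgoing edges of the current node. This sketch is our own illustration of that idea, not the paper's pre-task training; the graph encoding and function name are assumptions.

```python
import random

def sample_action(state, action_space, valid_edges, rng=random):
    """Sample only among actions (relations) that label an edge actually
    leaving `state`; returns None if the walk cannot continue."""
    valid = [a for a in action_space if (state, a) in valid_edges]
    return rng.choice(valid) if valid else None
```

With such a filter the walk is never interrupted by an invalid pick, which is the failure mode the pre-task is designed to reduce.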
Intelligent personal assistants play a pivotal role in in-vehicle systems, significantly enhancing life efficiency, driving safety, and decision-making support. In this study, the multi-modal design elements of intelligent personal assistants within the context of visual, auditory, and somatosensory interactions with drivers were discussed, and their impact on the driver's psychological state through various modes such as visual imagery, voice interaction, and gesture interaction was explored. The study also introduced innovative designs for in-vehicle intelligent personal assistants, incorporating design principles such as driver-centricity, prioritizing passenger safety, and utilizing timely feedback as a criterion. Additionally, the study employed design methods like driver behavior research and driving situation analysis to enhance the emotional connection between drivers and their vehicles, ultimately improving driver satisfaction and trust.
Recently, there have been significant advancements in the study of semantic communication in single-modal scenarios. However, the ability to process information in multi-modal environments remains limited. Inspired by the research and applications of natural language processing across different modalities, our goal is to accurately extract frame-level semantic information from videos and ultimately transmit high-quality videos. Specifically, we propose a deep learning-based Multi-Modal Mutual Enhancement Video Semantic Communication system, called M3E-VSC. Built upon a Vector-Quantized Generative Adversarial Network (VQGAN), our system aims to leverage mutual enhancement among different modalities by using text as the main carrier of transmission. With it, semantic information can be extracted from the key-frame images and audio of the video, and differential coding is performed to ensure that the extracted text conveys accurate semantic information with fewer bits, thus improving the capacity of the system. Furthermore, a multi-frame semantic detection module is designed to facilitate semantic transitions during video generation. Simulation results demonstrate that our proposed model maintains high robustness in complex noise environments, particularly in low signal-to-noise-ratio conditions, significantly improving the accuracy and speed of semantic transmission in video communication by approximately 50 percent.
The growing prevalence of knowledge reasoning using knowledge graphs (KGs) has substantially improved the accuracy and efficiency of intelligent medical diagnosis. However, current models primarily integrate electronic medical records (EMRs) and KGs into the knowledge reasoning process, ignoring the differing significance of various types of knowledge in EMRs and the diverse data types present in the text. To better integrate EMR text information, we propose a novel intelligent diagnostic model named the Graph ATtention network incorporating Text representation in knowledge reasoning (GATiT), which comprises text representation, subgraph construction, knowledge reasoning, and diagnostic classification. In the text representation process, GATiT uses a pre-trained model to obtain text representations of the EMRs and additionally enhances the embeddings by including chief complaint information and numerical information in the input. In the subgraph construction process, GATiT constructs text subgraphs and disease subgraphs from the KG, utilizing the EMR text and the disease to be diagnosed. To differentiate the varying importance of nodes within the subgraphs, features such as node categories, relevance scores, and other relevant factors are introduced into the text subgraph. The message-passing strategy and attention weight calculation of the graph attention network are adjusted to learn these features in the knowledge reasoning process. Finally, in the diagnostic classification process, an interactive attention-based fusion method integrates the results of knowledge reasoning with the text representations to produce the final diagnosis results. Experimental results on multi-label and single-label EMR datasets demonstrate the model's superiority over several state-of-the-art methods.
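One way to fold per-node relevance scores into graph-attention weights is an additive bias on the attention scores before the softmax. The NumPy sketch below is a simplification under our own assumptions (dot-product scoring, additive bias); GATiT's actual attention is learned.

```python
import numpy as np

def attention_weights(query, neighbors, relevance):
    """Score each neighbor embedding against the query node, add a
    per-neighbor relevance bias, and normalize with a stable softmax."""
    scores = neighbors @ query + relevance
    e = np.exp(scores - scores.max())
    return e / e.sum()

q = np.array([1.0, 0.0])
nbrs = np.array([[1.0, 0.0], [1.0, 0.0]])   # two identical neighbors
alpha = attention_weights(q, nbrs, np.array([0.0, 1.0]))
```

With identical neighbor embeddings, the relevance bias alone decides which node the message-passing step emphasizes.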
Background: Clinical reasoning is an essential skill for nursing students since it is required to solve difficulties that arise in complex clinical settings. However, teaching and learning clinical reasoning skills is difficult because of its complexity. This study therefore aimed at exploring the challenges experienced by nurse educators in promoting the acquisition of clinical reasoning skills by undergraduate nursing students. Methods: A qualitative exploratory research design was used in this study. The participants were purposively sampled and recruited into the study. Data were collected using semi-structured interview guides. The thematic analysis method was used to analyze the collected data. The principles of beneficence, respect for human dignity, and justice were observed. Results: The findings showed that the clinical learning environment lacked material and human resources, and the students had no interest in learning the skill. There was also a knowledge gap between nurse educators and clinical nurses, a lack of role models, and limited time for clinical exposure. Conclusion: The study revealed that nurse educators encounter various challenges in promoting the acquisition of clinical reasoning skills among undergraduate nursing students. Training institutions and hospitals should periodically revise the curriculum and provide sufficient resources to facilitate effective teaching and learning of clinical reasoning. Nurse educators must also update their knowledge and skills through continuous professional development if they are to transfer the skill effectively.
Background: Clinical reasoning is a critical cognitive skill that enables undergraduate nursing students to make clinically sound decisions. A lapse in clinical reasoning can result in unintended harm to patients. The aim of the study was to assess and compare the levels of clinical reasoning skills between third-year and fourth-year undergraduate nursing students. Methods: The study utilized a descriptive comparative research design, based on the positivism paradigm. 410 undergraduate nursing students were systematically sampled and recruited into the study. The researchers used the Self-Assessment of Clinical Reflection and Reasoning questionnaire to collect data on clinical reasoning skills from third- and fourth-year nursing students while adhering to ethical principles of human dignity. Descriptive statistics were computed to analyse the level of clinical reasoning, and an independent-samples t-test was performed to compare the clinical reasoning skills of the students, with a significance level of 0.05. Results: The results of the study revealed that the mean clinical reasoning scores of the undergraduate nursing students were: knowledge/theory application (M = 3.84; SD = 1.04); decision-making based on experience and evidence (M = 4.09; SD = 1.01); dealing with uncertainty (M = 3.93; SD = 0.87); reflection and reasoning (M = 3.77; SD = 3.88). The differences in clinical reasoning skills between third- and fourth-year undergraduate nursing students were not significant on the independent-samples t-tests (t = −1.08, p = 0.28; t = −0.29, p = 0.73; t = 1.19, p = 0.24; t = −0.57, p = 0.57).
Since the p-values were >0.05, the null hypothesis (H0), "there is no significant difference in clinical reasoning between third-year and fourth-year undergraduate nursing students", was accepted. Conclusion: This study has shown that the level of clinical reasoning skills of the undergraduate nursing students was moderate to low. This suggests that the teaching methods have not been effective in improving the students' clinical reasoning skills. Therefore, training institutions should revise their curricula by incorporating new teaching methods like simulation to enhance students' clinical reasoning skills. In conclusion, evaluating clinical reasoning skills is crucial for addressing healthcare issues, validating teaching methods, and fostering continuous improvement in nursing education.
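For reference, the independent-samples t statistic used in such comparisons can be computed from summary statistics alone. A pure-Python sketch with pooled variance (the function is illustrative, not the study's analysis code):

```python
import math

def independent_t(mean1, sd1, n1, mean2, sd2, n2):
    """Student's independent-samples t statistic with pooled variance:
    t = (m1 - m2) / sqrt(sp^2 * (1/n1 + 1/n2))."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
```

The resulting t is then compared against the critical value for n1 + n2 − 2 degrees of freedom at the chosen significance level.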
The menstrual cycle has been a topic of interest in relation to behavior and cognition for many years, with historical beliefs associating it with cognitive impairment. However, recent research has challenged these beliefs and suggested potential positive effects of the menstrual cycle on cognitive performance. Despite these emerging findings, there is still a lack of consensus regarding the impact of the menstrual cycle on cognition, particularly in domains such as spatial reasoning, visual memory, and numerical memory. Hence, this study aimed to explore the relationship between the menstrual cycle and cognitive performance in these specific domains. Previous studies have reported mixed findings, with some suggesting no significant association and others indicating potential differences across the menstrual cycle. To contribute to this body of knowledge, we explored the research question of whether the menstrual cycle has a significant effect on cognition, particularly in the domains of spatial reasoning and visual and numerical memory, in a regionally diverse sample of menstruating females. A total of 30 menstruating females from mixed geographical backgrounds participated in the study, and a repeated-measures design was used to assess their cognitive performance in two phases of the menstrual cycle: follicular and luteal. The results of the study revealed that while spatial reasoning was not significantly related to the menstrual cycle (p = 0.256), both visual and numerical memory had significant positive associations (p < 0.001) with the luteal phase. However, since the effect sizes were very small, the importance of this relationship might easily be overestimated.
Future studies could thus entail designs with larger sample sizes, include neurobiological measures of menstrual stages, and consequently inform competent interventions and support systems.
Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events, we propose an Independent Recurrent Temporal Graph Convolution Networks (IndRT-GCNets) framework to efficiently and accurately capture event attribute information. The framework models the knowledge graph sequences to learn the evolutionary representations of entities and relations within each period. Firstly, by utilizing the temporal graph convolution module in the evolutionary representation unit, the framework captures the structural dependency relationships within the knowledge graph in each period. Meanwhile, to achieve better event representation and establish effective correlations, an independent recurrent neural network is employed to implement auto-regressive modeling. Furthermore, static attributes of entities in the entity-relation events are constrained and merged using a static graph constraint to obtain optimal entity representations. Finally, the evolution of entity and relation representations is utilized to predict events in the next step. On multiple real-world datasets, such as Freebase13 (FB13), Freebase15K (FB15K), WordNet11 (WN11), WordNet18 (WN18), FB15K-237, WN18RR, YAGO3-10, and NELL-995, the results on multiple evaluation metrics show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks, which validates its effectiveness and robustness.
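The "independent" recurrence means each hidden unit carries its own scalar recurrent weight rather than a full recurrent matrix, so neurons recur independently of one another. A single-step NumPy sketch under that assumption (the ReLU activation and toy dimensions are our choices for illustration):

```python
import numpy as np

def indrnn_step(x_t, h_prev, W, u):
    """One IndRNN step: W mixes the input, while u applies an elementwise
    (per-neuron) recurrent weight to the previous hidden state."""
    return np.maximum(0.0, W @ x_t + u * h_prev)   # ReLU activation

W = np.ones((3, 2))                                 # toy input weights
h = indrnn_step(np.array([1.0, 1.0]), np.zeros(3), W, np.full(3, 0.5))
```

Because the recurrent weight is elementwise, gradients through time factor per neuron, which is what makes long auto-regressive sequences tractable for this cell.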
Unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain. However, a multi-generator mechanism is employed among the available advanced approaches to model different domain mappings, which results in inefficient training of neural networks and pattern collapse, leading to inefficient generation of image diversity. To address this issue, this paper introduces a multi-modal unsupervised image translation framework that uses a single generator to perform multi-modal image translation. Specifically, firstly, a domain code is introduced to explicitly control the different generation tasks. Secondly, this paper brings in the squeeze-and-excitation (SE) mechanism and a feature attention (FA) module. Finally, the model integrates multiple optimization objectives to ensure efficient multi-modal translation. This paper performs qualitative and quantitative experiments on multiple non-paired benchmark image translation datasets, demonstrating the benefits of the proposed method over existing technologies. Overall, experimental results have shown that the proposed method is versatile and scalable.
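The domain-code mechanism that lets one generator serve all mappings can be sketched as simple conditioning: a one-hot code for the target domain is appended to the generator's input features. The encoding below is an assumption for illustration; the paper's conditioning may be injected differently.

```python
import numpy as np

def with_domain_code(image_feat, domain_idx, n_domains):
    """Append a one-hot target-domain code so a single generator can be
    steered between generation tasks, instead of one generator per mapping."""
    code = np.zeros(n_domains)
    code[domain_idx] = 1.0
    return np.concatenate([image_feat, code])

conditioned = with_domain_code(np.ones(4), 2, 3)   # 4 features + 3-way code
```

Swapping the code at inference time retargets the same weights to a different domain mapping.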
In this paper, we examine the teaching and learning situation of deaf and hard-of-hearing students in the Linear Algebra course of the Computer Science and Technology major at the Nanjing Normal University of Special Education. Based on the cognitive style of deaf and hard-of-hearing students, we apply example induction, exhaustive induction, and mathematical induction to the teaching of Linear Algebra, utilizing specific course content. The aim is to design comprehensive teaching that caters to the cognitive style characteristics of deaf and hard-of-hearing students; strengthens their mathematical thinking styles such as quantitative thinking, algorithmic thinking, symbolic thinking, visual thinking, logical thinking, and creative thinking; and enhances the effectiveness of classroom teaching and learning outcomes in Linear Algebra for deaf and hard-of-hearing students.
The sixth generation (6G) of mobile communication systems is witnessing a new paradigm shift, i.e., the integrated sensing-communication system. A comprehensive dataset is a prerequisite for 6G integrated sensing-communication research. This paper develops a novel simulation dataset, named M3SC, for mixed multi-modal (MMM) sensing-communication integration, and the generation framework of the M3SC dataset is further given. To obtain multi-modal sensory data in physical space and communication data in electromagnetic space, we utilize AirSim and WaveFarer to collect multi-modal sensory data and exploit Wireless InSite to collect communication data. Furthermore, the in-depth integration and precise alignment of AirSim, WaveFarer, and Wireless InSite are achieved. The M3SC dataset covers various weather conditions, multiple frequency bands, and different times of the day. Currently, the M3SC dataset contains 1,500 snapshots, including 80 RGB images, 160 depth maps, 80 LiDAR point clouds, 256 sets of mmWave waveforms with 8 radar point clouds, and 72 channel impulse response (CIR) matrices per snapshot, thus totaling 120,000 RGB images, 240,000 depth maps, 120,000 LiDAR point clouds, 384,000 sets of mmWave waveforms with 12,000 radar point clouds, and 108,000 CIR matrices. The data processing results present the multi-modal sensory information and communication channel statistical properties. Finally, the MMM sensing-communication applications that can be supported by the M3SC dataset are discussed.
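The per-snapshot counts and dataset-level totals quoted above are mutually consistent (each total is the per-snapshot count times 1,500), which a quick check confirms:

```python
# Per-snapshot modality counts reported for M3SC, scaled by 1,500 snapshots.
per_snapshot = {"rgb": 80, "depth": 160, "lidar": 80,
                "mmwave_sets": 256, "radar_pc": 8, "cir": 72}
totals = {name: count * 1500 for name, count in per_snapshot.items()}
```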
Deep learning based methods have been successfully applied to semantic segmentation of optical remote sensing images. However, as more and more remote sensing data becomes available, it is a new challenge to comprehensively utilize multi-modal remote sensing data to break through the performance bottleneck of single-modal interpretation. In addition, semantic segmentation and height estimation in remote sensing data are two strongly correlated tasks, but existing methods usually study the individual tasks separately, which leads to high computational resource overhead. To this end, we propose a Multi-Task learning framework for Multi-Modal remote sensing images (MM_MT). Specifically, we design a Cross-Modal Feature Fusion (CMFF) method, which aggregates complementary information from different modalities to improve the accuracy of semantic segmentation and height estimation. Besides, a dual-stream multi-task learning method is introduced for Joint Semantic Segmentation and Height Estimation (JSSHE), extracting common features in a shared network to save time and resources, and then learning task-specific features in two task branches. Experimental results on the public multi-modal remote sensing image dataset Potsdam show that, compared to training the two tasks independently, multi-task learning saves 20% of training time and achieves competitive performance, with an mIoU of 83.02% for semantic segmentation and an accuracy of 95.26% for height estimation.
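The shared-network-plus-task-branches layout can be sketched in a few lines of NumPy. The weights, dimensions, and tanh nonlinearity are placeholders; MM_MT's actual backbone is a deep segmentation network, so this only shows why the shared computation is paid for once.

```python
import numpy as np

def multitask_forward(x, W_shared, W_seg, W_height):
    """One shared extractor feeds two task branches: segmentation logits
    and a height estimate, so common features are computed once."""
    shared = np.tanh(W_shared @ x)          # common features (one pass)
    return W_seg @ shared, W_height @ shared

rng = np.random.default_rng(0)
seg, height = multitask_forward(rng.standard_normal(8),
                                rng.standard_normal((16, 8)),   # shared layer
                                rng.standard_normal((6, 16)),   # 6 classes
                                rng.standard_normal((1, 16)))   # height head
```

The training-time saving reported above comes from exactly this sharing: the expensive encoder pass is amortized across both objectives.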
PowerShell has been widely deployed in fileless malware and advanced persistent threat (APT) attacks due to its high stealthiness and living-off-the-land technique. However, existing works mainly focus on deobfuscation and malicious detection, lacking malicious PowerShell family classification and behavior analysis. Moreover, state-of-the-art methods fail to capture fine-grained features and semantic relationships, resulting in low robustness and accuracy. To this end, we propose PowerDetector, a novel malicious PowerShell script detector based on multi-modal semantic fusion and deep learning. Specifically, we design four feature extraction methods to extract key features from the character, token, abstract syntax tree (AST), and semantic knowledge graph views. Then, we design four embeddings (i.e., Char2Vec, Token2Vec, AST2Vec, and Rela2Vec) and construct a multi-modal fusion algorithm to concatenate the feature vectors from the different views. Finally, we propose a combined model based on a transformer and CNN-BiLSTM to implement PowerShell family detection. Our experiments with five types of PowerShell attacks show that PowerDetector can accurately detect various obfuscated and stealthy PowerShell scripts, with a 0.9402 precision, a 0.9358 recall, and a 0.9374 F1-score. Furthermore, through single-modal and multi-modal comparison experiments, we demonstrate that PowerDetector's multi-modal embedding and deep learning model can achieve better accuracy and even identify more unknown attacks.
With the growing discovery of exposed vulnerabilities in Industrial Control Components (ICCs), identification of the exploitable ones is urgent for Industrial Control System (ICS) administrators to proactively forecast potential threats. However, it is not a trivial task due to the complexity of the multi-source heterogeneous data and the lack of automatic analysis methods. To address these challenges, we propose an exploitability reasoning method based on an ICC-Vulnerability Knowledge Graph (KG), in which relation paths contain abundant potential evidence to support the reasoning. The reasoning task in this work refers to determining whether a specific relation is valid between an attacker entity and a possibly exploitable vulnerability entity with the help of a collection of critical paths. The proposed method consists of three primary building blocks: KG construction, relation path representation, and query relation reasoning. A security-oriented ontology that incorporates exploit modeling provides a guideline for the integration of the scattered knowledge while constructing the KG. We emphasize the role of attention-based aggregation in representation learning and the ultimate reasoning. To acquire a high-quality representation, the entity and relation embeddings take advantage of their local structure and related semantics. Critical paths are assigned corresponding attentive weights and are then aggregated to determine the validity of the query relation. In particular, similarity calculation is introduced into the critical path selection algorithm, which improves search and reasoning performance while avoiding redundant paths between the given pairs of entities. Experimental results show that the proposed method outperforms the state-of-the-art ones in terms of embedding quality and query relation reasoning accuracy.
Abstract: Diagnosis and prediction of satellite faults are more difficult than those of other equipment, due to the complex structure of satellites and the presence of multiple excitation sources of satellite faults. Generally, one kind of reasoning model can only diagnose and predict one kind of satellite fault. In this paper the authors introduce an application of a new method that uses multi-modal reasoning to diagnose and predict satellite faults. The method has been used successfully in the development of a knowledge-based satellite fault diagnosis and recovery system (KSFDRS). It is shown that the method is effective.
Funding: funded by the National Natural Science Foundation of China (61991413), the China Postdoctoral Science Foundation (2019M651142), the Natural Science Foundation of Liaoning Province (2021-KF-12-07), and the Natural Science Foundation of Liaoning Province (2023-MS-322).
Abstract: Fusing hand-based features in multi-modal biometric recognition enhances anti-spoofing capability, and judiciously leveraging the correlation among multi-modal features improves both the robustness and the recognition performance of the system. Nevertheless, two issues persist in multi-modal feature fusion recognition. First, efforts to enhance recognition performance in fusion recognition have not comprehensively considered the inter-modality correlations among distinct modalities. Second, during modal fusion, improper weight selection diminishes the salience of crucial modal features, thereby degrading the overall recognition performance. To address these two issues, we introduce an enhanced DenseNet multi-modal recognition network founded on feature-level fusion. The information from the three modalities is fused akin to RGB channels, and the input network augments the correlation between modes through channel correlation. Within the enhanced DenseNet network, the Efficient Channel Attention Network (ECA-Net) dynamically adjusts the weight of each channel to amplify the salience of crucial information in each modal feature. Depthwise separable convolution markedly reduces the training parameters and further enhances the feature correlation. Experimental evaluations were conducted on four multi-modal databases, composed from six unimodal databases, including the multispectral palmprint and palm vein databases from the Chinese Academy of Sciences. The Equal Error Rate (EER) values were 0.0149%, 0.0150%, 0.0099%, and 0.0050%, respectively. In comparison to other network methods for palmprint, palm vein, and finger vein fusion recognition, this approach substantially enhances recognition performance, rendering it suitable for high-security environments with practical applicability. The experiments in this article utilized a modest sample database comprising 200 individuals. The subsequent phase involves preparing for the extension of the method to larger databases.
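The abstract describes ECA-Net only by its role of re-weighting channels. As a hypothetical illustration of that idea (kernel size, uniform kernel, and shapes are assumptions, not the paper's configuration), the following NumPy sketch pools each channel to a scalar, mixes neighbouring channel descriptors with a small 1-D convolution, and rescales the feature map with sigmoid gates:

```python
import numpy as np

def eca_channel_attention(x, k=3):
    """ECA-style channel re-weighting for a (C, H, W) feature map.

    Hypothetical sketch: global average pooling gives one descriptor per
    channel; a 1-D convolution of kernel size k mixes neighbouring channel
    descriptors; a sigmoid turns them into per-channel weights.
    """
    c = x.shape[0]
    desc = x.mean(axis=(1, 2))                     # (C,) channel descriptors
    pad = k // 2
    padded = np.pad(desc, pad, mode="edge")
    mixed = np.array([padded[i:i + k].mean() for i in range(c)])  # uniform 1-D conv
    weights = 1.0 / (1.0 + np.exp(-mixed))         # sigmoid gates in (0, 1)
    return x * weights[:, None, None]              # rescale each channel

feat = np.random.rand(8, 4, 4)
out = eca_channel_attention(feat)
print(out.shape)  # (8, 4, 4)
```

The learned 1-D convolution of the real ECA-Net is replaced here by a fixed uniform kernel purely to keep the sketch self-contained.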
Funding: supported by the Natural Science Foundation of Liaoning Province (Grant No. 2023-MSBA-070) and the National Natural Science Foundation of China (Grant No. 62302086).
Abstract: Multi-modal fusion technology has gradually become a fundamental task in many fields, such as autonomous driving, smart healthcare, sentiment analysis, and human-computer interaction. It is rapidly becoming a dominant research direction due to its powerful perception and judgment capabilities. In complex scenes, multi-modal fusion technology exploits the complementary characteristics of multiple data streams to fuse different data types and achieve more accurate predictions. However, achieving outstanding performance is challenging because of equipment performance limitations, missing information, and data noise. This paper comprehensively reviews existing methods based on multi-modal fusion techniques and provides a detailed and in-depth analysis. According to the data fusion stage, multi-modal fusion has four primary methods: early fusion, deep fusion, late fusion, and hybrid fusion. The paper surveys the three major multi-modal fusion technologies that can significantly enhance the effect of data fusion and further explores the applications of multi-modal fusion technology in various fields. Finally, it discusses the challenges and explores potential research opportunities. Multi-modal tasks still need intensive study because of data heterogeneity and quality. Preserving complementary information and eliminating redundant information between modalities is critical in multi-modal technology. Invalid data fusion methods may introduce extra noise and lead to worse results. This paper provides a comprehensive and detailed summary in response to these challenges.
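The fusion stages named above can be contrasted with a toy sketch. Everything below (feature sizes, linear scorers, equal-weight averaging) is an illustrative assumption, not a method from the survey: early fusion concatenates modality features before a single predictor, while late fusion combines per-modality decisions afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)
img_feat = rng.random(16)    # hypothetical image feature vector
audio_feat = rng.random(8)   # hypothetical audio feature vector

# Early fusion: concatenate modality features, then apply one predictor.
early_in = np.concatenate([img_feat, audio_feat])        # shape (24,)
w_early = rng.random(early_in.shape[0])
early_score = float(early_in @ w_early)

# Late fusion: score each modality separately, then combine the decisions.
w_img, w_audio = rng.random(16), rng.random(8)
late_score = 0.5 * float(img_feat @ w_img) + 0.5 * float(audio_feat @ w_audio)

print(early_in.shape, round(early_score, 3), round(late_score, 3))
```

Deep and hybrid fusion sit between these extremes, merging intermediate representations rather than raw features or final decisions.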
Funding: European Commission, Joint Research Centre, Grant/Award Number: HUMAINT; Ministerio de Ciencia e Innovación, Grant/Award Number: PID2020-114924RB-I00; Comunidad de Madrid, Grant/Award Number: S2018/EMT-4362 SEGVAUTO 4.0-CM.
Abstract: Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches proposed to address multi-modal motion prediction are based on complex machine learning systems that have limited interpretability. Moreover, the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. The authors aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs. First, the evaluation metrics are comprehensively analysed, the main gaps of current benchmarks are identified, and a new holistic evaluation framework is proposed. Then, a method for the assessment of spatial and temporal robustness is introduced by simulating noise in the perception system. To enhance the interpretability of the outputs and generate more balanced results in the proposed evaluation framework, an intent prediction layer that can be attached to multi-modal motion prediction models is proposed. The effectiveness of this approach is assessed through a survey that explores different elements in the visualisation of the multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability.
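Benchmarks in this area commonly score a set of K predicted trajectories against the ground truth with a minimum-over-modes error. As one illustrative metric (the abstract does not list which metrics the authors analyse), here is a minimal minADE sketch:

```python
import numpy as np

def min_ade(predictions, ground_truth):
    """minADE: for K predicted trajectories of shape (K, T, 2), return the
    smallest average point-wise Euclidean distance to the (T, 2) ground truth."""
    dists = np.linalg.norm(predictions - ground_truth[None], axis=-1)  # (K, T)
    return float(dists.mean(axis=1).min())

gt = np.zeros((5, 2))                      # ground truth: stay at the origin
modes = np.stack([
    np.zeros((5, 2)),                      # mode 1: perfect prediction
    np.ones((5, 2)),                       # mode 2: constant (1, 1) offset
])
print(min_ade(modes, gt))  # 0.0 -- the best mode matches exactly
```

A min-over-modes metric like this rewards having one good trajectory among many, which is exactly why it says nothing about the diversity or admissibility gaps the abstract criticises.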
Funding: National College Students' Training Programs of Innovation and Entrepreneurship, Grant/Award Number: S202210022060; the CACMS Innovation Fund, Grant/Award Number: CI2021A00512; the National Natural Science Foundation of China, Grant/Award Number: 62206021.
Abstract: Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for the conventional knowledge graph to utilise multi-media features because the introduction of a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address the issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) has been proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention of different media features of entities during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on the entity representations that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features and knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
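The abstract does not give MCRJI's attention formulation, so as a generic sketch, here is one head of scaled dot-product self-attention over a stack of media feature vectors (the shapes and the choice of using the inputs directly as queries, keys, and values are assumptions):

```python
import numpy as np

def self_attention(x):
    """One attention head over n feature vectors of dimension d, shape (n, d).

    Hypothetical sketch: queries, keys, and values are the inputs themselves;
    scores are scaled dot products, normalised row-wise with softmax.
    """
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                    # (n, n) similarities
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # softmax rows
    return attn @ x, attn                            # blended features, weights

media = np.random.rand(3, 4)      # e.g. text, image, audio feature vectors
blended, attn = self_attention(media)
print(attn.sum(axis=1))  # each row of attention weights sums to 1
```

A multi-headed version runs several such heads with learned projections and concatenates their outputs.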
Funding: supported by the Key Laboratory of Information System Requirement (No. LHZZ202202), the Natural Science Foundation of Xinjiang Uyghur Autonomous Region (2023D01C55), and the Scientific Research Program of the Higher Education Institutions of Xinjiang (XJEDU2023P127).
Abstract: In recent years, with the continuous development of deep learning and knowledge graph reasoning methods, more and more researchers have shown great interest in improving knowledge graph reasoning methods by inferring missing facts through reasoning. By searching paths on the knowledge graph and making fact and link predictions based on these paths, deep learning-based Reinforcement Learning (RL) agents can demonstrate good performance and interpretability. Therefore, deep reinforcement learning-based knowledge reasoning methods have rapidly emerged in recent years and have become a hot research topic. However, even in a small and fixed knowledge graph reasoning action space, there are still a large number of invalid actions. Selecting an invalid action often interrupts the RL agent's walk, resulting in a significant decrease in the success rate of path mining. In order to improve the success rate of RL agents in the early stages of path search, this article proposes a knowledge reasoning method based on Deep Transfer Reinforcement Learning paths (DTRLpath). Before supervised pre-training and retraining, a pre-task of searching for effective actions in a single step is added. The RL agent is first trained on the pre-task to improve its ability to search for effective actions. Then, the trained agent is transferred to the target reasoning task for path search training, which improves its success rate in searching for target task paths. Finally, based on the comparative experimental results on the FB15K-237 and NELL-995 datasets, it can be concluded that the proposed method significantly improves the success rate of path search and outperforms similar methods in most reasoning tasks.
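The invalid-action problem described above can be illustrated with a simple masking step: before sampling, candidate actions that are invalid in the current state are filtered out, so a walk is never interrupted by choosing one. The action names and validity flags below are illustrative, not from the paper:

```python
import random

def sample_valid_action(actions, valid, rng=random):
    """Sample only from actions marked valid; return None if none are."""
    candidates = [a for a, ok in zip(actions, valid) if ok]
    return rng.choice(candidates) if candidates else None

# Hypothetical action space: outgoing KG edges from the current entity.
actions = ["born_in", "works_for", "located_in", "spouse_of"]
valid = [True, False, True, False]   # e.g. edges that actually exist here

choice = sample_valid_action(actions, valid)
print(choice in {"born_in", "located_in"})  # True
```

DTRLpath's pre-task goes further than a hard mask: it trains the agent itself to prefer effective single-step actions before transfer to the target reasoning task.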
Abstract: Intelligent personal assistants play a pivotal role in in-vehicle systems, significantly enhancing life efficiency, driving safety, and decision-making support. In this study, the multi-modal design elements of intelligent personal assistants within the context of visual, auditory, and somatosensory interactions with drivers are discussed. Their impact on the driver's psychological state through various modes such as visual imagery, voice interaction, and gesture interaction is explored. The study also introduces innovative designs for in-vehicle intelligent personal assistants, incorporating design principles such as driver-centricity, prioritizing passenger safety, and using timely feedback as a criterion. Additionally, the study employs design methods like driver behavior research and driving situation analysis to enhance the emotional connection between drivers and their vehicles, ultimately improving driver satisfaction and trust.
Funding: supported by the National Key Research and Development Project under Grant 2020YFB1807602, the Key Program of the Marine Economy Development Special Foundation of the Department of Natural Resources of Guangdong Province (GDNRC[2023]24), and the National Natural Science Foundation of China under Grant 62271267.
Abstract: Recently, there have been significant advancements in the study of semantic communication in single-modal scenarios. However, the ability to process information in multi-modal environments remains limited. Inspired by the research and applications of natural language processing across different modalities, our goal is to accurately extract frame-level semantic information from videos and ultimately transmit high-quality videos. Specifically, we propose a deep learning-based Multi-Modal Mutual Enhancement Video Semantic Communication system, called M3E-VSC. Built upon a Vector-Quantized Generative Adversarial Network (VQGAN), our system aims to leverage mutual enhancement among different modalities by using text as the main carrier of transmission. With it, the semantic information can be extracted from key-frame images and audio of the video, and differential values are computed to ensure that the extracted text conveys accurate semantic information with fewer bits, thus improving the capacity of the system. Furthermore, a multi-frame semantic detection module is designed to facilitate semantic transitions during video generation. Simulation results demonstrate that our proposed model maintains high robustness in complex noise environments, particularly in low signal-to-noise ratio conditions, significantly improving the accuracy and speed of semantic transmission in video communication by approximately 50 percent.
Funding: supported in part by the Science and Technology Innovation 2030 "New Generation of Artificial Intelligence" Major Project (No. 2021ZD0111000) and the Henan Provincial Science and Technology Research Project (No. 232102211039).
Abstract: The growing prevalence of knowledge reasoning using knowledge graphs (KGs) has substantially improved the accuracy and efficiency of intelligent medical diagnosis. However, current models primarily integrate electronic medical records (EMRs) and KGs into the knowledge reasoning process, ignoring the differing significance of various types of knowledge in EMRs and the diverse data types present in the text. To better integrate EMR text information, we propose a novel intelligent diagnostic model named the Graph ATtention network incorporating Text representation in knowledge reasoning (GATiT), which comprises text representation, subgraph construction, knowledge reasoning, and diagnostic classification. In the text representation process, GATiT uses a pre-trained model to obtain text representations of the EMRs and additionally enhances the embeddings by including chief complaint information and numerical information in the input. In the subgraph construction process, GATiT constructs text subgraphs and disease subgraphs from the KG, utilizing the EMR text and the disease to be diagnosed. To differentiate the varying importance of nodes within the subgraphs, features such as node categories, relevance scores, and other relevant factors are introduced into the text subgraph. The message-passing strategy and attention weight calculation of the graph attention network are adjusted to learn these features in the knowledge reasoning process. Finally, in the diagnostic classification process, an interactive attention-based fusion method integrates the results of knowledge reasoning with the text representations to produce the final diagnosis results. Experimental results on multi-label and single-label EMR datasets demonstrate the model's superiority over several state-of-the-art methods.
Abstract: Background: Clinical reasoning is an essential skill for nursing students, since it is required to solve difficulties that arise in complex clinical settings. However, teaching and learning clinical reasoning skills are difficult because of their complexity. This study therefore aimed at exploring the challenges experienced by nurse educators in promoting the acquisition of clinical reasoning skills by undergraduate nursing students. Methods: A qualitative exploratory research design was used in this study. The participants were purposively sampled and recruited into the study. Data were collected using semi-structured interview guides. The thematic analysis method was used to analyze the collected data. The principles of beneficence, respect of human dignity, and justice were observed. Results: The findings have shown that the clinical learning environment lacked material and human resources. The students had no interest in learning the skill. There was also a knowledge gap between nurse educators and clinical nurses. Lack of role models and limited time of exposure were also issues. Conclusion: The study revealed that nurse educators encounter various challenges in promoting the acquisition of clinical reasoning skills among undergraduate nursing students. Training institutions and hospitals should periodically revise the curriculum and provide sufficient resources to facilitate effective teaching and learning of clinical reasoning. Nurse educators must also update their knowledge and skills through continuous professional development if they are to transfer the skill effectively.
Abstract: Background: Clinical reasoning is a critical cognitive skill that enables undergraduate nursing students to make clinically sound decisions. A lapse in clinical reasoning can result in unintended harm to patients. The aim of the study was to assess and compare the levels of clinical reasoning skills between third-year and fourth-year undergraduate nursing students. Methods: The study utilized a descriptive comparative research design, based on the positivism paradigm. 410 undergraduate nursing students were systematically sampled and recruited into the study. The researchers used the Self-Assessment of Clinical Reflection and Reasoning questionnaire to collect data on clinical reasoning skills from third- and fourth-year nursing students while adhering to ethical principles of human dignity. Descriptive statistics were computed to analyse the level of clinical reasoning, and an independent-samples t-test was performed to compare the clinical reasoning skills of the students. A significance level of 0.05 was used. Results: The mean clinical reasoning scores of the undergraduate nursing students were: knowledge/theory application (M = 3.84; SD = 1.04); decision-making based on experience and evidence (M = 4.09; SD = 1.01); dealing with uncertainty (M = 3.93; SD = 0.87); reflection and reasoning (M = 3.77; SD = 3.88). Independent-samples t-tests showed that the differences in clinical reasoning skills between third- and fourth-year undergraduate nursing students were not significant (t = -1.08, p = 0.28; t = -0.29, p = 0.73; t = 1.19, p = 0.24; t = -0.57, p = 0.57). Since the p-values were >0.05, the null hypothesis (H0), "there is no significant difference in clinical reasoning between third-year and fourth-year undergraduate nursing students", was accepted. Conclusion: This study has shown that the level of clinical reasoning skills of the undergraduate nursing students was moderate to low. This suggests that the teaching methods have not been effective in improving the students' clinical reasoning skills. Therefore, training institutions should revise their curricula by incorporating new teaching methods like simulation to enhance students' clinical reasoning skills. In conclusion, evaluating clinical reasoning skills is crucial for addressing healthcare issues, validating teaching methods, and fostering continuous improvement in nursing education.
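The year-group comparison above rests on the independent-samples t statistic. A minimal pure-Python sketch of the pooled-variance form follows; the sample numbers are made up for illustration and are not the study's data:

```python
import math

def pooled_t(sample_a, sample_b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # unbiased variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

third_year = [3.8, 4.0, 3.6, 3.9]    # illustrative scores only
fourth_year = [3.9, 4.1, 3.7, 4.0]
print(round(pooled_t(third_year, fourth_year), 2))  # -0.83
```

The statistic is then compared against the t distribution with na + nb - 2 degrees of freedom to obtain the p-values reported in the results.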
Abstract: The menstrual cycle has been a topic of interest in relation to behavior and cognition for many years, with historical beliefs associating it with cognitive impairment. However, recent research has challenged these beliefs and suggested potential positive effects of the menstrual cycle on cognitive performance. Despite these emerging findings, there is still a lack of consensus regarding the impact of the menstrual cycle on cognition, particularly in domains such as spatial reasoning, visual memory, and numerical memory. Hence, this study aimed to explore the relationship between the menstrual cycle and cognitive performance in these specific domains. Previous studies have reported mixed findings, with some suggesting no significant association and others indicating potential differences across the menstrual cycle. To contribute to this body of knowledge, we explored the research question of whether the menstrual cycle has a significant effect on cognition, particularly in the domains of spatial reasoning and visual and numerical memory, in a regionally diverse sample of menstruating females. A total of 30 menstruating females from mixed geographical backgrounds participated in the study, and a repeated measures design was used to assess their cognitive performance in two phases of the menstrual cycle: follicular and luteal. The results of the study revealed that while spatial reasoning was not significantly related to the menstrual cycle (p = 0.256), both visual and numerical memory had significant positive associations (p < 0.001) with the luteal phase. However, since the effect sizes were very small, the practical importance of this relationship may easily be overestimated. Future studies could thus employ designs with larger sample sizes, include neuro-biological measures of menstrual stages, and consequently inform competent interventions and support systems.
基金the National Natural Science Founda-tion of China(62062062)hosted by Gulila Altenbek.
Abstract: Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events, we propose an Independent Recurrent Temporal Graph Convolution Networks (IndRT-GCNets) framework to efficiently and accurately capture event attribute information. The framework models the knowledge graph sequences to learn the evolutionary representations of entities and relations within each period. First, by utilizing the temporal graph convolution module in the evolutionary representation unit, the framework captures the structural dependency relationships within the knowledge graph in each period. Meanwhile, to achieve better event representation and establish effective correlations, an independent recurrent neural network is employed to implement auto-regressive modeling. Furthermore, static attributes of entities in the entity-relation events are constrained and merged using a static graph constraint to obtain optimal entity representations. Finally, the evolution of entity and relation representations is utilized to predict events in the next subsequent step. On multiple real-world datasets such as Freebase13 (FB13), Freebase15k (FB15K), WordNet11 (WN11), WordNet18 (WN18), FB15K-237, WN18RR, YAGO3-10, and NELL-995, the results on multiple evaluation indicators show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks, which validates its effectiveness and robustness.
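The independent recurrent unit used for the auto-regressive modeling keeps one scalar recurrent weight per hidden neuron, so hidden units do not interact across time steps. A hypothetical single-step sketch (the dimensions and the ReLU activation are assumptions, not the paper's configuration):

```python
import numpy as np

def indrnn_step(x_t, h_prev, W, u):
    """One IndRNN step: h_t = relu(W @ x_t + u * h_prev).

    u is a per-neuron vector, so each hidden unit recurs only with its own
    past value (elementwise product), unlike a full recurrent matrix.
    """
    return np.maximum(0.0, W @ x_t + u * h_prev)

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3))   # input-to-hidden weights (assumed sizes)
u = rng.uniform(0, 1, size=4)     # independent recurrent weights
h = np.zeros(4)
for x_t in rng.standard_normal((5, 3)):   # a length-5 input sequence
    h = indrnn_step(x_t, h, W, u)
print(h.shape)  # (4,)
```

Keeping the recurrence elementwise makes long sequences easier to train, since each unit's gradient through time depends on a single scalar weight rather than a full matrix product.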
Funding: the National Natural Science Foundation of China (No. 61976080), the Academic Degrees & Graduate Education Reform Project of Henan Province (No. 2021SJGLX195Y), the Teaching Reform Research and Practice Project of Henan Undergraduate Universities (No. 2022SYJXLX008), and the Key Project on Research and Practice of Henan University Graduate Education and Teaching Reform (No. YJSJG2023XJ006).
Abstract: Unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain. However, the advanced approaches available employ a multi-generator mechanism to model different domain mappings, which results in inefficient training of neural networks and pattern collapse, leading to inefficient generation of image diversity. To address this issue, this paper introduces a multi-modal unsupervised image translation framework that uses a single generator to perform multi-modal image translation. Specifically, first, a domain code is introduced to explicitly control the different generation tasks. Second, the paper brings in the squeeze-and-excitation (SE) mechanism and a feature attention (FA) module. Finally, the model integrates multiple optimization objectives to ensure efficient multi-modal translation. The paper performs qualitative and quantitative experiments on multiple unpaired benchmark image translation datasets and demonstrates the benefits of the proposed method over existing technologies. Overall, the experimental results show that the proposed method is versatile and scalable.
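The SE mechanism referenced above squeezes each channel to a scalar, excites through a small bottleneck, and rescales the channels. A NumPy sketch under assumed shapes and a reduction ratio of 2 (the paper's configuration is not stated in the abstract):

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation over a (C, H, W) feature map.

    squeeze: per-channel global average pool -> (C,)
    excite:  two small dense layers (ReLU then sigmoid) -> channel gates
    scale:   multiply each channel by its gate
    """
    squeezed = x.mean(axis=(1, 2))                  # (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)         # (C // r,) bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # (C,), gates in (0, 1)
    return x * gates[:, None, None]

rng = np.random.default_rng(2)
c, r = 8, 2
x = rng.random((c, 5, 5))
w1 = rng.standard_normal((c // r, c))   # assumed random weights for the sketch
w2 = rng.standard_normal((c, c // r))
y = se_block(x, w1, w2)
print(y.shape)  # (8, 5, 5)
```

In the real module w1 and w2 are learned, so the gates come to encode which channels matter for the current input.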
Abstract: In this paper, we draw on the teaching and learning situation of deaf and hard-of-hearing students in the Linear Algebra course of the Computer Science and Technology major at the Nanjing Normal University of Special Education. Based on the cognitive styles of deaf and hard-of-hearing students, we apply example induction, exhaustive induction, and mathematical induction to the teaching of Linear Algebra using specific course content. The aim is to design comprehensive teaching that caters to the cognitive style characteristics of deaf and hard-of-hearing students; to strengthen their mathematical thinking styles such as quantitative thinking, algorithmic thinking, symbolic thinking, visual thinking, logical thinking, and creative thinking; and to enhance the effectiveness of classroom teaching and learning outcomes in Linear Algebra for deaf and hard-of-hearing students.
基金This work was supported in part by the Ministry National Key Research and Development Project(Grant No.2020AAA0108101)the National Natural Science Foundation of China(Grants No.62125101,62341101,62001018,and 62301011)+1 种基金Shandong Natural Science Foundation(Grant No.ZR2023YQ058)the New Cornerstone Science Foundation through the XPLORER PRIZE.The authors would like to thank Mengyuan Lu and Zengrui Han for their help in the construction of electromagnetic space in Wireless InSite simulation platform and Weibo Wen,Qi Duan,and Yong Yu for their help in the construction of phys ical space in AirSim simulation platform.
Abstract: The sixth generation (6G) mobile communication system is witnessing a new paradigm shift, i.e., the integrated sensing-communication system. A comprehensive dataset is a prerequisite for 6G integrated sensing-communication research. This paper develops a novel simulation dataset, named M3SC, for mixed multi-modal (MMM) sensing-communication integration, and the generation framework of the M3SC dataset is further given. To obtain multi-modal sensory data in physical space and communication data in electromagnetic space, we utilize AirSim and WaveFarer to collect multi-modal sensory data and exploit Wireless InSite to collect communication data. Furthermore, the in-depth integration and precise alignment of AirSim, WaveFarer, and Wireless InSite are achieved. The M3SC dataset covers various weather conditions, multiple frequency bands, and different times of the day. Currently, the M3SC dataset contains 1500 snapshots, each including 80 RGB images, 160 depth maps, 80 LiDAR point clouds, 256 sets of mmWave waveforms with 8 radar point clouds, and 72 channel impulse response (CIR) matrices, thus totaling 120,000 RGB images, 240,000 depth maps, 120,000 LiDAR point clouds, 384,000 sets of mmWave waveforms with 12,000 radar point clouds, and 108,000 CIR matrices. The data processing results present the multi-modal sensory information and the statistical properties of the communication channel. Finally, the MMM sensing-communication applications that can be supported by the M3SC dataset are discussed.
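The per-snapshot counts and the stated totals are mutually consistent, as a quick check confirms (each total is the snapshot count times the per-snapshot quantity):

```python
snapshots = 1500
per_snapshot = {
    "RGB images": 80,
    "depth maps": 160,
    "LiDAR point clouds": 80,
    "mmWave waveform sets": 256,
    "radar point clouds": 8,
    "CIR matrices": 72,
}
# Scale every per-snapshot quantity by the number of snapshots.
totals = {name: snapshots * n for name, n in per_snapshot.items()}
print(totals["RGB images"], totals["mmWave waveform sets"])  # 120000 384000
```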
Funding: National Key R&D Program of China (No. 2022ZD0118401).
Abstract: Deep learning based methods have been successfully applied to semantic segmentation of optical remote sensing images. However, as more and more remote sensing data become available, it is a new challenge to comprehensively utilize multi-modal remote sensing data to break through the performance bottleneck of single-modal interpretation. In addition, semantic segmentation and height estimation in remote sensing data are two strongly correlated tasks, but existing methods usually study the individual tasks separately, which leads to high computational resource overhead. To this end, we propose a Multi-Task learning framework for Multi-Modal remote sensing images (MM_MT). Specifically, we design a Cross-Modal Feature Fusion (CMFF) method, which aggregates complementary information from different modalities to improve the accuracy of semantic segmentation and height estimation. Besides, a dual-stream multi-task learning method is introduced for Joint Semantic Segmentation and Height Estimation (JSSHE), extracting common features in a shared network to save time and resources, and then learning task-specific features in two task branches. Experimental results on the public multi-modal remote sensing image dataset Potsdam show that, compared to training the two tasks independently, multi-task learning saves 20% of training time and achieves competitive performance, with an mIoU of 83.02% for semantic segmentation and an accuracy of 95.26% for height estimation.
基金This work was supported by National Natural Science Foundation of China(No.62172308,No.U1626107,No.61972297,No.62172144,and No.62062019).
Abstract: PowerShell has been widely deployed in fileless malware and advanced persistent threat (APT) attacks due to its high stealthiness and living-off-the-land technique. However, existing works mainly focus on deobfuscation and malicious detection, lacking malicious PowerShell family classification and behavior analysis. Moreover, the state-of-the-art methods fail to capture fine-grained features and semantic relationships, resulting in low robustness and accuracy. To this end, we propose PowerDetector, a novel malicious PowerShell script detector based on multi-modal semantic fusion and deep learning. Specifically, we design four feature extraction methods to extract key features from the character, token, abstract syntax tree (AST), and semantic knowledge graph views. Then, we design four embeddings (i.e., Char2Vec, Token2Vec, AST2Vec, and Rela2Vec) and construct a multi-modal fusion algorithm to concatenate the feature vectors from the different views. Finally, we propose a combined model based on a Transformer and a CNN-BiLSTM to implement PowerShell family detection. Our experiments with five types of PowerShell attacks show that PowerDetector can accurately detect various obfuscated and stealthy PowerShell scripts, with a 0.9402 precision, a 0.9358 recall, and a 0.9374 F1-score. Furthermore, through single-modal and multi-modal comparison experiments, we demonstrate that PowerDetector's multi-modal embedding and deep learning model can achieve better accuracy and even identify more unknown attacks.
Funding: Our work is supported by the National Key R&D Program of China (2021YFB2012400).
Abstract: With the growing discovery of exposed vulnerabilities in Industrial Control Components (ICCs), identification of the exploitable ones is urgent for Industrial Control System (ICS) administrators to proactively forecast potential threats. However, it is not a trivial task due to the complexity of the multi-source heterogeneous data and the lack of automatic analysis methods. To address these challenges, we propose an exploitability reasoning method based on an ICC-Vulnerability Knowledge Graph (KG) in which relation paths contain abundant potential evidence to support the reasoning. The reasoning task in this work refers to determining whether a specific relation is valid between an attacker entity and a possibly exploitable vulnerability entity with the help of a collection of critical paths. The proposed method consists of three primary building blocks: KG construction, relation path representation, and query relation reasoning. A security-oriented ontology that incorporates exploit modeling provides a guideline for the integration of the scattered knowledge while constructing the KG. We emphasize the role of attention-based aggregation in representation learning and the ultimate reasoning. In order to acquire high-quality representations, the entity and relation embeddings take advantage of their local structure and related semantics. The critical paths are assigned corresponding attentive weights and are then aggregated for the determination of the query relation's validity. In particular, similarity calculation is introduced into the critical path selection algorithm, which improves search and reasoning performance. Meanwhile, the proposed algorithm avoids redundant paths between the given pairs of entities. Experimental results show that the proposed method outperforms the state-of-the-art ones in the aspects of embedding quality and query relation reasoning accuracy.
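The attentive aggregation of critical paths can be sketched as a softmax over per-path scores followed by a weighted sum of path representations. Everything below (the scores, dimensions, and random embeddings) is illustrative, not the paper's learned model:

```python
import numpy as np

def aggregate_paths(path_vecs, path_scores):
    """Softmax the path scores into attentive weights, then pool the
    path representations into one vector for the relation-validity decision."""
    s = np.asarray(path_scores, dtype=float)
    s -= s.max()                          # numerical stability
    w = np.exp(s) / np.exp(s).sum()       # attentive weights, sum to 1
    return w, (w[:, None] * path_vecs).sum(axis=0)

paths = np.random.rand(4, 6)      # four critical-path embeddings (assumed d=6)
scores = [2.0, 0.5, 0.1, -1.0]    # illustrative per-path relevance scores
weights, pooled = aggregate_paths(paths, scores)
print(round(weights.sum(), 6), pooled.shape)  # weights sum to 1.0; one (6,) vector
```

The pooled vector would then be scored against the query relation embedding to decide whether the attacker-vulnerability relation holds.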