Funding: Supported by the Technology and Innovation Major Project of the Ministry of Science and Technology of China (2020AAA0108400, 2020AAA0108403) and the Tsinghua Precision Medicine Foundation (10001020109).
Abstract: Deep multi-modal learning, a rapidly growing field with a wide range of practical applications, aims to effectively utilize and integrate information from multiple sources, known as modalities. Despite its impressive empirical performance, the theoretical foundations of deep multi-modal learning have yet to be fully explored. In this paper, we will undertake a comprehensive survey of recent developments in multi-modal learning theories, focusing on the fundamental properties that govern this field. Our goal is to provide a thorough collection of current theoretical tools for analyzing multi-modal learning, to clarify their implications for practitioners, and to suggest future directions for the establishment of a solid theoretical foundation for deep multi-modal learning.
Funding: Supported by the Natural Science Foundation of Liaoning Province (Grant No. 2023-MSBA-070) and the National Natural Science Foundation of China (Grant No. 62302086).
Abstract: Multi-modal fusion technology has gradually become a fundamental task in many fields, such as autonomous driving, smart healthcare, sentiment analysis, and human-computer interaction. It is rapidly becoming a dominant research direction due to its powerful perception and judgment capabilities. Under complex scenes, multi-modal fusion technology utilizes the complementary characteristics of multiple data streams to fuse different data types and achieve more accurate predictions. However, achieving outstanding performance is challenging because of equipment performance limitations, missing information, and data noise. This paper comprehensively reviews existing methods based on multi-modal fusion techniques and completes a detailed and in-depth analysis. According to the data fusion stage, multi-modal fusion has four primary methods: early fusion, deep fusion, late fusion, and hybrid fusion. The paper surveys the three major multi-modal fusion technologies that can significantly enhance the effect of data fusion and further explores the applications of multi-modal fusion technology in various fields. Finally, it discusses the challenges and explores potential research opportunities. Multi-modal tasks still need intensive study because of data heterogeneity and quality. Preserving complementary information and eliminating redundant information between modalities is critical in multi-modal technology. Invalid data fusion methods may introduce extra noise and lead to worse results. This paper provides a comprehensive and detailed summary in response to these challenges.
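To make the distinction between fusion stages concrete, the following minimal NumPy sketch contrasts early fusion (concatenating raw modality features before one joint model) with late fusion (combining per-modality decisions afterwards). The feature sizes, the toy linear scorers, and the equal decision weights are illustrative assumptions, not any specific architecture surveyed above:

import numpy as np

rng = np.random.default_rng(0)

# Toy features from two modalities, e.g., an image stream and an audio stream.
x_img = rng.normal(size=8)      # 8-dim "image" feature (illustrative)
x_aud = rng.normal(size=4)      # 4-dim "audio" feature (illustrative)

def linear_score(x, w):
    """Stand-in for a learned predictor: a dot product squashed to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x @ w))

# Early fusion: concatenate raw features, then apply one joint model.
x_early = np.concatenate([x_img, x_aud])
w_joint = rng.normal(size=x_early.shape[0])
p_early = linear_score(x_early, w_joint)

# Late fusion: run one model per modality, then combine the decisions.
w_img, w_aud = rng.normal(size=8), rng.normal(size=4)
p_late = 0.5 * linear_score(x_img, w_img) + 0.5 * linear_score(x_aud, w_aud)

print(f"early-fusion score: {p_early:.3f}, late-fusion score: {p_late:.3f}")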
Funding: Supported by the Ministry of Science and Technology of China, No. 2020AAA0109605 (to XL), and the Meizhou Major Scientific and Technological Innovation Platforms Projects of Guangdong Provincial Science & Technology Plan Projects, No. 2019A0102005 (to HW).
Abstract: Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration is dependent on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multi-modal datasets, with six prior models that achieved good action classification performance, including the I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had a higher clinical value compared with the other approaches. Moreover, the multi-modal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Funding: Funded by the Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R435), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Heart disease remains a leading cause of morbidity and mortality worldwide, highlighting the need for improved diagnostic methods. Traditional diagnostics face limitations such as reliance on single-modality data and vulnerability to apparatus faults, which can reduce accuracy, especially with poor-quality images. Additionally, these methods often require significant time and expertise, making them less accessible in resource-limited settings. Emerging technologies like artificial intelligence and machine learning offer promising solutions by integrating multi-modality data and enhancing diagnostic precision, ultimately improving patient outcomes and reducing healthcare costs. This study introduces Heart-Net, a multi-modal deep learning framework designed to enhance heart disease diagnosis by integrating data from Cardiac Magnetic Resonance Imaging (MRI) and Electrocardiogram (ECG). Heart-Net uses a 3D U-Net for MRI analysis and a Temporal Convolutional Graph Neural Network (TCGN) for ECG feature extraction, combining these through an attention mechanism to emphasize relevant features. Classification is performed using the Optimized TCGN. This approach improves early detection, reduces diagnostic errors, and supports personalized risk assessments and continuous health monitoring. The results show that Heart-Net significantly outperforms traditional single-modality models, achieving accuracies of 92.56% for Heartnet Dataset I (HNET-DS I), 93.45% for Heartnet Dataset II (HNET-DS II), and 91.89% for Heartnet Dataset III (HNET-DS III), mitigating the impact of apparatus faults and image quality issues. These findings underscore the potential of Heart-Net to revolutionize heart disease diagnostics and improve clinical outcomes.
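The attention step described above can be pictured with a hedged NumPy sketch in which two modality feature vectors, standing in for the 3D U-Net MRI features and the TCGN ECG features, are combined with softmax weights derived from the features themselves. The dimensions and the single scoring vector are illustrative assumptions, not Heart-Net's actual layers:

import numpy as np

rng = np.random.default_rng(1)

f_mri = rng.normal(size=16)   # stand-in for 3D U-Net features from cardiac MRI
f_ecg = rng.normal(size=16)   # stand-in for TCGN features from the ECG

feats = np.stack([f_mri, f_ecg])          # (2, 16): one row per modality
w_score = rng.normal(size=16)             # illustrative attention scoring vector

scores = feats @ w_score                  # one relevance score per modality
alphas = np.exp(scores) / np.exp(scores).sum()   # softmax attention weights

fused = alphas @ feats                    # (16,) attention-weighted fused feature
print("attention weights:", np.round(alphas, 3))
print("fused feature norm:", round(float(np.linalg.norm(fused)), 3))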
Funding: This work was supported by the National Natural Science Foundation of China (No. 62172308, No. U1626107, No. 61972297, No. 62172144, and No. 62062019).
Abstract: PowerShell has been widely deployed in fileless malware and advanced persistent threat (APT) attacks due to its high stealthiness and living-off-the-land technique. However, existing works mainly focus on deobfuscation and malicious detection, lacking classification of malicious PowerShell families and behavior analysis. Moreover, the state-of-the-art methods fail to capture fine-grained features and semantic relationships, resulting in low robustness and accuracy. To this end, we propose PowerDetector, a novel malicious PowerShell script detector based on multi-modal semantic fusion and deep learning. Specifically, we design four feature extraction methods to extract key features from the character, token, abstract syntax tree (AST), and semantic knowledge graph views. Then, we design four embeddings (i.e., Char2Vec, Token2Vec, AST2Vec, and Rela2Vec) and construct a multi-modal fusion algorithm to concatenate feature vectors from the different views. Finally, we propose a combined model based on a Transformer and CNN-BiLSTM to implement PowerShell family detection. Our experiments with five types of PowerShell attacks show that PowerDetector can accurately detect various obfuscated and stealthy PowerShell scripts, with a 0.9402 precision, a 0.9358 recall, and a 0.9374 F1-score. Furthermore, through single-modal and multi-modal comparison experiments, we demonstrate that PowerDetector's multi-modal embedding and deep learning model can achieve better accuracy and even identify more unknown attacks.
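A minimal sketch of the fusion step, concatenating fixed-length vectors from the character, token, AST, and relation views into one feature vector for the downstream classifier, is shown below; the dimensions and random placeholder vectors are assumptions for illustration, not the actual Char2Vec, Token2Vec, AST2Vec, or Rela2Vec models:

import numpy as np

rng = np.random.default_rng(2)

# Placeholder outputs of the four view-specific embedders for one script.
char_vec  = rng.normal(size=64)    # stands in for Char2Vec
token_vec = rng.normal(size=64)    # stands in for Token2Vec
ast_vec   = rng.normal(size=32)    # stands in for AST2Vec
rela_vec  = rng.normal(size=32)    # stands in for Rela2Vec

# Multi-modal fusion by concatenation: one vector the family classifier consumes.
fused = np.concatenate([char_vec, token_vec, ast_vec, rela_vec])
print("fused feature length:", fused.shape[0])   # 192 in this toy setup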
Funding: National Key R&D Program of China (No. 2022ZD0118401).
Abstract: Deep learning based methods have been successfully applied to semantic segmentation of optical remote sensing images. However, as more and more remote sensing data become available, it is a new challenge to comprehensively utilize multi-modal remote sensing data to break through the performance bottleneck of single-modal interpretation. In addition, semantic segmentation and height estimation in remote sensing data are two strongly correlated tasks, but existing methods usually study the individual tasks separately, which leads to high computational resource overhead. To this end, we propose a Multi-Task learning framework for Multi-Modal remote sensing images (MM_MT). Specifically, we design a Cross-Modal Feature Fusion (CMFF) method, which aggregates complementary information from different modalities to improve the accuracy of semantic segmentation and height estimation. Besides, a dual-stream multi-task learning method is introduced for Joint Semantic Segmentation and Height Estimation (JSSHE), extracting common features in a shared network to save time and resources, and then learning task-specific features in two task branches. Experimental results on the public multi-modal remote sensing image dataset Potsdam show that, compared to training the two tasks independently, multi-task learning saves 20% of training time and achieves competitive performance with an mIoU of 83.02% for semantic segmentation and an accuracy of 95.26% for height estimation.
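The shared-backbone, dual-head idea behind JSSHE can be sketched as follows. This is a hedged toy example with a tiny shared linear encoder and two task heads (per-class segmentation scores and a scalar height), not the MM_MT network itself:

import numpy as np

rng = np.random.default_rng(3)

x = rng.normal(size=32)                 # fused multi-modal feature for one pixel/patch

# Shared encoder: common features computed once and reused by both tasks.
W_shared = rng.normal(size=(16, 32))
h = np.tanh(W_shared @ x)

# Task branch 1: semantic segmentation scores over (say) 6 classes.
W_seg = rng.normal(size=(6, 16))
seg_scores = W_seg @ h
seg_class = int(np.argmax(seg_scores))

# Task branch 2: height estimation as a single regression output.
w_height = rng.normal(size=16)
height = float(w_height @ h)

print("predicted class:", seg_class, "| predicted height:", round(height, 2))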
Funding: This work is supported by the Fundamental Research Funds for the Central Universities (Grant No. HIT.NSRIF.201714), the Weihai Science and Technology Development Program (2016DXGJMS15), the Weihai Scientific Research and Innovation Fund (2020), and the Key Research and Development Program in Shandong Province (2017GGX90103).
Abstract: Knowledge graphs with abundant relational information have been widely used as the basic data support for retrieval platforms. Image and text descriptions added to the knowledge graph enrich the node information, which accounts for the advantage of the multi-modal knowledge graph. In the field of cross-modal retrieval platforms, multi-modal knowledge graphs can help to improve retrieval accuracy and efficiency because of the abundant relational information provided by knowledge graphs. The representation learning method is significant to the application of multi-modal knowledge graphs. This paper proposes a distributed collaborative vector retrieval platform (DCRL-KG) using the multi-modal knowledge graph VisualSem as the foundation to achieve efficient and high-precision multimodal data retrieval. Firstly, distributed technology is used to classify and store the data in the knowledge graph to improve retrieval efficiency. Secondly, this paper uses BabelNet to expand the knowledge graph through multiple filtering processes and increase the diversification of information. Finally, this paper builds a variety of retrieval models and fuses their retrieval results through linear combination methods to achieve high-precision language retrieval and image retrieval. The paper uses sentence retrieval and image retrieval experiments to prove that the platform can optimize the storage structure of the multi-modal knowledge graph and performs well in multi-modal space.
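The result-fusion step, in which the scores of several retrieval models are combined linearly, can be illustrated with the hedged sketch below; the three score lists and the mixing weights are made-up numbers rather than the platform's tuned values:

import numpy as np

# Toy relevance scores from three retrieval models for the same five candidates.
scores_text  = np.array([0.9, 0.2, 0.4, 0.7, 0.1])
scores_image = np.array([0.6, 0.3, 0.8, 0.5, 0.2])
scores_kg    = np.array([0.7, 0.1, 0.5, 0.9, 0.3])

# Linear combination; the weights are illustrative and would normally be tuned.
weights = np.array([0.5, 0.3, 0.2])
fused = weights[0] * scores_text + weights[1] * scores_image + weights[2] * scores_kg

ranking = np.argsort(-fused)            # best candidate first
print("fused scores:", np.round(fused, 3))
print("ranked candidate indices:", ranking.tolist())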
Abstract: In recent years, research on facial expression recognition (FER) under masks has been trending. Wearing a mask for protection from COVID-19 has become a compulsion, and it hides facial expressions, which is why FER under a mask is a difficult task. The prevailing unimodal techniques for facial recognition do not produce good results for masked faces; however, a multi-modal technique can be employed to generate better results. We propose a multi-modal methodology based on deep learning for facial recognition under a masked face using facial and vocal expressions. The multimodal network has been trained on facial and vocal datasets. We have used two standard datasets: M-LFW for the masked-face dataset, and the CREMA-D and TESS datasets for vocal expressions. The vocal expressions are in the form of audio, while the face data are in image form, so the data are heterogeneous. In order to make the data homogeneous, the voice data are converted into images by taking spectrograms. A spectrogram embeds important features of the voice and converts the audio format into an image. Later, the dataset is passed to the multimodal neural network for training, and the experimental results demonstrate that the proposed multimodal algorithm outperforms unimodal methods and other state-of-the-art deep neural network models.
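The audio-to-image conversion described above, turning a waveform into a spectrogram so that both modalities become images, can be sketched with a plain NumPy short-time Fourier transform; the synthetic two-tone signal, frame length, and hop size are illustrative assumptions:

import numpy as np

# Synthetic 1-second "voice" signal at 16 kHz (a mix of two tones).
sr = 16000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

frame, hop = 512, 256                       # STFT window and hop sizes (assumed values)
window = np.hanning(frame)
n_frames = 1 + (len(signal) - frame) // hop

# Magnitude spectrogram: one FFT per windowed frame, stacked into a 2-D "image".
spec = np.stack([
    np.abs(np.fft.rfft(window * signal[i * hop : i * hop + frame]))
    for i in range(n_frames)
]).T

print("spectrogram image shape (freq bins x time frames):", spec.shape)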
Abstract: Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This abstract provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all the fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize all the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation. At the same time, another purpose of this paper is to explore the future perspectives these techniques can provide in combating this disabling disease.
Abstract: Existing visual scene understanding methods mainly focus on identifying coarse-grained concepts about the visual objects and their relationships, largely neglecting fine-grained scene understanding. In fact, many data-driven applications on the Web (e.g., news reading and e-shopping) require accurate recognition of much finer-grained concepts as entities and proper linking of them to a knowledge graph (KG), which can take their performance to the next level. In light of this, in this paper, we identify a new research task: visual entity linking for fine-grained scene understanding. To accomplish the task, we first extract features of candidate entities from different modalities, i.e., visual features, textual features, and KG features. Then, we design a deep modal-attention neural-network-based learning-to-rank method which aggregates all features and maps visual objects to entities in the KG. Extensive experimental results on the newly constructed dataset show that our proposed method is effective, as it significantly improves the accuracy from 66.46% to 83.16% compared with baselines.
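A hedged sketch of the ranking idea, aggregating the visual, textual, and KG features of each candidate entity with attention weights and scoring it against the visual object, is given below; the feature sizes, random placeholders, and dot-product scorer are illustrative and are not the paper's trained network:

import numpy as np

rng = np.random.default_rng(4)

obj_feat = rng.normal(size=32)                 # feature of the visual object to link

# Three candidate entities, each with visual / textual / KG features (placeholders).
cand_feats = rng.normal(size=(3, 3, 32))       # (candidates, modalities, dim)

w_att = rng.normal(size=32)                    # illustrative modal-attention vector

def candidate_score(feats):
    """Attention-weight the modality features, then score against the object."""
    att = np.exp(feats @ w_att)
    att = att / att.sum()                      # softmax over the three modalities
    agg = att @ feats                          # (32,) aggregated candidate feature
    return float(obj_feat @ agg)               # dot-product relevance score

scores = [candidate_score(cand_feats[i]) for i in range(cand_feats.shape[0])]
best = int(np.argmax(scores))
print("candidate scores:", np.round(scores, 3), "| linked entity index:", best)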
Abstract: To address the challenges faced by large-scale UAV ad hoc networks, such as diverse task requirements, complex electromagnetic environments, and high node mobility, and taking full account of the high-speed movement of UAV nodes, a UAV multi-point relay (MPR) selection method is designed based on UAV topology stability and link communication capacity metrics. To reduce the network routing update time and increase the stability and reliability of the UAV ad hoc network routing strategy, a Q-learning based adaptive link state routing protocol (QALSR) is proposed. Simulation results show that the performance metrics of the proposed algorithm outperform those of existing proactive routing protocols.
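The Q-learning component of QALSR can be illustrated with the standard tabular update rule Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)) applied to next-hop selection; the three-node topology, reward values, and learning parameters below are illustrative assumptions, not the protocol's actual state or reward design:

import random

# States are current relay nodes, actions are candidate next hops (toy topology).
states, actions = ["A", "B", "C"], ["A", "B", "C"]
Q = {(s, a): 0.0 for s in states for a in actions}

alpha, gamma, epsilon = 0.1, 0.9, 0.2          # learning rate, discount, exploration

def reward(s, a):
    """Toy link quality: stable, high-capacity links get a higher reward."""
    good_links = {("A", "B"): 1.0, ("B", "C"): 1.0}
    return good_links.get((s, a), -0.5)

random.seed(0)
s = "A"
for _ in range(500):
    # Epsilon-greedy next-hop choice.
    if random.random() < epsilon:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: Q[(s, x)])
    r, s_next = reward(s, a), a
    best_next = max(Q[(s_next, x)] for x in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    s = s_next

print("preferred next hop from A:", max(actions, key=lambda a: Q[("A", a)]))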
Funding: Supported by the Science and Technology Support Program of Qiandongnan Prefecture, No. Qiandongnan Sci-Tech Support [2021] 12, and the Guizhou Province High-Level Innovative Talent Training Program, No. Qiannan Thousand Talents [2022] 201701.
Abstract: BACKGROUND: Intensive care unit-acquired weakness (ICU-AW) is a common complication that significantly impacts the patient's recovery process, even leading to adverse outcomes. Currently, there is a lack of effective preventive measures. AIM: To identify significant risk factors for ICU-AW through iterative machine learning techniques and offer recommendations for its prevention and treatment. METHODS: Patients were categorized into ICU-AW and non-ICU-AW groups on the 14th day post-ICU admission. Relevant data from the initial 14 days of ICU stay, such as age, comorbidities, sedative dosage, vasopressor dosage, duration of mechanical ventilation, length of ICU stay, and rehabilitation therapy, were gathered. The relationships between these variables and ICU-AW were examined. Utilizing iterative machine learning techniques, a multilayer perceptron neural network model was developed, and its predictive performance for ICU-AW was assessed using the receiver operating characteristic curve. RESULTS: Within the ICU-AW group, age, duration of mechanical ventilation, lorazepam dosage, adrenaline dosage, and length of ICU stay were significantly higher than in the non-ICU-AW group. Additionally, the ratios of sepsis, multiple organ dysfunction syndrome, hypoalbuminemia, acute heart failure, respiratory failure, acute kidney injury, anemia, stress-related gastrointestinal bleeding, shock, hypertension, coronary artery disease, malignant tumors, and rehabilitation therapy were significantly higher in the ICU-AW group, demonstrating statistical significance. The most influential factors contributing to ICU-AW were identified as the length of ICU stay (100.0%) and the duration of mechanical ventilation (54.9%). The neural network model predicted ICU-AW with an area under the curve of 0.941, a sensitivity of 92.2%, and a specificity of 82.7%. CONCLUSION: The main factors influencing ICU-AW are the length of ICU stay and the duration of mechanical ventilation. A primary preventive strategy, when feasible, involves minimizing both ICU stay and mechanical ventilation duration.
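To make the modeling-and-evaluation pipeline concrete, here is a hedged scikit-learn sketch that trains a small multilayer perceptron on synthetic tabular data and reports the area under the ROC curve; the synthetic features, layer sizes, and data split are assumptions for illustration and are unrelated to the clinical dataset described above:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for tabular ICU features (age, ventilation duration, etc.).
X, y = make_classification(n_samples=1000, n_features=13, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Small multilayer perceptron; layer sizes are illustrative, not the paper's model.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Discrimination measured as the area under the ROC curve.
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"ROC AUC on the synthetic hold-out set: {auc:.3f}")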
Funding: Supported by the National Natural Science Foundation of China (NSFC) under Grant 62071179.
Abstract: Although Federated Deep Learning (FDL) enables distributed machine learning in the Internet of Vehicles (IoV), it requires multiple clients to upload model parameters, so unavoidable communication overhead and data privacy risks still exist. The recently proposed Swarm Learning (SL) provides a decentralized machine learning approach for edge computing with blockchain-based coordination. A Swarm-Federated Deep Learning framework for the IoV system (IoV-SFDL) that integrates SL into the FDL framework is proposed in this paper. The IoV-SFDL organizes vehicles to generate local SL models with adjacent vehicles based on blockchain-empowered SL, then aggregates the global FDL model among different SL groups with a credibility-weights prediction algorithm. Extensive experimental results show that, compared with the baseline frameworks, the proposed IoV-SFDL framework reduces the overhead of client-to-server communication by 16.72%, while the model performance improves by about 5.02% for the same training iterations.
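The aggregation step, combining per-group swarm-learning models into a global model with credibility weights, can be sketched as a weighted average of parameter vectors; the three toy models and their credibility scores are assumptions, not the IoV-SFDL credibility-weights prediction algorithm:

import numpy as np

# Parameter vectors of three swarm-learning group models (toy 4-parameter models).
group_params = np.array([
    [0.9, -0.2, 0.5, 1.1],
    [1.0, -0.1, 0.4, 1.0],
    [0.4,  0.6, 0.9, 0.2],   # e.g., a less trustworthy group
])

# Credibility weights (illustrative); they are normalized before aggregation.
credibility = np.array([0.8, 0.9, 0.2])
weights = credibility / credibility.sum()

# Credibility-weighted federated averaging of the group models.
global_params = weights @ group_params
print("global model parameters:", np.round(global_params, 3))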
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 41976193 and 42176243).
Abstract: In recent years, deep learning methods have gradually been applied to prediction tasks related to Arctic sea ice concentration, but relatively little research has been conducted for larger spatial and temporal scales, mainly due to the limited time coverage of observations and reanalysis data. Meanwhile, deep learning predictions of sea ice thickness (SIT) have yet to receive ample attention. In this study, two data-driven deep learning (DL) models are built based on the ConvLSTM and fully convolutional U-net (FC-Unet) algorithms, trained on CMIP6 historical simulations for transfer learning, and fine-tuned using reanalysis/observations. These models enable monthly predictions of Arctic SIT without considering the complex physical processes involved. Through comprehensive assessments of prediction skill by season and region, the results suggest that using a broader set of CMIP6 data for transfer learning, as well as incorporating multiple climate variables as predictors, contributes to better prediction results, although both DL models can effectively predict the spatiotemporal features of SIT anomalies. Regarding the predicted SIT anomalies of the FC-Unet model, the spatial correlations with reanalysis reach an average level of 89% over all months, while the temporal anomaly correlation coefficients are close to unity in most cases. The models also demonstrate robust performance in predicting SIT and sea ice extent (SIE) during extreme events. The effectiveness and reliability of the proposed deep transfer learning models in predicting Arctic SIT can facilitate more accurate pan-Arctic predictions, aiding climate change research and real-time business applications.
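One of the skill measures quoted above, the temporal anomaly correlation coefficient, is the Pearson correlation between the predicted and reference anomaly series; the short synthetic series in this sketch is an illustrative assumption, not CMIP6 or reanalysis data:

import numpy as np

rng = np.random.default_rng(5)

# Synthetic monthly SIT anomaly series: a "truth" and a noisy "prediction".
truth = rng.normal(size=120)                     # 10 years of monthly anomalies
pred = truth + 0.3 * rng.normal(size=120)        # prediction with added noise

def anomaly_correlation(a, b):
    """Pearson correlation of two anomaly series (means removed first)."""
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

print(f"temporal anomaly correlation coefficient: {anomaly_correlation(pred, truth):.3f}")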
Funding: Financially supported by the National Key Research and Development Program of China (No. 2016YFB0701202, No. 2017YFB0701500, and No. 2020YFB1505901), the National Natural Science Foundation of China (General Program No. 51474149, 52072240), the Shanghai Science and Technology Committee (No. 18511109300), the Science and Technology Commission of the CMC (2019JCJQZD27300), financial support from the University of Michigan and Shanghai Jiao Tong University joint funding, China (AE604401), and the Science and Technology Commission of Shanghai Municipality (No. 18511109302).
Abstract: Magnesium (Mg) alloys have shown great prospects as both structural and biomedical materials, while poor corrosion resistance limits their further application. In this work, to avoid time-consuming and laborious experimental trial and error, a high-throughput computational strategy based on first-principles calculations is designed for screening corrosion-resistant binary Mg alloys with intermetallics, from both the thermodynamic and kinetic perspectives. The stable binary Mg intermetallics with a low equilibrium potential difference with respect to the Mg matrix are first identified. Then, the hydrogen adsorption energies on the surfaces of these Mg intermetallics are calculated, and the corrosion exchange current density is further calculated by a hydrogen evolution reaction (HER) kinetic model. Several intermetallics, e.g., Y_(3)Mg, Y_(2)Mg, and La_(5)Mg, are identified to be promising intermetallics which might effectively hinder the cathodic HER. Furthermore, machine learning (ML) models are developed to predict Mg intermetallics with proper hydrogen adsorption energy employing the work function (W_(f)) and the weighted first ionization energy (WFIE). The generalization of the ML models is tested on five new binary Mg intermetallics with an average root mean square error (RMSE) of 0.11 eV. This study not only predicts some promising binary Mg intermetallics which may suppress galvanic corrosion, but also provides a high-throughput screening strategy and ML models for the design of corrosion-resistant alloys, which can be extended to ternary Mg alloys or other alloy systems.
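The screening logic, keeping intermetallics whose equilibrium potential difference with the Mg matrix is small and whose hydrogen adsorption energy disfavors the cathodic HER, and the RMSE used to judge the ML models can both be sketched in a few lines; every compound name and numeric value below is a made-up placeholder, and the thresholds are assumptions for illustration:

import numpy as np

# Toy candidate table: (equilibrium potential difference vs. Mg [V],
#                       hydrogen adsorption energy [eV]); all values are placeholders.
candidates = {
    "compound_A": (0.05, 0.45),
    "compound_B": (0.30, -0.10),
    "compound_C": (0.08, 0.60),
}

# Thermodynamic and kinetic screening thresholds (illustrative assumptions).
max_potential_diff = 0.10       # small galvanic driving force
min_h_adsorption = 0.20         # weak H adsorption, sluggish HER

promising = [name for name, (dphi, e_h) in candidates.items()
             if dphi <= max_potential_diff and e_h >= min_h_adsorption]
print("promising candidates:", promising)

# RMSE between ML-predicted and reference adsorption energies (toy numbers).
e_ref = np.array([0.45, -0.10, 0.60, 0.25, 0.05])
e_ml  = np.array([0.52, -0.02, 0.49, 0.31, 0.18])
rmse = float(np.sqrt(np.mean((e_ml - e_ref) ** 2)))
print(f"RMSE of the toy ML predictions: {rmse:.2f} eV")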