Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This abstract provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed across all fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation. It also explores the future perspectives these techniques can provide in combating this disabling disease.
Ransomware has emerged as a critical cybersecurity threat, characterized by its ability to encrypt user data or lock devices, demanding ransom for their release. Traditional ransomware detection methods face limitations due to their assumption of similar data distributions between training and testing phases, rendering them less effective against evolving ransomware families. This paper introduces TLERAD (Transfer Learning for Enhanced Ransomware Attack Detection), a novel approach that leverages unsupervised transfer learning and co-clustering techniques to bridge the gap between source and target domains, enabling robust detection of both known and unknown ransomware variants. The proposed method achieves high detection accuracy, with an AUC of 0.98 for known ransomware and 0.93 for unknown ransomware, significantly outperforming baseline methods. Comprehensive experiments demonstrate TLERAD's effectiveness in real-world scenarios, highlighting its adaptability to the rapidly evolving ransomware landscape. The paper also discusses future directions for enhancing TLERAD, including real-time adaptation, integration with lightweight and post-quantum cryptography, and the incorporation of explainable AI techniques.
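The co-clustering step that links source and target domains can be illustrated with a minimal sketch. It is an assumption-laden stand-in, not the TLERAD implementation: scikit-learn's SpectralCoclustering substitutes for the paper's co-clustering routine, and the feature matrices are synthetic placeholders.

# Minimal sketch: co-cluster pooled source/target samples and features,
# then train only on feature columns whose biclusters span both domains.
import numpy as np
from sklearn.cluster import SpectralCoclustering
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_src, y_src = rng.random((500, 40)), rng.integers(0, 2, 500)   # labelled source-domain ransomware features
X_tgt = rng.random((200, 40))                                    # unlabelled target-domain (new variants)

pooled = np.vstack([X_src, X_tgt])
cc = SpectralCoclustering(n_clusters=5, random_state=0).fit(pooled)

# Keep features belonging to biclusters that contain samples from both domains.
shared_cols = set()
for k in range(5):
    rows, cols = cc.get_indices(k)
    if (rows < len(X_src)).any() and (rows >= len(X_src)).any():
        shared_cols.update(cols.tolist())
shared_cols = sorted(shared_cols) or list(range(pooled.shape[1]))  # fallback if no bicluster spans both

clf = RandomForestClassifier(random_state=0).fit(X_src[:, shared_cols], y_src)
tgt_pred = clf.predict(X_tgt[:, shared_cols])    # predictions for unseen ransomware variants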
The potential for reducing greenhouse gas (GHG) emissions and energy consumption in wastewater treatment can be realized through intelligent control, with machine learning (ML) and multimodality emerging as a promising solution. Here, we introduce an ML technique based on multimodal strategies, focusing specifically on intelligent aeration control in wastewater treatment plants (WWTPs). The generalization of the multimodal strategy is demonstrated on eight ML models. The results demonstrate that this multimodal strategy significantly enhances model indicators for ML in environmental science and the efficiency of aeration control, exhibiting exceptional performance and interpretability. Integrating random forest with visual models achieves the highest accuracy in forecasting aeration quantity among the multimodal models, with a mean absolute percentage error of 4.4% and a coefficient of determination of 0.948. Practical testing in a full-scale plant reveals that the multimodal model can reduce operation costs by 19.8% compared to traditional fuzzy control methods. The potential application of these strategies in critical water science domains is discussed. To foster accessibility and promote widespread adoption, the multimodal ML models are freely available on GitHub, thereby eliminating technical barriers and encouraging the application of artificial intelligence in urban wastewater treatment.
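As a rough illustration of the multimodal idea (not the authors' released GitHub code), the sketch below concatenates hypothetical process variables with image-derived features and fits a random forest to predict aeration quantity; all feature names, shapes, and the synthetic target are placeholders.

# Sketch: fuse tabular sensor readings with visual features, then forecast aeration quantity.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error, r2_score

rng = np.random.default_rng(1)
sensor = rng.random((1000, 6))           # e.g. inflow, COD, NH4-N, DO, temperature, pH (placeholders)
visual = rng.random((1000, 16))          # embeddings produced by a vision model of the aeration tank
y = sensor @ rng.random(6) + visual @ rng.random(16)   # synthetic aeration quantity

X = np.hstack([sensor, visual])          # the multimodal feature vector
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = RandomForestRegressor(n_estimators=300, random_state=1).fit(X_tr, y_tr)
print("MAPE:", mean_absolute_percentage_error(y_te, model.predict(X_te)))
print("R2:  ", r2_score(y_te, model.predict(X_te)))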
Cross-Site Scripting (XSS) remains a significant threat to web application security, exploiting vulnerabilities to hijack user sessions and steal sensitive data. Traditional detection methods often fail to keep pace with the evolving sophistication of cyber threats. This paper introduces a novel hybrid ensemble learning framework that leverages a combination of advanced machine learning algorithms: Logistic Regression (LR), Support Vector Machines (SVM), eXtreme Gradient Boosting (XGBoost), Categorical Boosting (CatBoost), and Deep Neural Networks (DNN). Utilizing the XSS-Attacks-2021 dataset, which comprises 460 instances across various real-world traffic-related scenarios, this framework significantly enhances XSS attack detection. Our approach, which includes rigorous feature engineering and model tuning, not only optimizes accuracy but also effectively minimizes false positives (FP) (0.13%) and false negatives (FN) (0.19%). This comprehensive methodology has been rigorously validated, achieving an unprecedented accuracy of 99.87%. The proposed system is scalable and efficient, capable of adapting to the increasing number of web applications and user demands without a decline in performance. It demonstrates exceptional real-time capabilities, with the ability to detect XSS attacks dynamically, maintaining high accuracy and low latency even under significant loads. Furthermore, despite the computational complexity introduced by the hybrid ensemble approach, strategic use of parallel processing and algorithm tuning ensures that the system remains scalable and performs robustly in real-time applications. Designed for easy integration with existing web security systems, our framework supports adaptable Application Programming Interfaces (APIs) and a modular design, facilitating seamless augmentation of current defenses. This innovation represents a significant advancement in cybersecurity, offering a scalable and effective solution for securing modern web applications against evolving threats.
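A minimal sketch of one way to combine the named classifier families is shown below. It assumes a soft-voting arrangement with an MLP standing in for the DNN component and the xgboost/catboost packages installed; the paper's actual ensemble structure, hyperparameters, and feature pipeline are not reproduced here.

# Sketch: soft-voting hybrid ensemble over the classifier families named above.
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier      # stand-in for the DNN component
from sklearn.ensemble import VotingClassifier
from xgboost import XGBClassifier                     # assumes xgboost is installed
from catboost import CatBoostClassifier               # assumes catboost is installed

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),
        ("xgb", XGBClassifier(eval_metric="logloss")),
        ("cat", CatBoostClassifier(verbose=0)),
        ("dnn", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
    ],
    voting="soft",
)
# ensemble.fit(X_train, y_train); ensemble.predict(X_test)   # X/y would come from engineered XSS-Attacks-2021 features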
The visions of Industry 4.0 and 5.0 have reinforced the industrial environment. They have also established artificial intelligence as a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure, so that timely maintenance can ensure safe operations. Transfer learning is a promising solution that can enhance the machine fault diagnosis model by borrowing pre-trained knowledge from the source model and applying it to the target model, which typically involves two datasets. In response to the availability of multiple datasets, this paper proposes selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms, namely the hybrid selective algorithm, the transferability enhancement algorithm, and the incremental transfer learning algorithm. SA-ITL selects and orders appropriate datasets for transfer learning and selects useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with other algorithms from existing works, SA-ITL improves the accuracy on all datasets. Ablation studies present the accuracy enhancements of SA-ITL, including the hybrid selective algorithm (1.22%-3.82%), the transferability enhancement algorithm (1.91%-4.15%), and the incremental transfer learning algorithm (0.605%-2.68%). These results also show the benefits of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.
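The abstract does not give SA-ITL's selection criteria, so the sketch below is only a loose illustration of incremental transfer: source datasets are ranked by a crude domain-distance proxy (not the paper's transferability measure) and a warm-started classifier is fine-tuned on each in turn, ending with the target data. All datasets are assumed to share the feature dimension and label space.

# Sketch: order source datasets by a placeholder domain-distance proxy, then fine-tune incrementally.
import numpy as np
from sklearn.neural_network import MLPClassifier

def domain_distance(X_src, X_tgt):
    # Placeholder transferability proxy: distance between feature means (not SA-ITL's criterion).
    return np.linalg.norm(X_src.mean(axis=0) - X_tgt.mean(axis=0))

def incremental_transfer(source_sets, X_tgt, y_tgt):
    # source_sets: list of (X, y) pairs with the same feature dimension and label space as the target.
    ranked = sorted(source_sets, key=lambda s: domain_distance(s[0], X_tgt), reverse=True)
    model = MLPClassifier(hidden_layer_sizes=(64,), warm_start=True, max_iter=200)
    for X_s, y_s in ranked:           # most dissimilar first, most similar last
        model.fit(X_s, y_s)           # warm_start keeps the previously learned weights
    model.fit(X_tgt, y_tgt)           # final adaptation on the target dataset
    return model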
Background Deep convolutional neural networks have garnered considerable attention in numerous machine learning applications, particularly in visual recognition tasks such as image and video analyses. There is a growing interest in applying this technology to diverse applications in medical image analysis. Automated three-dimensional breast ultrasound is a vital tool for detecting breast cancer, and computer-assisted diagnosis software, developed based on deep learning, can effectively assist radiologists in diagnosis. However, the network model is prone to overfitting during training, owing to challenges such as insufficient training data. This study attempts to solve the problem caused by small datasets and improve model detection performance. Methods We propose a breast cancer detection framework based on deep learning (a transfer learning method based on cross-organ cancer detection) and a contrastive learning method based on the Breast Imaging Reporting and Data System (BI-RADS). Results When using cross-organ transfer learning and BI-RADS-based contrastive learning, the average sensitivity of the model increased by a maximum of 16.05%. Conclusion Our experiments have demonstrated that the parameters and experiences of cross-organ cancer detection can be mutually referenced, and the contrastive learning method based on BI-RADS can improve the detection performance of the model.
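One way the BI-RADS-based contrastive idea can be expressed is a supervised contrastive loss in which images sharing a BI-RADS category are treated as positives. The PyTorch sketch below is a minimal, assumed formulation, not the authors' implementation; the temperature and embedding source are placeholders.

# Sketch: supervised contrastive loss in which samples sharing a BI-RADS category are positives.
import torch
import torch.nn.functional as F

def birads_contrastive_loss(embeddings, birads_labels, temperature=0.1):
    z = F.normalize(embeddings, dim=1)                      # (N, d) L2-normalised embeddings
    sim = z @ z.t() / temperature                           # pairwise cosine similarities
    n = z.size(0)
    mask_self = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (birads_labels.unsqueeze(0) == birads_labels.unsqueeze(1)) & ~mask_self
    sim = sim.masked_fill(mask_self, float("-inf"))         # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos.float()).sum(dim=1) / pos_counts   # average over positives per anchor
    return loss.mean()

# Usage: loss = birads_contrastive_loss(encoder(images), birads)  # birads: LongTensor of category indices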
This paper introduces the Value Engineering method to calculate the value coefficient of Chinese learning efficiency for 377 international students by dimension. The results suggest that attention should be paid to male, European and American, rural, introverted, and "work or education needs" international students; that the driving role of reference-type students should be brought into full play; that there is room to improve the learning efficiency of improvement-type and attention-type students; and that the problems of problem-type students deserve attention.
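For readers unfamiliar with Value Engineering, the core quantity is the value coefficient V = F / C, the ratio of a normalised function (learning-outcome) share to a normalised cost (learning-engagement) share. The numbers and group names below are purely illustrative, not the survey data, and the interpretation comment is a common reading rather than the paper's own thresholds.

# Sketch: value coefficient V = F / C per student group (illustrative numbers only).
performance_share = {"male": 0.24, "female": 0.26, "European/American": 0.20, "Asian": 0.30}
engagement_share  = {"male": 0.27, "female": 0.24, "European/American": 0.24, "Asian": 0.25}

for group in performance_share:
    v = performance_share[group] / engagement_share[group]
    print(f"{group}: V = {v:.2f}")   # V < 1 suggests engagement is not translating into performance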
This paper investigates the related factors of Chinese learning engagement and performance of 377 students from three universities in Guangzhou, China, employs the value engineering method, and calculates the value coefficient of learning efficiency. The value coefficients of Chinese language learning show that over 60% of the international students studying in China learn Chinese with relatively high efficiency.
In order to explore the relationship between college students' English learning motivation and learning effect, this paper uses a questionnaire survey to investigate and analyze the English classroom learning motivation of first-year non-English majors. Based on the final English score, the paper analyzes the correlation between English learning motivation and learning effect.
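The correlation analysis described here is typically a Pearson correlation between a composite motivation score and the final exam score. The sketch below uses synthetic data purely to show the computation; the questionnaire scale and sample values are assumptions.

# Sketch: correlating questionnaire motivation scores with final English scores (synthetic data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
motivation = rng.normal(3.5, 0.6, 120)                       # composite 5-point Likert score per student
final_score = 55 + 8 * motivation + rng.normal(0, 6, 120)    # synthetic final English scores

r, p = pearsonr(motivation, final_score)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")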
High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles (IoVs). However, it is challenging to ensure high efficiency of local data learning models while preventing privacy leakage in a high-mobility environment. In order to protect data privacy and improve data learning efficiency in knowledge sharing, we propose an asynchronous federated broad learning (FBL) framework that integrates broad learning (BL) into federated learning (FL). In FBL, we design a broad fully connected model (BFCM) as a local model for training client data. To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients, we construct a joint resource allocation and reconfigurable intelligent surface (RIS) configuration optimization framework for FBL. The problem is decoupled into two convex subproblems. Aiming to improve the resource scheduling efficiency in FBL, a double Davidon–Fletcher–Powell (DDFP) algorithm is presented to solve the time slot allocation and RIS configuration problem. Based on the results of resource scheduling, we design a reward-allocation algorithm based on federated incentive learning (FIL) in FBL to compensate clients for their costs. The simulation results show that the proposed FBL framework achieves better performance than the comparison models in terms of efficiency, accuracy, and cost for knowledge sharing in the IoV.
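The BFCM is not specified in detail in this abstract, so the sketch below shows only a generic broad learning system (random feature nodes plus enhancement nodes, with output weights solved by ridge regression) as a plausible stand-in for the local client model; node counts and the regularisation constant are placeholders.

# Sketch: a generic broad-learning local model, standing in for the BFCM described above.
import numpy as np

def train_broad_model(X, Y, n_feature=40, n_enhance=60, lam=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    Wf = rng.normal(size=(X.shape[1], n_feature))
    Z = np.tanh(X @ Wf)                           # mapped feature nodes
    We = rng.normal(size=(n_feature, n_enhance))
    H = np.tanh(Z @ We)                           # enhancement nodes
    A = np.hstack([Z, H])
    # Ridge solution for the output weights: W = (A^T A + lam I)^-1 A^T Y
    W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W

def predict_broad_model(X, Wf, We, W):
    Z = np.tanh(X @ Wf)
    return np.hstack([Z, np.tanh(Z @ We)]) @ W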
Although Federated Deep Learning (FDL) enables distributed machine learning in the Internet of Vehicles (IoV), it requires multiple clients to upload model parameters, and thus still incurs unavoidable communication overhead and data privacy risks. The recently proposed Swarm Learning (SL) provides a decentralized machine learning approach for unit edge computing and blockchain-based coordination. A Swarm-Federated Deep Learning framework for the IoV system (IoV-SFDL) that integrates SL into the FDL framework is proposed in this paper. The IoV-SFDL organizes vehicles to generate local SL models with adjacent vehicles based on blockchain-empowered SL, then aggregates the global FDL model among different SL groups with a credibility-weights prediction algorithm. Extensive experimental results show that compared with the baseline frameworks, the proposed IoV-SFDL framework reduces the overhead of client-to-server communication by 16.72%, while the model performance improves by about 5.02% for the same training iterations.
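The credibility-weights prediction algorithm itself is not described in the abstract, so the sketch below only illustrates the aggregation step: SL-group models are combined into a global model using externally supplied credibility scores (assumed given). Parameter layouts and the example values are placeholders.

# Sketch: aggregate SL-group models into a global model using credibility weights.
import numpy as np

def aggregate(group_models, credibility):
    # group_models: list of dicts mapping parameter name -> np.ndarray
    # credibility:  per-group scores produced by the credibility-prediction step (assumed given here)
    w = np.asarray(credibility, dtype=float)
    w = w / w.sum()
    return {name: sum(wi * m[name] for wi, m in zip(w, group_models))
            for name in group_models[0]}

# Example with two SL groups and a single layer:
groups = [{"layer1": np.ones((2, 2))}, {"layer1": 3 * np.ones((2, 2))}]
print(aggregate(groups, credibility=[0.8, 0.2])["layer1"])   # weighted toward the more credible group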
In recent years, deep learning methods have gradually been applied to prediction tasks related to Arctic sea ice concentration, but relatively little research has been conducted for larger spatial and temporal scales, mainly due to the limited time coverage of observations and reanalysis data. Meanwhile, deep learning predictions of sea ice thickness (SIT) have yet to receive ample attention. In this study, two data-driven deep learning (DL) models are built based on the ConvLSTM and fully convolutional U-net (FC-Unet) algorithms, trained using CMIP6 historical simulations for transfer learning, and fine-tuned using reanalysis/observations. These models enable monthly predictions of Arctic SIT without considering the complex physical processes involved. Through comprehensive assessments of prediction skills by season and region, the results suggest that using a broader set of CMIP6 data for transfer learning, as well as incorporating multiple climate variables as predictors, contributes to better prediction results, although both DL models can effectively predict the spatiotemporal features of SIT anomalies. Regarding the predicted SIT anomalies of the FC-Unet model, the spatial correlations with reanalysis reach an average level of 89% over all months, while the temporal anomaly correlation coefficients are close to unity in most cases. The models also demonstrate robust performance in predicting SIT and sea ice extent (SIE) during extreme events. The effectiveness and reliability of the proposed deep transfer learning models in predicting Arctic SIT can facilitate more accurate pan-Arctic predictions, aiding climate change research and real-time business applications.
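A minimal Keras sketch of the ConvLSTM branch is shown below. The grid size, sequence length, predictor count, and layer widths are placeholders, not the paper's configuration; it is included only to show how a sequence of monthly predictor grids maps to an SIT anomaly field and where the transfer-learning stages fit.

# Sketch: a small ConvLSTM network mapping monthly predictor grids to a one-month-ahead SIT anomaly map.
import tensorflow as tf

T, H, W, C = 12, 64, 64, 5           # months of history, grid size, number of climate predictors (placeholders)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, H, W, C)),
    tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding="same", return_sequences=True),
    tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same"),
    tf.keras.layers.Conv2D(1, kernel_size=1),     # predicted SIT anomaly field
])
model.compile(optimizer="adam", loss="mae")
# Transfer learning as described above: fit on CMIP6 historical simulations first,
# then fit again (typically with a lower learning rate) on reanalysis/observations.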
Leveraging big data analytics and advanced algorithms to accelerate and optimize the process of molecular and materials design, synthesis, and application has revolutionized the field of molecular and materials science, allowing researchers to gain a deeper understanding of material properties and behaviors, leading to the development of new materials that are more efficient and reliable. However, the difficulty in constructing large-scale datasets of new molecules/materials due to the high cost of data acquisition and annotation limits the development of conventional machine learning (ML) approaches. Knowledge-reused transfer learning (TL) methods are expected to break this dilemma. The application of TL lowers the data requirements for model training, which makes TL stand out in research addressing data quality issues. In this review, we summarize recent progress in TL related to molecules and materials. We focus on the application of TL methods for the discovery of advanced molecules/materials, particularly the construction of TL frameworks for different systems and how TL can enhance the performance of models. In addition, the challenges of TL are also discussed.
The Advanced Geosynchronous Radiation Imager (AGRI) is a mission-critical instrument for the Fengyun series of satellites. AGRI acquires full-disk images every 15 min and views East Asia every 5 min through 14 spectral bands, enabling the detection of highly variable aerosol optical depth (AOD). Quantitative retrieval of AOD has hitherto been challenging, especially over land. In this study, an AOD retrieval algorithm is proposed that combines deep learning and transfer learning. The algorithm uses core concepts from both the Dark Target (DT) and Deep Blue (DB) algorithms to select features for the machine learning (ML) algorithm, allowing for AOD retrieval at 550 nm over both dark and bright surfaces. The algorithm consists of two steps: (1) a baseline deep neural network (DNN) with skip connections is developed using 10 min Advanced Himawari Imager (AHI) AODs as the target variable, and (2) sun photometer AODs from 89 ground-based stations are used to fine-tune the DNN parameters. Out-of-station validation shows that the retrieved AOD attains high accuracy, characterized by a coefficient of determination (R²) of 0.70, a mean bias error (MBE) of 0.03, and a percentage of data within the expected error (EE) of 70.7%. A sensitivity study reveals that the top-of-atmosphere reflectance at 650 and 470 nm, as well as the surface reflectance at 650 nm, are the largest sources of uncertainty impacting the retrieval. In a case study of monitoring an extreme aerosol event, the AGRI AOD is found to be able to capture the detailed temporal evolution of the event. This work demonstrates the superiority of the transfer learning technique in satellite AOD retrievals and the applicability of the retrieved AGRI AOD in monitoring extreme pollution events.
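The two-step structure (pretrain against AHI AODs, then fine-tune on sun photometer AODs) can be sketched as below. The network depth, width, input features, and learning rates are placeholders and do not reproduce the paper's architecture.

# Sketch: step 1 pretrains a skip-connected DNN on AHI AODs; step 2 fine-tunes on sun photometer AODs.
import tensorflow as tf

def make_aod_dnn(n_features, width=128, n_blocks=4):
    inp = tf.keras.Input(shape=(n_features,))
    x = tf.keras.layers.Dense(width, activation="relu")(inp)
    for _ in range(n_blocks):
        h = tf.keras.layers.Dense(width, activation="relu")(x)
        x = tf.keras.layers.Add()([x, h])          # skip connection
    out = tf.keras.layers.Dense(1)(x)              # AOD at 550 nm
    return tf.keras.Model(inp, out)

model = make_aod_dnn(n_features=20)                # reflectances, angles, surface descriptors (placeholders)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
# model.fit(X_agri, y_ahi_aod, ...)                # step 1: baseline trained against AHI AOD
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
# model.fit(X_station, y_sunphotometer_aod, ...)   # step 2: fine-tune on the 89 ground-based stations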
The Indian Himalayan region is frequently experiencing climate change-induced landslides. Thus, landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard. This paper makes an attempt to assess landslide susceptibility in the Shimla district of the northwest Indian Himalayan region. It examined the effectiveness of random forest (RF), multilayer perceptron (MLP), sequential minimal optimization regression (SMOreg) and bagging ensemble (B-RF, B-SMOreg, B-MLP) models. A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training (70%) and testing (30%) datasets. The site-specific influencing factors were selected by employing a multicollinearity test. The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method. The effectiveness of the machine learning models was verified through performance assessors. The landslide susceptibility maps were validated by the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall and F1-score. The key performance metrics and map validation demonstrated that the B-RF model (correlation coefficient: 0.988, mean absolute error: 0.010, root mean square error: 0.058, relative absolute error: 2.964, ROC-AUC: 0.947, accuracy: 0.778, precision: 0.819, recall: 0.917 and F1-score: 0.865) outperformed the single classifiers and other bagging ensemble models for landslide susceptibility. The results show that the largest area falls under the very high susceptibility zone (33.87%), followed by the low (27.30%), high (20.68%) and moderate (18.16%) susceptibility zones. The factors, namely average annual rainfall, slope, lithology, soil texture and earthquake magnitude, have been identified as the influencing factors for very high landslide susceptibility. Soil texture, lineament density and elevation have been attributed to high and moderate susceptibility. Thus, the study calls for devising suitable landslide mitigation measures in the study area. Structural measures, an immediate response system, community participation and coordination among stakeholders may help lessen the detrimental impact of landslides. The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
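The bagging-of-random-forest (B-RF) construction and its ROC-AUC validation can be illustrated with the sketch below; the conditioning-factor table is replaced by synthetic data, and the ensemble sizes are placeholders rather than the study's settings.

# Sketch: bagging ensemble around a random forest (B-RF) for landslide susceptibility, validated by ROC-AUC.
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.datasets import make_classification

# Placeholder stand-in for the conditioning-factor table (rainfall, slope, lithology, soil texture, ...).
X, y = make_classification(n_samples=1052, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

b_rf = BaggingClassifier(RandomForestClassifier(n_estimators=100, random_state=0),
                         n_estimators=10, random_state=0).fit(X_tr, y_tr)
print("ROC-AUC:", roc_auc_score(y_te, b_rf.predict_proba(X_te)[:, 1]))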
Social media (SM) based surveillance systems, combined with machine learning (ML) and deep learning (DL) techniques, have shown potential for early detection of epidemic outbreaks. This review discusses the current state of SM-based surveillance methods for early epidemic outbreaks and the role of ML and DL in enhancing their performance. Every year, SM generates a large amount of data related to epidemic outbreaks, particularly Twitter data. This paper outlines the theme of SM analysis for tracking health-related issues and detecting epidemic outbreaks in SM, along with the ML and DL techniques that have been configured for the detection of epidemic outbreaks. DL has emerged as a promising ML technique that adapts multiple layers of representations or features of the data and yields state-of-the-art extrapolation results. In recent years, along with the success of ML and DL in many other application domains, both ML and DL have also become popular in SM analysis. This paper aims to provide an overview of epidemic outbreaks in SM and then outlines a comprehensive analysis of ML and DL approaches and their existing applications in SM analysis. Finally, this review serves the purpose of offering suggestions, ideas, and proposals, along with highlighting the ongoing challenges in the field of early outbreak detection that still need to be addressed.
Machine learning (ML) is well suited for the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small and medium calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this we utilise two datasets, each consisting of approximately 1000 samples, collated from public release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, for applications such as lethality/survivability analysis, such capability is required. To circumvent this, we implement expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing an ability to accurately fit experimental data when it is available and then revert to the physics-based model when not. The resulting models demonstrate high levels of predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions significantly more diverse than that achievable from any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide some general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
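One concrete way to enforce the expert-knowledge monotonicity mentioned above is XGBoost's monotone_constraints parameter. The sketch below uses synthetic data and placeholder features (impact velocity, projectile mass, target strength); it illustrates the constraint mechanism only, not the paper's datasets or models.

# Sketch: enforcing monotonicity (e.g. penetration depth non-decreasing in impact velocity and
# projectile mass, non-increasing in target strength) via XGBoost's monotone_constraints.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(3)
# Placeholder features: [impact velocity (m/s), projectile mass (g), target strength (MPa)]
X = rng.uniform([300, 5, 200], [2000, 50, 1200], size=(800, 3))
depth = 0.02 * X[:, 0] * np.sqrt(X[:, 1]) / X[:, 2] + rng.normal(0, 0.05, 800)   # synthetic depth

model = XGBRegressor(
    n_estimators=400,
    monotone_constraints="(1,1,-1)",   # +1: non-decreasing, -1: non-increasing, per feature
)
model.fit(X, depth)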
Sleep posture surveillance is crucial for patient comfort, yet current systems face difficulties in providing comprehensive studies due to the obstruction caused by blankets. Precise posture assessment remains challenging because of the complex nature of the human body and variations in sleep patterns. Consequently, this study introduces an innovative method utilizing RGB and thermal cameras for comprehensive posture classification, thereby enhancing the analysis of body position and comfort. This method begins by capturing a dataset of sleep postures in the form of videos using RGB and thermal cameras, which depict six commonly adopted postures: supine, left log, right log, prone head, prone left, and prone right. The study involves 10 participants under two conditions: with and without blankets. Initially, the database is normalized into video frames. The subsequent step entails training fine-tuned, pretrained Visual Geometry Group (VGG16) and ResNet50 models. In the third phase, the extracted features are utilized for classification. The fourth step of the proposed approach employs a serial fusion technique based on the normal distribution to merge the vectors derived from both the RGB and thermal datasets. Finally, the fused vectors are passed to machine learning classifiers for final classification. The dataset of human sleep postures used in this study's experiments achieved a 96.7% accuracy rate using the Quadratic Support Vector Machine (QSVM) without the blanket. Moreover, the Linear SVM, when utilized with a blanket, attained an accuracy of 96%. When normal-distribution serial fusion was applied to the blanket features, it resulted in a remarkable average accuracy of 99%.
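A plausible reading of the normal-distribution serial fusion step is z-score normalisation of each modality's feature vector followed by concatenation, with a quadratic-kernel SVM standing in for the QSVM. The sketch below uses random placeholder features and is not the authors' pipeline.

# Sketch: normal-distribution (z-score) serial fusion of RGB and thermal feature vectors,
# followed by a quadratic-kernel SVM as a stand-in for QSVM.
import numpy as np
from sklearn.svm import SVC

def serial_fuse(f_rgb, f_thermal):
    zr = (f_rgb - f_rgb.mean(axis=0)) / (f_rgb.std(axis=0) + 1e-8)
    zt = (f_thermal - f_thermal.mean(axis=0)) / (f_thermal.std(axis=0) + 1e-8)
    return np.hstack([zr, zt])           # serial (concatenation) fusion

rng = np.random.default_rng(4)
f_rgb, f_thermal = rng.random((600, 512)), rng.random((600, 512))   # VGG16/ResNet50 features (placeholders)
labels = rng.integers(0, 6, 600)                                    # six sleep-posture classes

qsvm = SVC(kernel="poly", degree=2).fit(serial_fuse(f_rgb, f_thermal), labels)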
Machine learning combined with density functional theory (DFT) enables rapid exploration of catalyst descriptor spaces such as adsorption energy, facilitating rapid and effective catalyst screening. However, there is still a lack of models for predicting adsorption energies on oxides, due to the complexity of elemental species and the ambiguous coordination environment. This work proposes an active learning workflow (LeNN) founded on local electronic transfer features (e) and the principle of coordinate rotation invariance. By accurately characterizing the electron transfer to adsorption site atoms and their surrounding geometric structures, LeNN mitigates abrupt feature changes due to different element types and clarifies coordination environments. As a result, it enables the prediction of *H adsorption energy on binary oxide surfaces with a mean absolute error (MAE) below 0.18 eV. Moreover, we incorporate local coverage (θ_l) and leverage a neural network ensemble to establish an active learning workflow, attaining a prediction MAE below 0.2 eV for 5419 multi-*H adsorption structures. These findings validate the universality and capability of the proposed features in predicting *H adsorption energy on binary oxide surfaces.
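The ensemble-driven active learning loop can be illustrated with the acquisition step below: an ensemble of small regressors is trained on the labelled structures, and the candidates with the largest ensemble disagreement are selected for new DFT calculations. The descriptors (e, θ_l) are abstracted into a generic feature matrix; ensemble size, network width, and batch size are placeholders.

# Sketch: one acquisition step of an active-learning loop driven by neural-network-ensemble disagreement.
import numpy as np
from sklearn.neural_network import MLPRegressor

def select_next_batch(X_labeled, y_labeled, X_pool, n_models=5, batch=20):
    preds = []
    for seed in range(n_models):
        m = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=seed)
        preds.append(m.fit(X_labeled, y_labeled).predict(X_pool))
    std = np.std(preds, axis=0)          # ensemble disagreement per candidate adsorption structure
    return np.argsort(std)[-batch:]      # indices of candidates to label with new DFT calculations

# The selected structures would then be computed with DFT, added to the labelled set, and the loop repeated.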
Additive manufacturing technology is highly regarded due to its advantages, such as high precision and the ability to address complex geometric challenges. However, the development of additive manufacturing processes is constrained by issues like unclear fundamental principles, complex experimental cycles, and high costs. Machine learning, as a novel artificial intelligence technology, has the potential to deeply engage in the development of additive manufacturing processes, assisting engineers in learning and developing new techniques. This paper provides a comprehensive overview of the research and applications of machine learning in the field of additive manufacturing, particularly in model design and process development. Firstly, it introduces the background and significance of machine learning-assisted design in additive manufacturing processes. It then delves further into the application of machine learning in additive manufacturing, focusing on model design and process guidance. Finally, it concludes by summarizing and forecasting the development trends of machine learning technology in the field of additive manufacturing.