Journal Articles: 6,486 articles found
1. Use of machine learning models for the prognostication of liver transplantation: A systematic review (Cited by: 1)
Authors: Gidion Chongo, Jonathan Soldera. World Journal of Transplantation, 2024, Issue 1, pp. 164-188 (25 pages).
BACKGROUND: Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models. AIM: To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems. METHODS: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, or those exhibiting evident methodological flaws. RESULTS: Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capabilities for 90-d mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION: This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
Keywords: Liver transplantation machine learning models PROGNOSTICATION Allograft allocation Artificial intelligence
2. Social Media-Based Surveillance Systems for Health Informatics Using Machine and Deep Learning Techniques: A Comprehensive Review and Open Challenges
Authors: Samina Amin, Muhammad Ali Zeb, Hani Alshahrani, Mohammed Hamdi, Mohammad Alsulami, Asadullah Shaikh. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 5, pp. 1167-1202 (36 pages).
Social media (SM) based surveillance systems, combined with machine learning (ML) and deep learning (DL) techniques, have shown potential for early detection of epidemic outbreaks. This review discusses the current state of SM-based surveillance methods for early epidemic outbreaks and the role of ML and DL in enhancing their performance. Every year, a large amount of data related to epidemic outbreaks, particularly Twitter data, is generated on SM. This paper outlines the theme of SM analysis for tracking health-related issues and detecting epidemic outbreaks in SM, along with the ML and DL techniques that have been configured for the detection of epidemic outbreaks. DL has emerged as a promising ML technique that adapts multiple layers of representations or features of the data and yields state-of-the-art extrapolation results. In recent years, along with the success of ML and DL in many other application domains, both ML and DL are also popularly used in SM analysis. This paper aims to provide an overview of epidemic outbreaks in SM and then outlines a comprehensive analysis of ML and DL approaches and their existing applications in SM analysis. Finally, this review serves the purpose of offering suggestions, ideas, and proposals, along with highlighting the ongoing challenges in the field of early outbreak detection that still need to be addressed.
Keywords: Social media EPIDEMIC machine learning deep learning health informatics PANDEMIC
3. Intelligent Design of High Strength and High Conductivity Copper Alloys Using Machine Learning Assisted by Genetic Algorithm
Authors: Parth Khandelwal, Harshit, Indranil Manna. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 1727-1755 (29 pages).
Metallic alloys for a given application are usually designed to achieve the desired properties by devising experiments based on experience, thermodynamic and kinetic principles, and various modeling and simulation exercises. However, the influence of process parameters and material properties is often non-linear and non-colligative. In recent years, machine learning (ML) has emerged as a promising tool to deal with the complex interrelation between composition, properties, and process parameters to facilitate accelerated discovery and development of new alloys and functionalities. In this study, we adopt an ML-based approach, coupled with genetic algorithm (GA) principles, to design novel copper alloys for achieving seemingly contradictory targets of high strength and high electrical conductivity. Initially, we establish a correlation between the alloy composition (binary to multi-component) and the target properties, namely, electrical conductivity and mechanical strength. Catboost, an ML model coupled with GA, was used for this task. The accuracy of the model was above 93.5%. Next, for obtaining the optimized compositions the outputs from the initial model were refined by combining the concepts of data augmentation and Pareto front. Finally, the ultimate objective of predicting the target composition that would deliver the desired range of properties was achieved by developing an advanced ML model through data segregation and data augmentation. To examine the reliability of this model, results were rigorously compared and verified using several independent data reported in the literature. This comparison substantiates that the results predicted by our model regarding the variation of conductivity and evolution of microstructure and mechanical properties with composition are in good agreement with the reports published in the literature.
Keywords: machine learning genetic algorithm SOLID-SOLUTION precipitation strengthening pareto front data augmentation
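To make the workflow above concrete, here is a minimal sketch, assuming a synthetic composition-to-property dataset: one CatBoost regressor per target property followed by a simple Pareto-front filter over candidate compositions. The element list, data, hyperparameters, and the omitted GA coupling are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch: composition -> property models plus a Pareto filter (illustrative only).
import numpy as np
from catboost import CatBoostRegressor

# Hypothetical training data: alloying-element fractions and measured targets (synthetic).
rng = np.random.default_rng(0)
X = rng.random((200, 4))                                   # e.g. assumed [Cr, Zr, Mg, Ag] contents
y_strength = 300 + 400 * X[:, 0] + 50 * rng.random(200)    # MPa (synthetic)
y_conduct = 95 - 60 * X[:, 0] - 20 * X[:, 1] + rng.random(200)  # %IACS (synthetic)

strength_model = CatBoostRegressor(iterations=300, depth=6, verbose=0).fit(X, y_strength)
conduct_model = CatBoostRegressor(iterations=300, depth=6, verbose=0).fit(X, y_conduct)

# Score a grid of candidate compositions and keep only the Pareto-optimal ones.
candidates = rng.random((1000, 4))
s = strength_model.predict(candidates)
c = conduct_model.predict(candidates)

def pareto_mask(a, b):
    """True where no other point is strictly better in both objectives (maximise a and b)."""
    keep = np.ones(len(a), dtype=bool)
    for i in range(len(a)):
        keep[i] = not np.any((a > a[i]) & (b > b[i]))
    return keep

front = candidates[pareto_mask(s, c)]
print(f"{len(front)} Pareto-optimal candidate compositions")
```

In the study, a GA refines this search and data augmentation enlarges the training set; the filter above only shows how the strength-conductivity trade-off can be screened once the surrogate models exist.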
4. Enhancing Secure Development in Globally Distributed Software Product Lines: A Machine Learning-Powered Framework for Cyber-Resilient Ecosystems
Authors: Marya Iqbal, Yaser Hafeez, Nabil Almashfi, Amjad Alsirhani, Faeiz Alserhani, Sadia Ali, Mamoona Humayun, Muhammad Jamal. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 5031-5049 (19 pages).
Embracing software product lines (SPLs) is pivotal in the dynamic landscape of contemporary software development. However, the flexibility and global distribution inherent in modern systems pose significant challenges to managing SPL variability, underscoring the critical importance of robust cybersecurity measures. This paper advocates for leveraging machine learning (ML) to address variability management issues and fortify the security of SPL. In the context of the broader special issue theme on innovative cybersecurity approaches, our proposed ML-based framework offers an interdisciplinary perspective, blending insights from computing, social sciences, and business. Specifically, it employs ML for demand analysis, dynamic feature extraction, and enhanced feature selection in distributed settings, contributing to cyber-resilient ecosystems. Our experiments demonstrate the framework's superiority, emphasizing its potential to boost productivity and security in SPLs. As digital threats evolve, this research catalyzes interdisciplinary collaborations, aligning with the special issue's goal of breaking down academic barriers to strengthen digital ecosystems against sophisticated attacks while upholding ethics, privacy, and human values.
Keywords: machine Learning variability management CYBERSECURITY digital ecosystems cyber-resilience
5. A Hybrid Machine Learning Approach for Improvised QoE in Video Services over 5G Wireless Networks
Authors: K. B. Ajeyprasaath, P. Vetrivelan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3195-3213 (19 pages).
Video streaming applications have grown considerably in recent years. As a result, this becomes one of the most significant contributors to global internet traffic. According to recent studies, the telecommunications industry loses millions of dollars due to poor video Quality of Experience (QoE) for users. Among the standard proposals for standardizing the quality of video streaming over internet service providers (ISPs) is the Mean Opinion Score (MOS). However, the accurate finding of QoE by MOS is subjective and laborious, and it varies depending on the user. A fully automated data analytics framework is required to reduce the inter-operator variability characteristic in QoE assessment. This work addresses this concern by suggesting a novel hybrid XGBStackQoE analytical model using a two-level layering technique. Level one combines multiple Machine Learning (ML) models via a layer one Hybrid XGBStackQoE-model. Individual ML models at level one are trained using the entire training data set. The level two Hybrid XGBStackQoE-Model is fitted using the outputs (meta-features) of the layer one ML models. The proposed model outperformed the conventional models, with an accuracy improvement of 4 to 5 percent, which is still higher than the current traditional models. The proposed framework could significantly improve video QoE accuracy.
Keywords: Hybrid XGBStackQoE-model machine learning MOS performance metrics QOE 5G video services
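The two-level layering described above can be illustrated with a generic stacking ensemble. The sketch below is a stand-in built on assumptions (synthetic data, arbitrary base learners, XGBoost as the level-two meta-model), not the paper's XGBStackQoE implementation.

```python
# Sketch of a two-level stacking ensemble in the spirit of the described approach.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Hypothetical QoE dataset: network KPIs mapped to a MOS-like class label (synthetic).
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Level one: individual ML models trained on the full training set.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
]

# Level two: meta-model fitted on the level-one outputs (meta-features).
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=XGBClassifier(n_estimators=200, eval_metric="mlogloss"),
    stack_method="predict_proba",
)
stack.fit(X_tr, y_tr)
print("Stacked accuracy:", stack.score(X_te, y_te))
```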
6. Computing large deviation prefactors of stochastic dynamical systems based on machine learning
Authors: 李扬, 袁胜兰, 陆凌宏志, 刘先斌. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 364-373 (10 pages).
We present a large deviation theory that characterizes the exponential estimate for rare events in stochastic dynamical systems in the limit of weak noise. We aim to consider a next-to-leading-order approximation for more accurate calculation of the mean exit time by computing large deviation prefactors with the aid of machine learning. More specifically, we design a neural network framework to compute the quasipotential, most probable paths and prefactors based on the orthogonal decomposition of a vector field. We corroborate the higher effectiveness and accuracy of our algorithm with two toy models. Numerical experiments demonstrate its powerful functionality in exploring the internal mechanism of rare events triggered by weak random fluctuations.
Keywords: machine learning large deviation prefactors stochastic dynamical systems rare events
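For orientation, the quantities named in the abstract (quasipotential, most probable path, prefactor) are commonly set up through an orthogonal decomposition of the drift. The generic weak-noise form below uses assumed notation and normalization conventions, which may differ from the paper's.

```latex
% Weak-noise SDE and orthogonal decomposition of the drift (generic textbook form):
\[
  \mathrm{d}X_t = b(X_t)\,\mathrm{d}t + \sqrt{\varepsilon}\,\mathrm{d}W_t, \qquad
  b(x) = -\nabla U(x) + \ell(x), \qquad \nabla U(x)\cdot \ell(x) = 0,
\]
% where U plays the role of the quasipotential (up to convention-dependent constant
% factors) and the most probable exit path climbs against -\nabla U. In the small-noise
% limit the mean exit time from a basin obeys an estimate of Eyring--Kramers type,
\[
  \mathbb{E}[\tau_{\mathrm{exit}}] \;\simeq\; C\,\exp\!\bigl(\Delta U/\varepsilon\bigr),
  \qquad \varepsilon \to 0,
\]
% where \Delta U is the quasipotential barrier; the "next-to-leading-order" correction
% referred to in the abstract is the computation of the prefactor C.
```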
7. Reconstruction of poloidal magnetic field profiles in field-reversed configurations with machine learning in laser-driven ion-beam trace probe
Authors: 徐栩涛, 徐田超, 肖池阶, 张祖煜, 何任川, 袁瑞鑫, 许平. Plasma Science and Technology (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 83-87 (5 pages).
The diagnostic of the poloidal magnetic field (B_p) in field-reversed configuration (FRC), promising for achieving efficient plasma confinement due to its high β, is a huge challenge because B_p is small and reverses around the core region. The laser-driven ion-beam trace probe (LITP) has been proven to diagnose the B_p profile in FRCs recently, whereas the existing iterative reconstruction approach cannot handle the measurement errors well. In this work, the machine learning approach, a fast-growing and powerful technology in automation and control, is applied to B_p reconstruction in FRCs based on LITP principles and it has a better performance than the previous approach. The machine learning approach achieves a more accurate reconstruction of the B_p profile when 20% detector errors are considered, 15% B_p fluctuation is introduced and the size of the detector is remarkably reduced. Therefore, machine learning could be a powerful support for LITP diagnosis of the magnetic field in magnetic confinement fusion devices.
Keywords: FRC LITP poloidal magnetic field diagnostics machine learning
8. A Systematic Literature Review of Machine Learning and Deep Learning Approaches for Spectral Image Classification in Agricultural Applications Using Aerial Photography
Authors: Usman Khan, Muhammad Khalid Khan, Muhammad Ayub Latif, Muhammad Naveed, Muhammad Mansoor Alam, Salman A. Khan, Mazliham Mohd Su’ud. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 2967-3000 (34 pages).
Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, like aerial photography through Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications for classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
Keywords: machine learning deep learning unmanned aerial vehicles multi-spectral images image recognition object detection hyperspectral images aerial photography
9. Feature extraction for machine learning-based intrusion detection in IoT networks
Authors: Mohanad Sarhan, Siamak Layeghy, Nour Moustafa, Marcus Gallagher, Marius Portmann. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 1, pp. 205-216 (12 pages).
A large number of network security breaches in IoT networks have demonstrated the unreliability of current Network Intrusion Detection Systems (NIDSs). Consequently, network interruptions and loss of sensitive data have occurred, which led to an active research area for improving NIDS technologies. In an analysis of related works, it was observed that most researchers aim to obtain better classification results by using a set of untried combinations of Feature Reduction (FR) and Machine Learning (ML) techniques on NIDS datasets. However, these datasets are different in feature sets, attack types, and network design. Therefore, this paper aims to discover whether these techniques can be generalised across various datasets. Six ML models are utilised: a Deep Feed Forward (DFF), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Decision Tree (DT), Logistic Regression (LR), and Naive Bayes (NB). Three Feature Extraction (FE) algorithms, Principal Component Analysis (PCA), Auto-encoder (AE), and Linear Discriminant Analysis (LDA), are evaluated using three benchmark datasets: UNSW-NB15, ToN-IoT and CSE-CIC-IDS2018. Although PCA and AE algorithms have been widely used, the determination of their optimal number of extracted dimensions has been overlooked. The results indicate that no clear FE method or ML model can achieve the best scores for all datasets. The optimal number of extracted dimensions has been identified for each dataset, and LDA degrades the performance of the ML models on two datasets. The variance is used to analyse the extracted dimensions of LDA and PCA. Finally, this paper concludes that the choice of datasets significantly alters the performance of the applied techniques. We believe that a universal (benchmark) feature set is needed to facilitate further advancement and progress of research in this field.
Keywords: Feature extraction machine learning network intrusion detection system IOT
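A minimal sketch of the kind of experiment the abstract refers to, scanning the number of PCA-extracted dimensions ahead of a classifier; the synthetic data and parameter choices are assumptions and do not reproduce the benchmark datasets.

```python
# Sketch: search the number of PCA components before a classifier, echoing the point
# that the optimal number of extracted dimensions is dataset-dependent. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for a flow-feature NIDS dataset; not UNSW-NB15, ToN-IoT or CSE-CIC-IDS2018.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=15, random_state=1)

for n_dim in (2, 5, 10, 20, 30):
    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("pca", PCA(n_components=n_dim)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"PCA dims = {n_dim:2d}  ->  CV accuracy = {acc:.3f}")
```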
10. Electromagnetic Performance Analysis of Variable Flux Memory Machines with Series-magnetic-circuit and Different Rotor Topologies
Authors: Qiang Wei, Z. Q. Zhu, Yan Jia, Jianghua Feng, Shuying Guo, Yifeng Li, Shouzhi Feng. CES Transactions on Electrical Machines and Systems (EI, CSCD), 2024, Issue 1, pp. 3-11 (9 pages).
In this paper, the electromagnetic performance of variable flux memory (VFM) machines with series-magnetic-circuit is investigated and compared for different rotor topologies. Based on a V-type VFM machine, five topologies with different interior permanent magnet (IPM) arrangements are evolved and optimized under the same constraints. Based on the two-dimensional (2-D) finite element (FE) method, their electromagnetic performance at magnetization and demagnetization states is evaluated. It reveals that the iron bridge and rotor lamination region between constant PM (CPM) and variable PM (VPM) play an important role in torque density and flux regulation (FR) capabilities. Besides, the global efficiency can be improved in VFM machines by adjusting the magnetization state (MS) under different operating conditions.
Keywords: Memory machine Permanent magnet Rotor topologies Series magnetic circuit Variable flux
11. Automated Machine Learning Algorithm Using Recurrent Neural Network to Perform Long-Term Time Series Forecasting
Authors: Ying Su, Morgan C. Wang, Shuai Liu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3529-3549 (21 pages).
Long-term time series forecasting stands as a crucial research domain within the realm of automated machine learning (AutoML). At present, forecasting, whether rooted in machine learning or statistical learning, typically relies on expert input and necessitates substantial manual involvement. This manual effort spans model development, feature engineering, hyper-parameter tuning, and the intricate construction of time series models. The complexity of these tasks renders complete automation unfeasible, as they inherently demand human intervention at multiple junctures. To surmount these challenges, this article proposes leveraging Long Short-Term Memory, a variant of Recurrent Neural Networks, harnessing memory cells and gating mechanisms to facilitate long-term time series prediction. However, forecasting accuracy of particular neural network and traditional models can degrade significantly when addressing long-term time-series tasks. Therefore, our research demonstrates that this innovative approach outperforms the traditional Autoregressive Integrated Moving Average (ARIMA) method in forecasting long-term univariate time series. ARIMA is a high-quality and competitive model in time series prediction, and yet it requires significant preprocessing efforts. Using multiple accuracy metrics, we have evaluated both ARIMA and the proposed method on simulated time-series data and real data, in both the short and long term. Furthermore, our findings indicate its superiority over alternative network architectures, including Fully Connected Neural Networks, Convolutional Neural Networks, and Non-pooling Convolutional Neural Networks. Our AutoML approach enables non-professionals to attain highly accurate and effective time series forecasting, and can be widely applied to various domains, particularly in business and finance.
Keywords: Automated machine learning autoregressive integrated moving average neural networks time series analysis
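A minimal sketch of a univariate LSTM one-step forecaster of the kind described, assuming a synthetic series, an arbitrary window length, and illustrative layer sizes rather than the authors' AutoML pipeline.

```python
# Sketch: univariate LSTM forecaster built from sliding windows (illustrative settings).
import numpy as np
import tensorflow as tf

series = np.sin(np.linspace(0, 60, 600)) + 0.1 * np.random.randn(600)  # synthetic series

def make_windows(s, lag=24):
    """Turn a 1-D series into (samples, lag, 1) inputs and next-step targets."""
    X = np.stack([s[i:i + lag] for i in range(len(s) - lag)])
    y = s[lag:]
    return X[..., None], y

X, y = make_windows(series)
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1], 1)),
    tf.keras.layers.LSTM(32),     # memory cells and gating handle long-range structure
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=10, batch_size=32, verbose=0)

print("Test MSE:", model.evaluate(X[split:], y[split:], verbose=0))
```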
12. Transparent and Accountable Training Data Sharing in Decentralized Machine Learning Systems
Authors: Siwan Noh, Kyung-Hyune Rhee. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 3805-3826 (22 pages).
In Decentralized Machine Learning (DML) systems, system participants contribute their resources to assist others in developing machine learning solutions. Identifying malicious contributions in DML systems is challenging, which has led to the exploration of blockchain technology. Blockchain leverages its transparency and immutability to record the provenance and reliability of training data. However, storing massive datasets or implementing model evaluation processes on smart contracts incurs high computational costs. Additionally, current research on preventing malicious contributions in DML systems primarily focuses on protecting models from being exploited by workers who contribute incorrect or misleading data. However, less attention has been paid to the scenario where malicious requesters intentionally manipulate test data during evaluation to gain an unfair advantage. This paper proposes a transparent and accountable training data sharing method that securely shares data among potentially malicious system participants. First, we introduce a blockchain-based DML system architecture that supports secure training data sharing through the IPFS network. Second, we design a blockchain smart contract to transparently split training datasets into training and test datasets, respectively, without involving system participants. Under the system, transparent and accountable training data sharing can be achieved with attribute-based proxy re-encryption. We demonstrate the security analysis for the system, and conduct experiments on the Ethereum and IPFS platforms to show the feasibility and practicality of the system.
Keywords: Decentralized machine learning data accountability dataset sharing
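Only as an illustration of the transparent train/test splitting idea, the sketch below derives a publicly verifiable split from a shared seed, which in the described system would come from the blockchain. The smart contract, IPFS storage, and attribute-based proxy re-encryption are not modelled, and all names are hypothetical.

```python
# Sketch: a deterministic, publicly auditable train/test split derived from a shared seed.
# Anyone holding the same seed and sample identifiers can reproduce and verify the split.
import hashlib

def verifiable_split(sample_ids, public_seed: str, test_ratio: float = 0.2):
    """Assign each sample to train or test based on SHA-256 of (seed, id)."""
    threshold = int(test_ratio * 2**256)
    train, test = [], []
    for sid in sample_ids:
        digest = hashlib.sha256(f"{public_seed}:{sid}".encode()).hexdigest()
        (test if int(digest, 16) < threshold else train).append(sid)
    return train, test

# Example: the seed would be an on-chain value (e.g. a recent block hash); assumed here.
train_ids, test_ids = verifiable_split([f"cid-{i}" for i in range(1000)], public_seed="0xabc123")
print(len(train_ids), "train /", len(test_ids), "test")
```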
13. Effects of data smoothing and recurrent neural network (RNN) algorithms for real-time forecasting of tunnel boring machine (TBM) performance
Authors: Feng Shan, Xuzhen He, Danial Jahed Armaghani, Daichao Sheng. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, Issue 5, pp. 1538-1551 (14 pages).
Tunnel boring machines (TBMs) have been widely utilised in tunnel construction due to their high efficiency and reliability. Accurately predicting TBM performance can improve project time management, cost control, and risk management. This study aims to use deep learning to develop real-time models for predicting the penetration rate (PR). The models are built using data from the Changsha metro project, and their performances are evaluated using unseen data from the Zhengzhou Metro project. In one-step forecast, the predicted penetration rate follows the trend of the measured penetration rate in both training and testing. The autoregressive integrated moving average (ARIMA) model is compared with the recurrent neural network (RNN) model. The results show that univariate models, which only consider the historical penetration rate itself, perform better than multivariate models that take into account multiple geological and operational parameters (GEO and OP). Next, an RNN variant combining the time series of penetration rate with the last-step geological and operational parameters is developed, and it performs better than other models. A sensitivity analysis shows that the penetration rate is the most important parameter, while other parameters have a smaller impact on time series forecasting. It is also found that smoothed data are easier to predict with high accuracy. Nevertheless, over-simplified data can lose real characteristics in time series. In conclusion, the RNN variant can accurately predict the next-step penetration rate, and data smoothing is crucial in time series forecasting. This study provides practical guidance for TBM performance forecasting in practical engineering.
Keywords: Tunnel boring machine (TBM) Penetration rate (PR) Time series forecasting Recurrent neural network (RNN)
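A small sketch of the data-smoothing and one-step supervised framing discussed above, assuming a synthetic penetration-rate series and arbitrary smoothing and window lengths.

```python
# Sketch: rolling-mean smoothing of a penetration-rate series and a one-step supervised
# framing (past PR window -> next PR). Smoothing span and window length are illustrative.
import numpy as np
import pandas as pd

pr = pd.Series(50 + 5 * np.random.randn(500))          # stand-in PR signal (e.g. mm/min)
pr_smooth = pr.rolling(window=9, center=True, min_periods=1).mean()

LAG = 10
values = pr_smooth.to_numpy()
X = np.stack([values[i:i + LAG] for i in range(len(values) - LAG)])
y = values[LAG:]                                        # next-step targets
print("Supervised samples:", X.shape, "targets:", y.shape)
# X/y can now feed an RNN (e.g. an LSTM) for one-step forecasting; overly aggressive
# smoothing risks erasing real characteristics of the series, as the study notes.
```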
14. Smart Energy Management System Using Machine Learning
Authors: Ali Sheraz Akram, Sagheer Abbas, Muhammad Adnan Khan, Atifa Athar, Taher M. Ghazal, Hussam Al Hamadi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 959-973 (15 pages).
Energy management is an inspiring domain in the development of renewable energy sources. However, the growth of decentralized energy production is revealing an increased complexity for power grid managers, requiring more quality and reliability to regulate electricity flows and less imbalance between electricity production and demand. The major objective of an energy management system is to achieve optimum energy procurement and utilization throughout the organization, minimize energy costs without affecting production, and minimize environmental effects. Modern energy management is an essential and complex subject because of the excessive consumption in residential buildings, which necessitates energy optimization and increased user comfort. To address the issue of energy management, many researchers have developed various frameworks; while the objective of each framework was to sustain a balance between user comfort and energy consumption, this problem has not been fully solved because of how difficult it is to solve. An inclusive and Intelligent Energy Management System (IEMS) aims to provide overall energy efficiency regarding increased power generation, increased flexibility, increased renewable generation, improved energy consumption, reduced carbon dioxide emissions, improved stability, and reduced energy costs. Machine Learning (ML) is an emerging approach that may be beneficial for predicting energy efficiency in a better way with the assistance of the Internet of Energy (IoE) network. The IoE network is playing a vital role in the energy sector for collecting effective data and usage, resulting in smart resource management. In this research work, an IEMS is proposed for Smart Cities (SC) using the ML technique to better resolve the energy management problem. The proposed system minimized the energy consumption with its intelligent nature and provided better outcomes than the previous approaches in terms of 92.11% accuracy and a 7.89% miss-rate.
Keywords: Intelligent energy management system smart cities machine learning
15. Label Recovery and Trajectory Designable Network for Transfer Fault Diagnosis of Machines With Incorrect Annotation
Authors: Bin Yang, Yaguo Lei, Xiang Li, Naipeng Li, Asoke K. Nandi. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 4, pp. 932-945 (14 pages).
The success of deep transfer learning in fault diagnosis is attributed to the collection of high-quality labeled data from the source domain. However, in engineering scenarios, achieving such high-quality label annotation is difficult and expensive. Incorrect label annotation produces two negative effects: 1) the complex decision boundary of diagnosis models lowers the generalization performance on the target domain, and 2) the distribution of target domain samples becomes misaligned with the false-labeled samples. To overcome these negative effects, this article proposes a solution called the label recovery and trajectory designable network (LRTDN). LRTDN consists of three parts. First, a residual network with dual classifiers is used to learn features from cross-domain samples. Second, an annotation check module is constructed to generate a label anomaly indicator that could modify the abnormal labels of false-labeled samples in the source domain. With the training of relabeled samples, the complexity of the diagnosis model is reduced via semi-supervised learning. Third, the adaptation trajectories are designed for sample distributions across domains. This ensures that the target domain samples are only adapted with the pure-labeled samples. The LRTDN is verified by two case studies, in which the diagnosis knowledge of bearings is transferred across different working conditions as well as different yet related machines. The results show that LRTDN offers a high diagnosis accuracy even in the presence of incorrect annotation.
Keywords: Deep transfer learning domain adaptation incorrect label annotation intelligent fault diagnosis rotating machines
16. Machine learning and human-machine trust in healthcare: A systematic survey
Authors: Han Lin, Jiatong Han, Pingping Wu, Jiangyan Wang, Juan Tu, Hao Tang, Liuning Zhu. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 2, pp. 286-302 (17 pages).
As human-machine interaction (HMI) in healthcare continues to evolve, the issue of trust in HMI in healthcare has been raised and explored. It is critical for the development and safety of healthcare that humans have proper trust in medical machines. Intelligent machines that have applied machine learning (ML) technologies continue to penetrate deeper into the medical environment, which also places higher demands on intelligent healthcare. In order to make machines play a role in HMI in healthcare more effectively and make human-machine cooperation more harmonious, the authors need to build good human-machine trust (HMT) in healthcare. This article provides a systematic overview of the prominent research on ML and HMT in healthcare. In addition, this study explores and analyses ML and three important factors that influence HMT in healthcare, and then proposes a HMT model in healthcare. Finally, general trends are summarised and issues to consider addressing in future research on HMT in healthcare are identified.
Keywords: human-machine interaction machine learning trust
17. Security Monitoring and Management for the Network Services in the Orchestration of SDN-NFV Environment Using Machine Learning Techniques
Authors: Nasser Alshammari, Shumaila Shahzadi, Saad Awadh Alanazi, Shahid Naseem, Muhammad Anwar, Madallah Alruwaili, Muhammad Rizwan Abid, Omar Alruwaili, Ahmed Alsayat, Fahad Ahmad. Computer Systems Science & Engineering, 2024, Issue 2, pp. 363-394 (32 pages).
Software Defined Network (SDN) and Network Function Virtualization (NFV) technology promote several benefits to network operators, including reduced maintenance costs, increased network operational performance, a simplified network lifecycle, and policy management. Network vulnerabilities try to modify services provided by Network Function Virtualization MANagement and Orchestration (NFV MANO), and malicious attacks in different scenarios disrupt the NFV Orchestrator (NFVO) and Virtualized Infrastructure Manager (VIM) lifecycle management related to network services or individual Virtualized Network Functions (VNF). This paper proposes an anomaly detection mechanism that monitors threats in NFV MANO and manages promptly and adaptively to implement and handle security functions in order to enhance the quality of experience for end users. An anomaly detector investigates these identified risks and provides secure network services. It enables virtual network security functions and identifies anomalies in Kubernetes (a cloud-based platform). For training and testing purposes of the proposed approach, an intrusion-containing dataset is used that holds multiple malicious activities like Smurf, Neptune, Teardrop, Pod, Land, IPsweep, etc., categorized as Probing (Prob), Denial of Service (DoS), User to Root (U2R), and Remote to User (R2L) attacks. An anomaly detector is anticipated with the capabilities of a Machine Learning (ML) technique, making use of supervised learning techniques like Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes (NB), and Extreme Gradient Boosting (XGBoost). The proposed framework has been evaluated by deploying the identified ML algorithm on a Jupyter notebook in Kubeflow to simulate Kubernetes for validation purposes. The RF classifier has shown better outcomes (99.90% accuracy) than other classifiers in detecting anomalies/intrusions in the containerized environment.
Keywords: Software defined network network function virtualization network function virtualization management and orchestration virtual infrastructure manager virtual network function Kubernetes Kubectl artificial intelligence machine learning
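A minimal sketch comparing the supervised classifiers named in the abstract on a labelled multi-class intrusion dataset; synthetic data stands in for the actual Prob/DoS/U2R/R2L records, and hyperparameters are assumptions.

```python
# Sketch: compare the supervised classifiers named in the abstract on a labelled
# intrusion dataset. Synthetic 4-class data stands in for Prob/DoS/U2R/R2L records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC
from xgboost import XGBClassifier

X, y = make_classification(n_samples=8000, n_features=30, n_informative=12,
                           n_classes=4, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=7)

models = {
    "LR": LogisticRegression(max_iter=2000),
    "SVM": LinearSVC(max_iter=5000),
    "RF": RandomForestClassifier(n_estimators=300, random_state=7),
    "NB": GaussianNB(),
    "XGBoost": XGBClassifier(n_estimators=300, eval_metric="mlogloss"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:8s} accuracy = {model.score(X_te, y_te):.4f}")
```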
18. An Improved Enterprise Resource Planning System Using Machine Learning Techniques
Authors: Ahmed Youssri Zakaria, Elsayed Abdelbadea, Atef Raslan, Tarek Ali, Mervat Gheith, Al-Sayed Khater, Essam A. Amin. Journal of Software Engineering and Applications, 2024, Issue 5, pp. 203-213 (11 pages).
Traditional Enterprise Resource Planning (ERP) systems with relational databases take weeks to deliver insights that advanced analytics can provide instantly. The most accurate information is provided to companies to make the best decisions through advanced analytics that examine the past and the future and capture information about the present. Integrating machine learning (ML) into financial ERP systems offers several benefits, including increased accuracy, efficiency, and cost savings. ERP systems are also crucial in overseeing different aspects of Human Capital Management (HCM) in organizations. The performance of the staff draws the interest of the management, in particular to guarantee that the proper employees are assigned to the right task at the suitable moment, to train and qualify them, and to build evaluation systems to follow up their performance in an attempt to retain the potential talents of workers. Predicting employee salaries correctly is also necessary for the efficient distribution of resources, retaining talent, and ensuring the success of the organization as a whole. Conventional ERP system salary forecasting methods typically use static reports that only show the system's current state, without analyzing employee data or providing recommendations. We designed and implemented a prototype that applies ML algorithms to Oracle EBS data to enhance employee evaluation using real-time data directly from the ERP system. Based on measurements of accuracy, the Random Forest algorithm enhanced the performance of this system. This model offers an accuracy of 90% on the balanced dataset.
Keywords: ERP HCM machine Learning Employee Performance Pythonista Pythoneer
19. Machine Learning Empowered Security and Privacy Architecture for IoT Networks with the Integration of Blockchain
Authors: Sohaib Latif, M. Saad Bin Ilyas, Azhar Imran, Hamad Ali Abosaq, Abdulaziz Alzubaidi, Vincent Karovic Jr. Intelligent Automation & Soft Computing, 2024, Issue 2, pp. 353-379 (27 pages).
The Internet of Things (IoT) is growing rapidly and impacting almost every aspect of our lives, from wearables and healthcare to security, traffic management, and fleet management systems. This has generated massive volumes of data, and security and data privacy risks are increasing with the advancement of technology and network connections. Traditional access control solutions are inadequate for establishing access control in IoT systems to provide data protection, owing to their vulnerability to a single point of failure. Additionally, conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices. Previous machine learning approaches were also unable to detect denial-of-service (DoS) attacks. This study introduced a novel decentralized and secure framework with blockchain integration. To avoid a single point of failure, an accredited access control scheme is incorporated, combining blockchain with local peers to record each transaction and verify the signature for access. Blockchain-based attribute-based cryptography is implemented to protect data storage privacy by generating threshold parameters, managing keys, and revoking users on the blockchain. An innovative contract-based DoS attack mitigation method is also incorporated to effectively validate devices with intelligent contracts as trusted or untrusted, preventing the server from becoming overwhelmed. The proposed framework effectively controls access, safeguards data privacy, and reduces the risk of cyberattacks. The results depict that the suggested framework outperforms prior results in terms of accuracy, precision, sensitivity, recall, and F-measure at 96.9%, 98.43%, 98.8%, 98.43%, and 98.4%, respectively.
Keywords: machine learning internet of things blockchain data privacy SECURITY Industry 4.0
20. Machine Learning Models for Heterogenous Network Security Anomaly Detection
Authors: Mercy Diligence Ogah, Joe Essien, Martin Ogharandukun, Monday Abdullahi. Journal of Computer and Communications, 2024, Issue 6, pp. 38-58 (21 pages).
The increasing amount and intricacy of network traffic in the modern digital era have worsened the difficulty of identifying abnormal behaviours that may indicate potential security breaches or operational interruptions. Conventional detection approaches face challenges in keeping up with the ever-changing strategies of cyber-attacks, resulting in heightened susceptibility and significant harm to network infrastructures. In order to tackle this urgent issue, this project focused on developing an effective anomaly detection system that utilizes Machine Learning technology. The suggested model utilizes contemporary machine learning algorithms and frameworks to autonomously detect deviations from typical network behaviour. It promptly identifies anomalous activities that may indicate security breaches or performance difficulties. The solution entails a multi-faceted approach encompassing data collection, preprocessing, feature engineering, model training, and evaluation. By utilizing machine learning methods, the model is trained on a wide range of datasets that include both regular and abnormal network traffic patterns. This training ensures that the model can adapt to numerous scenarios. The main priority is to ensure that the system is functional and efficient, with a particular emphasis on reducing false positives to avoid unwanted alerts. Additionally, efforts are directed on improving anomaly detection accuracy so that the model can consistently distinguish between potentially harmful and benign activity. This project aims to greatly strengthen network security by addressing emerging cyber threats and improving resilience and reliability.
Keywords: Cyber-Security network Anomaly Detection machine Learning Random Forest Decision Tree Gaussian Naive Bayes