Journal Articles
1,357 articles found
1. Dynamic Economic Scheduling with Self-Adaptive Uncertainty in Distribution Network Based on Deep Reinforcement Learning
Authors: Guanfu Wang, Yudie Sun, Jinling Li, Yu Jiang, Chunhui Li, Huanan Yu, He Wang, Shiqiang Li. Energy Engineering (EI), 2024, No. 6, pp. 1671-1695.
Traditional optimal scheduling methods rely on accurate physical models and parameter settings, making it difficult for them to adapt to the uncertainty of source and load, and they cannot make dynamic decisions continuously. This paper proposes a dynamic economic scheduling method for distribution networks based on deep reinforcement learning. First, the economic scheduling model of the new-energy distribution network is established considering the action characteristics of micro-gas turbines, a dynamic scheduling model based on deep reinforcement learning is constructed for the distribution network system with a high proportion of new energy, and the Markov decision process of the model is defined. Second, to handle the changing characteristics of source-load uncertainty, agents are trained interactively with the distribution network in a data-driven manner. Then, through the proximal policy optimization algorithm, the agents adaptively learn the scheduling strategy and realize dynamic scheduling decisions for the new-energy distribution network system. Finally, the feasibility and superiority of the proposed method are verified on an improved IEEE 33-node simulation system.
Keywords: self-adaptive; uncertainty of sources and load; deep reinforcement learning; dynamic economic scheduling
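The abstract above centers on training dispatch agents with proximal policy optimization (PPO). As a minimal, hedged sketch of the core idea (not the authors' implementation), the clipped surrogate objective at the heart of PPO can be written in a few lines of NumPy; the batch values and the clipping constant `eps` are illustrative assumptions.

```python
import numpy as np

def ppo_clipped_objective(log_probs_new, log_probs_old, advantages, eps=0.2):
    """Clipped surrogate objective of PPO (to be maximized).

    log_probs_new / log_probs_old: log pi(a|s) under the current and behavior
    policies for a batch of (state, action) pairs; advantages: estimated A(s, a).
    eps: clipping range; 0.2 is a common default, assumed here.
    """
    ratio = np.exp(log_probs_new - log_probs_old)            # importance ratio
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))           # pessimistic bound

# Toy usage: three transitions from a hypothetical dispatch environment.
obj = ppo_clipped_objective(
    log_probs_new=np.array([-1.0, -0.7, -2.1]),
    log_probs_old=np.array([-1.1, -0.9, -2.0]),
    advantages=np.array([0.5, -0.2, 1.3]),
)
print(f"clipped surrogate objective: {obj:.4f}")
```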
2. Prediction model for corrosion rate of low-alloy steels under atmospheric conditions using machine learning algorithms (Cited by 2)
Authors: Jingou Kuang, Zhilin Long. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 337-350.
This work constructed a machine learning (ML) model to predict the atmospheric corrosion rate of low-alloy steels (LAS). The material properties of LAS, environmental factors, and exposure time were used as the input, while the corrosion rate was the output. Six different ML algorithms were used to construct the proposed model. Through optimization and filtering, the eXtreme gradient boosting (XGBoost) model exhibited good corrosion rate prediction accuracy. The features of material properties were then transformed into atomic and physical features using the proposed property transformation approach, and the dominant descriptors that affected the corrosion rate were filtered using recursive feature elimination (RFE) as well as XGBoost methods. The established ML models exhibited better prediction performance and generalization ability with the property transformation descriptors. In addition, the SHapley Additive exPlanations (SHAP) method was applied to analyze the relationship between the descriptors and the corrosion rate. The results showed that the property transformation model could effectively help with analyzing the corrosion behavior, thereby significantly improving the generalization ability of corrosion rate prediction models.
Keywords: machine learning; low-alloy steel; atmospheric corrosion prediction; corrosion rate; feature fusion
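As a rough illustration of the pipeline this abstract describes (RFE-based descriptor filtering feeding a gradient-boosting regressor), the snippet below combines scikit-learn's RFE with an XGBoost regressor; the feature matrix, descriptor count, and hyperparameters are placeholders, not the authors' settings.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Placeholder data: 200 samples, 20 candidate descriptors, corrosion rate target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = X[:, 0] * 0.8 - X[:, 3] * 0.5 + rng.normal(scale=0.1, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Recursive feature elimination keeps the most informative descriptors.
selector = RFE(XGBRegressor(n_estimators=200, max_depth=3), n_features_to_select=8)
selector.fit(X_train, y_train)

# Refit the regressor on the selected descriptors only.
model = XGBRegressor(n_estimators=200, max_depth=3)
model.fit(selector.transform(X_train), y_train)
print("held-out R^2:", model.score(selector.transform(X_test), y_test))
```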
3. Federated Learning Model for Auto Insurance Rate Setting Based on Tweedie Distribution (Cited by 1)
Authors: Tao Yin, Changgen Peng, Weijie Tan, Dequan Xu, Hanlin Tang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 1, pp. 827-843.
In the assessment of car insurance claims, the claim rate presents a highly skewed probability distribution, which is typically modeled using the Tweedie distribution. The traditional approach to obtaining a Tweedie regression model involves training on a centralized dataset; when the data is provided by multiple parties, training a privacy-preserving Tweedie regression model without exchanging raw data becomes a challenge. To address this issue, this study introduces a novel vertical federated learning-based Tweedie regression algorithm for multi-party auto insurance rate setting in data silos. The algorithm keeps sensitive data local and uses privacy-preserving techniques to achieve intersection operations between the two parties holding the data. After determining which entities are shared, the participants train the model locally using the shared entity data to obtain the intermediate parameters of the local generalized linear model. Homomorphic encryption algorithms are introduced to exchange and update the intermediate model parameters and collaboratively complete joint training of the car insurance rate-setting model. Performance tests on two publicly available datasets show that the proposed federated Tweedie regression algorithm can effectively generate Tweedie regression models that leverage the value of data from both parties without exchanging data. The assessment results of the scheme approach those of the Tweedie regression model learned from centralized data, and outperform the Tweedie regression model learned independently by a single party.
Keywords: rate setting; Tweedie distribution; generalized linear models; federated learning; homomorphic encryption
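For readers unfamiliar with Tweedie regression, scikit-learn exposes it directly as a generalized linear model. The sketch below is a centralized, non-federated baseline on assumed toy claim data; the federated, homomorphically encrypted variant the paper describes is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import TweedieRegressor

# Toy claim data: two rating features, non-negative claim amounts with many zeros.
rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 2))
y = np.where(rng.uniform(size=500) < 0.7, 0.0,           # ~70% of policies claim nothing
             rng.gamma(shape=2.0, scale=100.0, size=500))

# power in (1, 2) gives a compound Poisson-gamma Tweedie model, suited to such data.
glm = TweedieRegressor(power=1.5, alpha=0.1, link="log", max_iter=1000)
glm.fit(X, y)
print("coefficients:", glm.coef_, "intercept:", glm.intercept_)
```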
4. A performance-based hybrid deep learning model for predicting TBM advance rate using Attention-ResNet-LSTM
Authors: Sihao Yu, Zixin Zhang, Shuaifeng Wang, Xin Huang, Qinghua Lei. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, No. 1, pp. 65-80.
The technology of the tunnel boring machine (TBM) has been widely applied for underground construction worldwide; however, how to keep the TBM tunneling process safe and efficient remains a major concern. Advance rate is a key parameter of TBM operation and reflects the TBM-ground interaction, for which a reliable prediction helps optimize TBM performance. Here, we develop a hybrid neural network model, called Attention-ResNet-LSTM, for accurate prediction of the TBM advance rate. A database including geological properties and TBM operational parameters from the Yangtze River Natural Gas Pipeline Project is used to train and test this deep learning model. The evolutionary polynomial regression method is adopted to aid the selection of input parameters. The results of numerical experiments show that our Attention-ResNet-LSTM model outperforms other commonly used intelligent models with a lower root mean square error and a lower mean absolute percentage error. Further, parametric analyses are conducted to explore the effects of the sequence length of historical data and the model architecture on the prediction accuracy. A correlation analysis between the input and output parameters is also implemented to provide guidance for adjusting relevant TBM operational parameters. The performance of our hybrid intelligent model is demonstrated in a case study of TBM tunneling through complex ground with variable strata. Finally, data collected from the Baimang River Tunnel Project in Shenzhen, China are used to further test the generalization of our model. The results indicate that, compared to the conventional ResNet-LSTM model, our model has a better predictive capability for scenarios with unknown datasets due to its self-adaptive characteristic.
Keywords: tunnel boring machine (TBM); advance rate; deep learning; Attention-ResNet-LSTM; evolutionary polynomial regression
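To make the "residual convolution + LSTM + attention" combination concrete, here is a hedged PyTorch sketch of how such a hybrid could be assembled for sequence-to-scalar advance-rate prediction. The layer sizes, block layout, and attention pooling are illustrative guesses, not the published architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """1-D convolutional block with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class AttentionResNetLSTM(nn.Module):
    """Residual conv features -> LSTM over time -> attention pooling -> advance rate."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.proj = nn.Conv1d(n_features, hidden, kernel_size=1)
        self.res = ResidualBlock(hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                   # x: (batch, time, features)
        h = self.res(self.proj(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(h)                                  # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)               # attention weights over time
        context = (w * h).sum(dim=1)                         # weighted temporal pooling
        return self.head(context).squeeze(-1)                # predicted advance rate

# Smoke test on random data: 8 sequences, 30 time steps, 10 operational parameters.
model = AttentionResNetLSTM(n_features=10)
print(model(torch.randn(8, 30, 10)).shape)                   # torch.Size([8])
```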
5. Improved IChOA-Based Reinforcement Learning for Secrecy Rate Optimization in Smart Grid Communications
Authors: Mehrdad Shoeibi, Mohammad Mehdi Sharifi Nevisi, Sarvenaz Sadat Khatami, Diego Martín, Sepehr Soltani, Sina Aghakhani. Computers, Materials & Continua (SCIE, EI), 2024, No. 11, pp. 2819-2843.
In the evolving landscape of the smart grid (SG), the integration of non-orthogonal multiple access (NOMA) technology has emerged as a pivotal strategy for enhancing spectral efficiency and energy management. However, the open nature of wireless channels in the SG raises significant concerns regarding the confidentiality of critical control messages, especially when broadcast from a neighborhood gateway (NG) to smart meters (SMs). This paper introduces a novel approach based on reinforcement learning (RL) to fortify secrecy performance. Motivated by the need for efficient and effective training of the fully connected layers in the RL network, we employ an improved chimp optimization algorithm (IChOA) to update the parameters of the RL agent. By integrating the IChOA into the training process, the RL agent is expected to learn more robust policies faster and with better convergence properties compared to standard optimization algorithms. This can lead to improved performance in complex SG environments, where the agent must make decisions that enhance the security and efficiency of the network. We compared the performance of our proposed method (IChOA-RL) with several state-of-the-art machine learning (ML) algorithms, including the recurrent neural network (RNN), long short-term memory (LSTM), K-nearest neighbors (KNN), support vector machine (SVM), improved crow search algorithm (I-CSA), and grey wolf optimizer (GWO). Extensive simulations demonstrate the efficacy of our approach compared to related works, showcasing significant improvements in secrecy capacity rates under various network conditions. The proposed IChOA-RL exhibits superior performance in various aspects, including the scalability of the NOMA communication system, accuracy, coefficient of determination (R2), root mean square error (RMSE), and convergence trend. For our dataset, the IChOA-RL architecture achieved a coefficient of determination of 95.77% and an accuracy of 97.41% on the validation dataset, accompanied by the lowest RMSE (0.95), indicating very precise predictions with minimal error.
Keywords: smart grid communication; secrecy rate optimization; reinforcement learning; improved chimp optimization algorithm
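The secrecy rate optimized in this line of work is commonly defined as the positive part of the difference between the legitimate channel capacity and the eavesdropper's channel capacity. The short helper below computes that standard quantity for given SNRs; it is a generic illustration, not the paper's system model, and the SNR values are made up.

```python
import numpy as np

def secrecy_rate(snr_legitimate, snr_eavesdropper):
    """Secrecy rate in bit/s/Hz: [log2(1 + SNR_leg) - log2(1 + SNR_eve)]^+ ."""
    rate = np.log2(1.0 + snr_legitimate) - np.log2(1.0 + snr_eavesdropper)
    return np.maximum(rate, 0.0)

# Example: the gateway-to-meter link enjoys 15 dB SNR, the eavesdropper only 5 dB.
leg = 10 ** (15 / 10)   # convert dB to linear SNR
eve = 10 ** (5 / 10)
print(f"secrecy rate: {secrecy_rate(leg, eve):.3f} bit/s/Hz")
```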
6. Prediction of corrosion rate for friction stir processed WE43 alloy by combining PSO-based virtual sample generation and machine learning
Authors: Annayath Maqbool, Abdul Khalad, Noor Zaman Khan. Journal of Magnesium and Alloys (SCIE, EI, CAS, CSCD), 2024, No. 4, pp. 1518-1528.
The corrosion rate is a crucial factor that impacts the longevity of materials in different applications. After undergoing friction stir processing (FSP), the refined grain structure leads to a notable decrease in corrosion rate. However, a better understanding of the correlation between the FSP process parameters and the corrosion rate is still lacking. The current study used machine learning to establish the relationship between the corrosion rate and the FSP process parameters (rotational speed, traverse speed, and shoulder diameter) for WE43 alloy. The Taguchi L27 design of experiments was used for the experimental analysis. In addition, synthetic data was generated using particle swarm optimization for virtual sample generation (VSG). The application of VSG has led to an increase in the prediction accuracy of the machine learning models. A sensitivity analysis was performed using Shapley Additive Explanations to determine the key factors affecting the corrosion rate; the shoulder diameter had a significant impact in comparison to the traverse speed. A graphical user interface (GUI) has been created to predict the corrosion rate using the identified factors. This study focuses on the WE43 alloy, but its findings can also be used to predict the corrosion rate of other magnesium alloys.
Keywords: corrosion rate; friction stir processing; virtual sample generation; particle swarm optimization; machine learning; graphical user interface
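A generic particle swarm optimization kernel like the one below typically sits underneath a PSO-driven virtual-sample-generation step. This sketch only shows the swarm update itself on a made-up corrosion-rate surrogate; the objective, bounds, and swarm settings are assumptions rather than the study's setup.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over box-constrained variables."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))       # positions
    v = np.zeros_like(x)                                        # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy use: find process parameters minimizing a made-up corrosion-rate surrogate.
surrogate = lambda p: (p[0] - 1.2) ** 2 + (p[1] - 0.4) ** 2 + 0.1 * p[2]
best, best_val = pso_minimize(surrogate, (np.zeros(3), np.ones(3) * 2.0))
print(best, best_val)
```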
7. Machine learning-based comparison of factors influencing estimated glomerular filtration rate in Chinese women with or without nonalcoholic fatty liver
Authors: I-Chien Chen, Lin-Ju Chou, Shih-Chen Huang, Ta-Wei Chu, Shang-Sen Lee. World Journal of Clinical Cases (SCIE), 2024, No. 15, pp. 2506-2521.
BACKGROUND: The prevalence of non-alcoholic fatty liver disease (NAFLD) has increased recently. Subjects with NAFLD are known to have a higher chance of renal function impairment. Many past studies used traditional multiple linear regression (MLR) to identify risk factors for decreased estimated glomerular filtration rate (eGFR). However, medical research is increasingly relying on emerging machine learning (Mach-L) methods. The present study enrolled healthy women to identify factors affecting eGFR in subjects with and without NAFLD (NAFLD+, NAFLD-) and to rank their importance. AIM: To use three different Mach-L methods to identify key impact factors for eGFR in healthy women with and without NAFLD. METHODS: A total of 65535 healthy female study participants were enrolled from the Taiwan MJ cohort, covering 32 independent variables including demographic, biochemistry and lifestyle parameters, while eGFR was used as the dependent variable. Aside from MLR, three Mach-L methods were applied, including stochastic gradient boosting, eXtreme gradient boosting and elastic net. Errors of estimation were used to define method accuracy, where a smaller degree of error indicated better model performance. RESULTS: Income, albumin, eGFR, high-density lipoprotein cholesterol, phosphorus, forced expiratory volume in one second (FEV1), and sleep time were all lower in the NAFLD+ group, while other factors were all significantly higher except for smoking area. Mach-L had lower estimation errors, thus outperforming MLR. In Model 1, age, uric acid (UA), FEV1, plasma calcium level (Ca), plasma albumin level (Alb) and T-bilirubin were the most important factors in the NAFLD+ group, as opposed to age, UA, FEV1, Alb, lactic dehydrogenase (LDH) and Ca for the NAFLD- group. Given that the importance percentage of age was much higher than that of the second most important factor, we built Model 2 by removing age. CONCLUSION: The eGFR was lower in the NAFLD+ group compared to the NAFLD- group, with age being the most important impact factor in both groups of healthy Chinese women, followed by LDH, UA, FEV1 and Alb. However, for the NAFLD- group, TSH and SBP were the 5th and 6th most important factors, as opposed to Ca and BF in the NAFLD+ group.
Keywords: non-alcoholic fatty liver; estimated glomerular filtration rate; machine learning; Chinese women
8. Deep Learning Based Signal Detection for Quadrature Spatial Modulation System
Authors: Shu Dingyun, Peng Yuyang, Yue Ming, Fawaz AL-Hazemi, Mohammad Meraj Mirza. China Communications (SCIE, CSCD), 2024, No. 10, pp. 78-85.
With the development of communication systems, modulation methods are becoming more and more diverse. Among them, quadrature spatial modulation (QSM) is considered a method with less capacity and high efficiency. In QSM, the traditional signal detection methods are sometimes unable to meet the actual requirement of low system complexity. Therefore, this paper proposes a signal detection scheme for QSM systems using deep learning to solve the complexity problem. Results from the simulations show that the bit error rate performance of the proposed deep learning-based detector is better than that of the zero-forcing (ZF) and minimum mean square error (MMSE) detectors, and similar to the maximum likelihood (ML) detector. Moreover, the proposed method requires less processing time than ZF, MMSE, and ML.
Keywords: bit error rate; complexity; deep learning; quadrature spatial modulation
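The ZF and MMSE baselines mentioned above are standard linear MIMO detectors. The sketch below shows their textbook form for a generic received vector y = Hx + n; the channel dimensions, symbols, and noise level are illustrative and not tied to the QSM mapping used in the paper.

```python
import numpy as np

def zf_detect(H, y):
    """Zero-forcing estimate: channel pseudo-inverse applied to the received vector."""
    return np.linalg.pinv(H) @ y

def mmse_detect(H, y, noise_var):
    """MMSE estimate: (H^H H + sigma^2 I)^-1 H^H y."""
    n_tx = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_tx), H.conj().T @ y)

# Toy 4x4 complex channel carrying BPSK-like symbols on each branch.
rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
x = np.sign(rng.normal(size=4)) + 1j * np.sign(rng.normal(size=4))
noise_var = 0.01
y = H @ x + np.sqrt(noise_var / 2) * (rng.normal(size=4) + 1j * rng.normal(size=4))

print("ZF  :", np.round(zf_detect(H, y), 2))
print("MMSE:", np.round(mmse_detect(H, y, noise_var), 2))
```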
9. A Secure Framework for WSN-IoT Using Deep Learning for Enhanced Intrusion Detection
Authors: Chandraumakantham Om Kumar, Sudhakaran Gajendran, Suguna Marappan, Mohammed Zakariah, Abdulaziz S. Almazyad. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 471-501.
The security of the wireless sensor network-Internet of Things (WSN-IoT) network is more challenging due to its randomness and self-organized nature. Intrusion detection is one of the key methodologies utilized to ensure the security of the network. Conventional intrusion detection mechanisms have issues such as higher misclassification rates, increased model complexity, insignificant feature extraction, increased training time, increased run-time complexity, computation overhead, failure to identify new attacks, and increased energy consumption, all of which limit the performance of the intrusion detection model. In this research, a security framework for WSN-IoT is introduced through a deep learning technique using the Modified Fuzzy-Adaptive DenseNet (MF_AdaDenseNet) and is benchmarked with datasets such as NSL-KDD, UNSW-NB15, CIDDS-001, Edge-IIoT, and Bot-IoT. Optimal feature selection using Capturing Dingo Optimization (CDO) is devised to acquire relevant features by removing redundant ones. The proposed MF_AdaDenseNet intrusion detection model offers significant benefits by utilizing optimal feature selection with the CDO algorithm. This results in enhanced detection capacity with minimal computational complexity, as well as a reduction in the false alarm rate (FAR) due to the consideration of classification error in the fitness estimation. As a result, the combined CDO-based feature selection and MF_AdaDenseNet intrusion detection mechanism outperform other state-of-the-art techniques, achieving maximal detection capacity, precision, recall, and F-measure of 99.46%, 99.54%, 99.91%, and 99.68%, respectively, along with a minimal FAR of 0.9% and a mean absolute error (MAE) of 0.11.
Keywords: deep learning; intrusion detection; fuzzy rules; feature selection; false alarm rate; accuracy; wireless sensor networks
10. Deep Learning-Based ECG Classification for Arterial Fibrillation Detection
Authors: Muhammad Sohail Irshad, Tehreem Masood, Arfan Jaffar, Muhammad Rashid, Sheeraz Akram, Abeer Aljohani. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 4805-4824.
The application of deep learning techniques in the medical field, specifically for atrial fibrillation (AFib) detection through electrocardiogram (ECG) signals, has witnessed significant interest. Accurate and timely diagnosis increases the patient's chances of recovery. However, issues like overfitting and inconsistent accuracy across datasets remain challenges. In a quest to address these challenges, this study presents two prominent deep learning architectures, ResNet-50 and DenseNet-121, to evaluate their effectiveness in AFib detection. The aim was to create a robust detection mechanism that consistently performs well. Metrics such as loss, accuracy, precision, sensitivity, and area under the curve (AUC) were utilized for evaluation. The findings revealed that ResNet-50 surpassed DenseNet-121 in all evaluated categories. It demonstrated lower loss (0.0315 and 0.0305), superior accuracy of 98.77% and 98.88%, precision of 98.78% and 98.89%, and sensitivity of 98.76% and 98.86% for training and validation, respectively, hinting at its advanced capability for AFib detection. These insights offer a substantial contribution to the existing literature on deep learning applications for AFib detection from ECG signals. The comparative performance data assist future researchers in selecting suitable deep learning architectures for AFib detection. Moreover, the outcomes of this study are anticipated to stimulate the development of more advanced and efficient ECG-based AFib detection methodologies for more accurate and early detection of AFib, thereby fostering improved patient care and outcomes.
Keywords: convolutional neural network; atrial fibrillation; area under curve; ECG; false positive rate; deep learning; classification
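As a hedged sketch of the standard fine-tuning pattern behind a comparison like the one above, the snippet swaps two-class heads onto torchvision's ResNet-50 and DenseNet-121. It assumes the ECG signals are presented as 2-D images (for example, spectrograms); the input size and the choice of random versus pretrained weights are illustrative, not the paper's training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Two-class (AFib / normal) heads on the two backbones compared in the study.
resnet = models.resnet50(weights=None)                 # pretrained weights could be used instead
resnet.fc = nn.Linear(resnet.fc.in_features, 2)

densenet = models.densenet121(weights=None)
densenet.classifier = nn.Linear(densenet.classifier.in_features, 2)

# Smoke test with a batch of 3-channel ECG images (e.g., spectrograms) of size 224x224.
x = torch.randn(4, 3, 224, 224)
print(resnet(x).shape, densenet(x).shape)              # torch.Size([4, 2]) each
```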
11. Energy-Efficient Traffic Offloading for RSMA-Based Hybrid Satellite Terrestrial Networks with Deep Reinforcement Learning
Authors: Qingmiao Zhang, Lidong Zhu, Yanyan Chen, Shan Jiang. China Communications (SCIE, CSCD), 2024, No. 2, pp. 49-58.
As the demands for massive connections and vast coverage rapidly grow in next-generation wireless communication networks, rate splitting multiple access (RSMA) is considered a promising new access scheme since it can provide higher efficiency with limited spectrum resources. In this paper, combining spectrum splitting with rate splitting, we propose to allocate resources with traffic offloading in hybrid satellite terrestrial networks. A novel deep reinforcement learning method is adopted to solve this challenging non-convex problem. However, the never-ending learning process could prohibit its practical implementation. Therefore, we introduce a switch mechanism to avoid unnecessary learning. Additionally, the QoS constraint in the scheme can rule out unsuccessful transmissions. The simulation results validate the energy efficiency performance and the convergence speed of the proposed algorithm.
Keywords: deep reinforcement learning; energy efficiency; hybrid satellite terrestrial networks; rate splitting multiple access; traffic offloading
12. Machine learning applications in healthcare clinical practice and research
Authors: Nikolaos-Achilleas Arkoudis, Stavros P Papadakos. World Journal of Clinical Cases (SCIE), 2025, No. 1, pp. 16-21.
Machine learning (ML) is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis, thus creating machines that can complete tasks otherwise requiring human intelligence. Among its various applications, it has proven groundbreaking in healthcare as well, both in clinical practice and research. In this editorial, we succinctly introduce ML applications and present a study featured in the latest issue of the World Journal of Clinical Cases. The authors of this study conducted an analysis using both multiple linear regression (MLR) and ML methods to investigate the significant factors that may impact the estimated glomerular filtration rate in healthy women with and without non-alcoholic fatty liver disease (NAFLD). Their results implicated age as the most important determining factor in both groups, followed by lactic dehydrogenase, uric acid, forced expiratory volume in one second, and albumin. In addition, for the NAFLD- group, the 5th and 6th most important impact factors were thyroid-stimulating hormone and systolic blood pressure, as compared to plasma calcium and body fat for the NAFLD+ group. However, the study's distinctive contribution lies in its adoption of ML methodologies, showcasing their superiority over traditional statistical approaches (herein MLR), thereby highlighting the potential of ML to represent an invaluable advanced adjunct tool in clinical practice and research.
Keywords: machine learning; artificial intelligence; clinical practice; research; glomerular filtration rate; non-alcoholic fatty liver disease; medicine
13. Design of a Multi-Stage Ensemble Model for Thyroid Prediction Using Learning Approaches
Authors: M. L. Maruthi Prasad, R. Santhosh. Intelligent Automation & Soft Computing, 2024, No. 1, pp. 1-13.
This research concentrates on modeling an efficient thyroid prediction approach, addressing a significant health problem faced by the women community. The major research problem is the lack of an automated model to attain earlier prediction, and some existing models fail to give better prediction accuracy. Here, a novel clinical decision support system is framed to make the proper decision during times of complexity. Multiple stages are followed in the proposed framework, each playing a substantial role in thyroid prediction. These steps include (i) data acquisition, (ii) outlier prediction, and (iii) a multi-stage weight-based ensemble learning process (MS-WEL). The weighted analysis of the base classifier and other classifier models helps bridge the gap encountered in any single classifier model. Various classifiers are merged to handle the issues identified in others and to enhance the prediction rate. The proposed model provides superior outcomes and a good-quality prediction rate. The simulation is done in the MATLAB 2020a environment and establishes a better trade-off than various existing approaches, achieving a prediction accuracy of 97.28% compared to other models.
Keywords: thyroid; machine learning; pre-processing; classification; prediction rate
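To illustrate the general idea of a weight-based ensemble (without reproducing the authors' MS-WEL pipeline, which is MATLAB-based and multi-stage), the sketch below uses scikit-learn's weighted soft voting over three assumed base learners on stand-in tabular data; the weights are illustrative, not tuned values.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Stand-in tabular data playing the role of thyroid screening records.
X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Weighted soft voting: each base learner's predicted probabilities are
# combined with a per-model weight before the final decision.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("nb", GaussianNB())],
    voting="soft",
    weights=[1.0, 2.0, 0.5],
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```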
14. Machine Learning Prediction of Fetal Health Status from Cardiotocography Examination in Developing Healthcare Contexts
Authors: Olayemi Olasehinde. Journal of Computer Science Research, 2024, No. 1, pp. 43-53.
Reducing neonatal mortality is a critical global health objective, especially in resource-constrained developing countries. This study employs machine learning (ML) techniques to predict fetal health status based on cardiotocography (CTG) examination findings, utilizing a dataset from the Kaggle repository due to the limited comprehensive healthcare data available in developing nations. Features such as baseline fetal heart rate, uterine contractions, and waveform characteristics were extracted using the RFE wrapper feature engineering technique and scaled with a standard scaler. Six ML models were trained via cross-validation and evaluated using performance metrics: Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), Gradient Boosting (GB), Categorical Boosting (CB), and Extended Gradient Boosting (XGB). Eight of the 21 features selected by GB returned its maximum Matthews Correlation Coefficient (MCC) score of 0.6255, while CB, with 20 of the 21 features, returned the highest overall MCC score of 0.6321. The study demonstrated the ability of ML models to predict fetal health conditions from CTG exam results, facilitating early identification of high-risk pregnancies and enabling prompt treatment to prevent severe neonatal outcomes.
Keywords: neonatal mortality rate; cardiotocography; machine learning; foetus health; prediction; feature engineering
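The MCC metric reported above is available directly in scikit-learn; the snippet below computes it on made-up fetal-health labels, simply to show the call, since MCC stays informative under the class imbalance typical of CTG data.

```python
from sklearn.metrics import matthews_corrcoef

# Toy ground-truth vs. predicted labels (0 = normal, 1 = suspect, 2 = pathological).
y_true = [0, 0, 1, 2, 1, 0, 2, 1, 0, 2]
y_pred = [0, 1, 1, 2, 1, 0, 2, 0, 0, 2]

print("MCC:", round(matthews_corrcoef(y_true, y_pred), 4))
```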
15. Effective Return Rate Prediction of Blockchain Financial Products Using Machine Learning
Authors: K. Kalyani, Velmurugan Subbiah Parvathy, Hikmat A. M. Abdeljaber, T. Satyanarayana Murthy, Srijana Acharya, Gyanendra Prasad Joshi, Sung Won Kim. Computers, Materials & Continua (SCIE, EI), 2023, No. 1, pp. 2303-2316.
In recent times, financial globalization has drastically increased in different ways to improve the quality of services with advanced resources. The successful application of bitcoin blockchain (BC) techniques has led stockholders to pay close attention to the return and risk of financial products, focusing on the prediction of the return rate and risk rate. Therefore, an automatic return rate bitcoin prediction model becomes essential for BC financial products. Newly designed machine learning (ML) and deep learning (DL) approaches pave the way for return rate predictive methods. This study introduces a novel Jellyfish Search Optimization based Extreme Learning Machine with Autoencoder (JSO-ELMAE) for return rate prediction of BC financial products. The presented JSO-ELMAE model designs a new ELMAE model for predicting the return rate of financial products. Besides, the JSO algorithm is exploited to tune the parameters related to the ELMAE model, which in turn boosts the classification results. The application of the JSO technique assists in the optimal parameter adjustment of the ELMAE model to predict bitcoin return rates. The experimental validation of the JSO-ELMAE model was executed and the outcomes were inspected in many aspects. The experimental values demonstrated the enhanced performance of the JSO-ELMAE model over recent state-of-the-art approaches with a minimal RMSE of 0.1562.
Keywords: financial products; blockchain; return rate prediction model; machine learning; parameter optimization
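For context, an extreme learning machine (ELM) trains a single random hidden layer and solves the output weights in closed form. The sketch below is a plain ELM regressor on a toy series, not the authors' ELM-autoencoder or its jellyfish-search tuning; the hidden width and data are assumptions.

```python
import numpy as np

class ELMRegressor:
    """Basic extreme learning machine: random hidden layer, closed-form output weights."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)            # random nonlinear projection

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y               # least-squares output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage: learn a noisy sine as a stand-in for a return-rate series.
X = np.linspace(0, 6, 200).reshape(-1, 1)
y = np.sin(X[:, 0]) + np.random.default_rng(1).normal(scale=0.05, size=200)
model = ELMRegressor(n_hidden=40).fit(X, y)
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```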
16. Intrusion Detection Using Federated Learning for Computing
Authors: R. S. Aashmi, T. Jaya. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 5, pp. 1295-1308.
The integration of clusters, grids, clouds, edges, and other computing platforms results in the contemporary technology of jungle computing. This novel technique has the aptitude to tackle high-performance computation systems and manages the usage of all computing platforms at a time. Federated learning is a collaborative machine learning approach without centralized training data. The proposed system effectively detects intrusion attacks without human intervention and subsequently detects anomalous deviations in device communication behavior, potentially caused by malicious adversaries, and it can cope with new and unknown attacks. The main objective is to learn the overall behavior of an intruder while attacks are performed on the assumed target service. Moreover, the updated system model is sent to the centralized server in jungle computing to detect its pattern. Federated learning greatly helps the machine study the type of attack from each device, and this technique paves the way toward complete control over all malicious behaviors. In our proposed work, we have implemented an intrusion detection system that has high accuracy and a low false positive rate (FPR), and is scalable and versatile for the jungle computing environment. The execution time taken to complete a round is less than two seconds, with an accuracy rate of 96%.
Keywords: jungle computing; high performance computation; federated learning; false positive rate; intrusion detection system (IDS)
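The aggregation step in a federated intrusion-detection setup like this is usually some form of federated averaging. The helper below shows the generic FedAvg weighting of client parameters by local sample count; the parameter vectors and device sizes are toy values, and this is not the authors' exact protocol.

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """FedAvg aggregation: weight each client's parameters by its local sample count."""
    total = sum(client_sizes)
    stacked = np.stack(client_params)                        # (n_clients, n_params)
    weights = np.array(client_sizes, dtype=float) / total
    return weights @ stacked                                  # weighted mean per parameter

# Toy round: three devices report locally trained parameter vectors of length 4.
params = [np.array([0.1, 0.2, 0.0, 0.5]),
          np.array([0.3, 0.1, 0.1, 0.4]),
          np.array([0.2, 0.2, 0.2, 0.6])]
sizes = [1000, 200, 500]                                      # local training set sizes
print("aggregated model:", federated_average(params, sizes))
```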
17. Adaptive Retransmission Design for Wireless Federated Edge Learning
Authors: XU Xinyi, LIU Shengli, YU Guanding. ZTE Communications, 2023, No. 1, pp. 3-14.
As a popular distributed machine learning framework, wireless federated edge learning (FEEL) can keep original data local while uploading model training updates, protecting privacy and preventing data silos. However, since wireless channels are usually unreliable, there is no guarantee that the model updates uploaded by local devices are correct, which greatly degrades the performance of wireless FEEL. Conventional retransmission schemes designed for wireless systems generally aim to maximize system throughput or minimize the packet error rate, which is not suitable for the FEEL system. A novel retransmission scheme is proposed for the FEEL system to make a tradeoff between model training accuracy and retransmission latency. In the proposed scheme, a retransmission device selection criterion is first designed based on the channel condition, the amount of local data, and the importance of the model update. In addition, we design the air interface signaling under this retransmission scheme to facilitate the implementation of the proposed scheme in practical scenarios. Finally, the effectiveness of the proposed retransmission scheme is validated through simulation experiments.
Keywords: federated edge learning; retransmission; unreliable communication; convergence rate; retransmission latency
18. Advancing COVID-19 Diagnosis with CNNs: An Empirical Study of Learning Rates and Optimization Strategies
Authors: Mainak Mitra, Soumit Roy. Intelligent Control and Automation, 2023, No. 4, pp. 45-78.
The rapid spread of the novel coronavirus (COVID-19) has emphasized the necessity for advanced diagnostic tools to enhance the detection and management of the virus. This study investigates the effectiveness of convolutional neural networks (CNNs) in the diagnosis of COVID-19 from chest X-ray and CT images, focusing on the impact of varying learning rates and optimization strategies. Despite the abundance of chest X-ray datasets from various institutions, the lack of a dedicated COVID-19 dataset for computational analysis presents a significant challenge. Our work introduces an empirical analysis across four distinct learning rate policies (cyclic, step-based, time-based, and epoch-based), each tested with four different optimizers: Adam, Adagrad, RMSprop, and stochastic gradient descent (SGD). The performance of these configurations was evaluated in terms of training and validation accuracy over 100 epochs. Our results demonstrate significant differences in model performance, with the cyclic learning rate policy combined with the SGD optimizer achieving the highest validation accuracy of 83.33%. This study contributes to the existing body of knowledge by outlining effective CNN configurations for COVID-19 image dataset analysis, offering insights into the optimization of machine learning models for the diagnosis of infectious diseases. Our findings underscore the potential of CNNs in supplementing traditional PCR tests, providing a computational approach to identify patterns in chest X-rays and CT scans indicative of COVID-19, thereby aiding in the swift and accurate diagnosis of the virus.
Keywords: learning rate; AI; optimizer; deep learning; CNN; multi-class classification
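The learning rate policies compared above have common textbook forms, sketched below; the base rates, decay constants, and step sizes are illustrative defaults, not the values used in the study.

```python
import numpy as np

def step_lr(epoch, base_lr=0.01, drop=0.5, every=10):
    """Step decay: multiply the rate by `drop` every `every` epochs."""
    return base_lr * (drop ** (epoch // every))

def time_based_lr(epoch, base_lr=0.01, decay=0.01):
    """Time-based decay: lr_t = lr_0 / (1 + decay * t)."""
    return base_lr / (1.0 + decay * epoch)

def cyclic_lr(iteration, base_lr=0.001, max_lr=0.01, step_size=20):
    """Triangular cyclic schedule oscillating between base_lr and max_lr."""
    cycle = np.floor(1 + iteration / (2 * step_size))
    x = np.abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

for e in (0, 10, 25, 50):
    print(f"epoch {e:3d}  step={step_lr(e):.5f}  "
          f"time={time_based_lr(e):.5f}  cyclic={cyclic_lr(e):.5f}")
```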
19. Accurate Machine Learning Predictions of Sci-Fi Film Performance
Authors: Amjed Al Fahoum, Tahani A. Ghobon. Journal of New Media, 2023, No. 1, pp. 1-22.
A groundbreaking method is introduced to leverage machine learning algorithms to revolutionize the prediction of success rates for science fiction films. In the captivating world of the film industry, extensive research and accurate forecasting are vital to anticipating a movie's triumph prior to its debut. Our study aims to harness the power of available data to estimate a film's early success rate. With the vast resources offered by the internet, we can access a plethora of movie-related information, including actors, directors, critic reviews, user reviews, ratings, writers, budgets, genres, Facebook likes, YouTube views for movie trailers, and Twitter followers. The first few weeks of a film's release are crucial in determining its fate, and online reviews and film evaluations profoundly impact its opening-week earnings. Hence, our research employs advanced supervised machine learning techniques to predict a film's triumph. The Internet Movie Database (IMDb) is a comprehensive data repository for nearly all movies. A robust predictive classification approach is developed by employing various machine learning algorithms, such as fine, medium, coarse, cosine, cubic, and weighted KNN. To determine the best model, the performance of each feature was evaluated based on composite metrics. Moreover, the significant influence of social media platforms, including Twitter, Instagram, and Facebook, on shaping individuals' opinions was recognized. A hybrid success rating prediction model is obtained by integrating the proposed prediction models with sentiment analysis from available platforms. The findings of this study demonstrate that the chosen algorithms offer more precise estimations, faster execution times, and higher accuracy rates when compared to previous research. By integrating the features of existing prediction models and social media sentiment analysis models, our proposed approach provides a remarkably accurate prediction of a movie's success. This breakthrough can help movie producers and marketers anticipate a film's triumph before its release, allowing them to tailor their promotional activities accordingly. Furthermore, the adopted research lays the foundation for developing even more accurate prediction models, considering the ever-increasing significance of social media platforms in shaping individuals' opinions. In conclusion, this study showcases the immense potential of machine learning algorithms in predicting the success rate of science fiction films, opening new avenues for the film industry.
Keywords: film success rate prediction; optimized feature selection; robust machine learning; nearest neighbors' algorithms
20. A Novel Machine Learning-Based Hand Gesture Recognition Using HCI on IoT Assisted Cloud Platform (Cited by 1)
Authors: Saurabh Adhikari, Tushar Kanti Gangopadhayay, Souvik Pal, D. Akila, Mamoona Humayun, Majed Alfayad, N. Z. Jhanjhi. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 8, pp. 2123-2140.
Machine learning is a technique for analyzing data that aids the construction of mathematical models. Because of the growth of the Internet of Things (IoT) and wearable sensor devices, gesture interfaces are becoming a more natural and expedient human-machine interaction method. This type of artificial intelligence, which requires minimal or no direct human intervention in decision-making, is predicated on the ability of intelligent systems to self-train and detect patterns. The rise of touch-free applications and the number of deaf people have increased the significance of hand gesture recognition. Potential applications of hand gesture recognition research span from online gaming to surgical robotics. The location of the hands, the alignment of the fingers, and the hand-to-body posture are the fundamental components of hierarchical emotions in gestures. In the field of gesture recognition, linguistic gestures may be difficult to distinguish from nonsensical motions. In this scenario, it may be difficult to overcome segmentation uncertainty caused by accidental hand motions or trembling. When a user performs the same dynamic gesture, the hand shapes and speeds of each user, as well as those often generated by the same user, vary. A machine-learning-based Gesture Recognition Framework (ML-GRF) for recognizing the beginning and end of a gesture sequence in a continuous stream of data is suggested to solve the problem of distinguishing meaningful dynamic gestures from scattered generation. We recommend a similarity-matching-based gesture classification approach to reduce the overall computing cost associated with identifying actions, and we show how an efficient feature extraction method can be used to reduce the information from thousands of single gestures to four-binary-digit gesture codes. The findings from the simulation support the accuracy, precision, gesture recognition, sensitivity, and efficiency rates. The ML-GRF had an accuracy rate of 98.97%, a precision rate of 97.65%, a gesture recognition rate of 98.04%, a sensitivity rate of 96.99%, and an efficiency rate of 95.12%.
Keywords: machine learning; gesture recognition framework; accuracy rate; precision rate; gesture recognition rate; sensitivity rate; efficiency rate