Journal Articles
Found 479,394 articles
1. Quantification of the concrete freeze–thaw environment across the Qinghai–Tibet Plateau based on machine learning algorithms
Authors: QIN Yanhui, MA Haoyuan, ZHANG Lele, YIN Jinshuai, ZHENG Xionghui, LI Shuo. Journal of Mountain Science (SCIE, CSCD), 2024, No. 1, pp. 322-334.
The reasonable quantification of the concrete freezing environment on the Qinghai–Tibet Plateau (QTP) is the primary issue in frost-resistant concrete design and one of the challenges that QTP engineering managers must take into account. In this paper, we propose a more realistic method to calculate the number of concrete freeze–thaw cycles (NFTCs) on the QTP. The calculated results show that NFTCs increase with the altitude of the meteorological station, with an average of 208.7. Four machine learning methods, i.e., the random forest (RF) model, generalized boosting method (GBM), generalized linear model (GLM), and generalized additive model (GAM), are used to fit the NFTCs. The root mean square error (RMSE) values of the RF, GBM, GLM, and GAM are 32.3, 4.3, 247.9, and 161.3, respectively; the corresponding R² values are 0.93, 0.99, 0.48, and 0.66. As the RMSE and R² values show, the GBM performs best among the four methods. The quantitative results from the GBM indicate that the lowest, medium, and highest NFTC values are distributed in the northern, central, and southern parts of the QTP, respectively. The annual NFTCs across the QTP are mainly concentrated at 160 and above, with an average of 200. Our results can provide scientific guidance and a theoretical basis for the frost-resistance design of concrete in various projects on the QTP.
Keywords: freeze–thaw cycles; quantification; machine learning algorithms; Qinghai–Tibet Plateau; concrete
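To make the model comparison above concrete, here is a minimal sketch of fitting and scoring tree-ensemble and linear regressors with scikit-learn on synthetic stand-in data; the paper's NFTC dataset, features, and model settings are not reproduced here, and the GAM is omitted because it would need an extra library such as pyGAM.

```python
# Hypothetical comparison of regressors for fitting NFTC-style targets.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Stand-in for station features (altitude, latitude, temperature stats, ...) and NFTC targets.
X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "GBM": GradientBoostingRegressor(random_state=0),
    "GLM": LinearRegression(),  # Gaussian GLM with identity link
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: RMSE={rmse:.1f}, R2={r2_score(y_te, pred):.2f}")
```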
2. Prediction model for corrosion rate of low-alloy steels under atmospheric conditions using machine learning algorithms (Cited by: 3)
Authors: Jingou Kuang, Zhilin Long. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 337-350.
This work constructed a machine learning (ML) model to predict the atmospheric corrosion rate of low-alloy steels (LAS). The material properties of LAS, environmental factors, and exposure time were used as the input, and the corrosion rate as the output. Six different ML algorithms were used to construct the proposed model. Through optimization and filtering, the eXtreme gradient boosting (XGBoost) model exhibited good corrosion rate prediction accuracy. The features of material properties were then transformed into atomic and physical features using the proposed property transformation approach, and the dominant descriptors that affected the corrosion rate were filtered using recursive feature elimination (RFE) as well as XGBoost methods. The established ML models exhibited better prediction performance and generalization ability with the property transformation descriptors. In addition, the SHapley Additive exPlanations (SHAP) method was applied to analyze the relationship between the descriptors and the corrosion rate. The results showed that the property transformation model could effectively help analyze the corrosion behavior, thereby significantly improving the generalization ability of corrosion rate prediction models.
Keywords: machine learning; low-alloy steel; atmospheric corrosion prediction; corrosion rate; feature fusion
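The RFE-plus-SHAP workflow described above can be sketched as follows, assuming the xgboost and shap packages are installed; the data, feature counts, and hyperparameters are invented stand-ins, not the paper's corrosion dataset.

```python
# Sketch: select dominant descriptors with RFE + XGBoost, then attribute with SHAP.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from xgboost import XGBRegressor
import shap

X, y = make_regression(n_samples=200, n_features=12, n_informative=5, random_state=0)

# Recursive feature elimination with XGBoost as the base estimator.
selector = RFE(XGBRegressor(n_estimators=100), n_features_to_select=5)
selector.fit(X, y)
X_sel = selector.transform(X)

# Refit on the retained descriptors, then explain predictions with SHAP.
model = XGBRegressor(n_estimators=100).fit(X_sel, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_sel)
print("Mean |SHAP| per selected descriptor:", np.abs(shap_values).mean(axis=0))
```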
3. Extended Deep Learning Algorithm for Improved Brain Tumor Diagnosis System
Authors: M. Adimoolam, K. Maithili, N.M. Balamurugan, R. Rajkumar, S. Leelavathy, Raju Kannadasan, Mohd Anul Haq, Ilyas Khan, ElSayed M. Tag El Din, Arfat Ahmad Khan. Intelligent Automation & Soft Computing, 2024, No. 1, pp. 33-55.
At present, the prediction of brain tumors is performed using machine learning (ML) and deep learning (DL) algorithms. Although various ML and DL algorithms are adapted to predict brain tumors to some extent, several measures still need enhancement, particularly accuracy, sensitivity, and the false positive and false negative rates, to improve the brain tumor prediction system symmetrically. Therefore, this work proposed an Extended Deep Learning Algorithm (EDLA) and measured performance parameters such as accuracy, sensitivity, and false positive and false negative rates. These iterated measures were analyzed by comparing the EDLA method with the Convolutional Neural Network (CNN) approach using the SPSS tool, with respective graphical illustrations. Over ten iterations, the mean performance measures for the proposed EDLA were accuracy (97.665%), sensitivity (97.939%), false positive rate (3.012%), and false negative rate (3.182%), whereas the CNN attained a mean accuracy of 94.287%, mean sensitivity of 95.612%, mean false positive rate of 5.328%, and mean false negative rate of 4.756%. These results show that the proposed EDLA method outperforms existing algorithms, including the CNN, and ensures symmetrically improved parameters; the EDLA thus introduces novelty in both its performance and its particular activation function. The proposed method can be utilized for precise and accurate brain tumor detection, and could be applied to other medical diagnoses after modification. If the quantity of dataset records is enormous, the method's computational resources would need to be scaled accordingly.
Keywords: brain tumor; extended deep learning algorithm; convolutional neural network; tumor detection; deep learning
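The four performance parameters compared above derive directly from the binary confusion matrix; a minimal sketch with toy labels (the label values are invented for illustration):

```python
# Accuracy, sensitivity, false positive rate, and false negative rate from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])  # toy labels: 1 = tumor, 0 = healthy
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)          # true positive rate
fpr = fp / (fp + tn)                  # false positive rate
fnr = fn / (fn + tp)                  # false negative rate
print(f"acc={accuracy:.3f} sens={sensitivity:.3f} FPR={fpr:.3f} FNR={fnr:.3f}")
```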
4. Optimizing Deep Learning for Computer-Aided Diagnosis of Lung Diseases: An Automated Method Combining Evolutionary Algorithm, Transfer Learning, and Model Compression
Authors: Hassen Louati, Ali Louati, Elham Kariri, Slim Bechikh. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 3, pp. 2519-2547.
Recent developments in computer vision have presented novel opportunities to tackle complex healthcare issues, particularly in lung disease diagnosis. One promising avenue involves chest X-rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, it incorporates transfer learning, where a CNN model pre-trained on a vast dataset of chest X-ray images is fine-tuned for the specific task of detecting COVID-19. This method can reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
Keywords: computer-aided diagnosis; deep learning; evolutionary algorithms; deep compression; transfer learning
5. A Deep-Learning and Transfer-Learning Hybrid Aerosol Retrieval Algorithm for FY4-AGRI: Development and Verification over Asia
Authors: Disong Fu, Hongrong Shi, Christian A. Gueymard, Dazhi Yang, Yu Zheng, Huizheng Che, Xuehua Fan, Xinlei Han, Lin Gao, Jianchun Bian, Minzheng Duan, Xiangao Xia. Engineering (SCIE, EI, CAS, CSCD), 2024, No. 7, pp. 164-174.
The Advanced Geosynchronous Radiation Imager (AGRI) is a mission-critical instrument for the Fengyun series of satellites. AGRI acquires full-disk images every 15 min and views East Asia every 5 min through 14 spectral bands, enabling the detection of highly variable aerosol optical depth (AOD). Quantitative retrieval of AOD has hitherto been challenging, especially over land. In this study, an AOD retrieval algorithm is proposed that combines deep learning and transfer learning. The algorithm uses core concepts from both the Dark Target (DT) and Deep Blue (DB) algorithms to select features for the machine learning (ML) algorithm, allowing for AOD retrieval at 550 nm over both dark and bright surfaces. The algorithm consists of two steps: (1) a baseline deep neural network (DNN) with skip connections is developed using 10 min Advanced Himawari Imager (AHI) AODs as the target variable, and (2) sunphotometer AODs from 89 ground-based stations are used to fine-tune the DNN parameters. Out-of-station validation shows that the retrieved AOD attains high accuracy, characterized by a coefficient of determination (R²) of 0.70, a mean bias error (MBE) of 0.03, and a percentage of data within the expected error (EE) of 70.7%. A sensitivity study reveals that the top-of-atmosphere reflectance at 650 and 470 nm and the surface reflectance at 650 nm are the two largest sources of uncertainty impacting the retrieval. In a case study of monitoring an extreme aerosol event, the AGRI AOD is found to capture the detailed temporal evolution of the event. This work demonstrates the superiority of the transfer-learning technique in satellite AOD retrievals and the applicability of the retrieved AGRI AOD to monitoring extreme pollution events.
Keywords: aerosol optical depth; retrieval algorithm; deep learning; transfer learning; Advanced Geosynchronous Radiation Imager
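A schematic of the two-step pretrain-then-fine-tune recipe described above, written in PyTorch with random tensors standing in for AHI and sunphotometer AODs; the network width, skip-connection layout, and learning rates are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SkipMLP(nn.Module):
    """Small dense network with a skip connection, a stand-in for the paper's DNN."""
    def __init__(self, n_in=14):
        super().__init__()
        self.h1 = nn.Linear(n_in, 64)
        self.h2 = nn.Linear(64, 64)
        self.skip = nn.Linear(n_in, 64)   # skip path from input to the second block
        self.out = nn.Linear(64, 1)
    def forward(self, x):
        z = torch.relu(self.h1(x))
        z = torch.relu(self.h2(z) + self.skip(x))
        return self.out(z)

def train(model, x, y, lr, steps):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

torch.manual_seed(0)
model = SkipMLP()
# Step 1: pretrain on plentiful proxy AODs (random stand-ins for AHI retrievals).
x_big, y_big = torch.randn(5000, 14), torch.randn(5000, 1)
train(model, x_big, y_big, lr=1e-3, steps=200)
# Step 2: fine-tune on scarce ground-truth AODs with a smaller learning rate.
x_small, y_small = torch.randn(200, 14), torch.randn(200, 1)
print("fine-tune loss:", train(model, x_small, y_small, lr=1e-4, steps=100))
```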
6. An Opposition-Based Learning-Based Search Mechanism for Flying Foxes Optimization Algorithm
Authors: Chen Zhang, Liming Liu, Yufei Yang, Yu Sun, Jiaxu Ning, Yu Zhang, Changsheng Zhang, Ying Guo. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 5201-5223.
The flying foxes optimization (FFO) algorithm, a newly introduced metaheuristic, is inspired by the survival tactics of flying foxes in heat-wave environments. FFO preferentially selects the best-performing individuals, a tendency that causes newly generated solutions to remain closely tied to the current candidate optimum in the search area. To address this issue, this paper introduces an opposition-based learning-based search mechanism for the FFO algorithm (IFFO). First, niching techniques are introduced to improve the survival-list method, which considers not only the fitness of individuals but also the population's crowding degree, enhancing global search capability. Second, an opposition-based learning initialization strategy is used to perturb the initial population and elevate its quality. Finally, to verify the superiority of the improved search mechanism, IFFO, FFO, and cutting-edge metaheuristic algorithms are compared and analyzed on a set of test functions. The results show that, compared with the other algorithms, IFFO is characterized by rapid convergence, precise results, and robust stability.
Keywords: flying foxes optimization (FFO) algorithm; opposition-based learning; niching techniques; swarm intelligence; metaheuristics; evolutionary algorithms
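The opposition-based learning initialization mentioned above is compact enough to sketch directly: for each random individual x in [lb, ub], the opposite lb + ub - x is also evaluated, and the better half of the combined set seeds the population. The sphere objective below is a stand-in for the paper's benchmark functions.

```python
# Opposition-based learning (OBL) initialization on a toy objective.
import numpy as np

rng = np.random.default_rng(0)
dim, pop_size = 10, 20
lb, ub = -5.0, 5.0

def objective(x):
    return np.sum(x ** 2, axis=1)  # sphere function: minimum at the origin

X = rng.uniform(lb, ub, size=(pop_size, dim))   # random initial population
X_opp = lb + ub - X                              # opposite population
both = np.vstack([X, X_opp])
best_idx = np.argsort(objective(both))[:pop_size]
population = both[best_idx]                      # higher-quality initial population
print("best initial fitness:", objective(population).min())
```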
7. An Online Fake Review Detection Approach Using Famous Machine Learning Algorithms
Authors: Asma Hassan Alshehri. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2767-2786.
Online review platforms are becoming increasingly popular, encouraging dishonest merchants and service providers to deceive customers by creating fake reviews for their goods or services. Using Sybil accounts, bot farms, and purchased real accounts, immoral actors demonize rivals and advertise their own goods. For years, most academic and industry efforts have been aimed at detecting fake or fraudulent product and service reviews; the primary hurdle is the lack of a reliable means to distinguish fraudulent reviews from real ones. This paper adopts a semi-supervised machine learning method to detect fake reviews on any website. Online reviews are classified using a semi-supervised approach (PU-learning), since labeled data are scarce and reviews are dynamic. Classification is then performed using the machine learning techniques Support Vector Machine (SVM) and Naïve Bayes. The performance of the suggested system has been compared with standard works, and the experimental findings are assessed using several evaluation metrics.
Keywords: security; fake review; semi-supervised learning; ML algorithms; review detection
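A minimal sketch of a classic two-step PU-learning heuristic consistent with the abstract: Naïve Bayes scores the unlabeled pool with unlabeled points treated as negatives, the lowest-scoring points become "reliable negatives", and an SVM is trained on positives versus those. The data, split sizes, and thresholds are synthetic assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y_true = make_classification(n_samples=600, n_features=10, random_state=0)
pos = np.where(y_true == 1)[0]
labeled_pos = pos[:60]                       # only a few positives carry labels
unlabeled = np.setdiff1d(np.arange(len(y_true)), labeled_pos)

# Step 1: Naive Bayes with unlabeled treated as negative; score the unlabeled pool.
y_pu = np.zeros(len(y_true))
y_pu[labeled_pos] = 1
nb = GaussianNB().fit(X, y_pu)
scores = nb.predict_proba(X[unlabeled])[:, 1]
reliable_neg = unlabeled[np.argsort(scores)[: len(labeled_pos)]]

# Step 2: SVM on labeled positives vs. reliable negatives.
X_tr = np.vstack([X[labeled_pos], X[reliable_neg]])
y_tr = np.r_[np.ones(len(labeled_pos)), np.zeros(len(reliable_neg))]
svm = SVC().fit(X_tr, y_tr)
print("accuracy vs. ground truth:", (svm.predict(X) == y_true).mean())
```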
8. Data-Driven Learning Control Algorithms for Unachievable Tracking Problems
Authors: Zeyi Zhang, Hao Jiang, Dong Shen, Samer S. Saab. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 1, pp. 205-218.
For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation of the reference trajectory becomes the objective. This study investigates solutions using the P-type learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation. Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, the desired performance may not be attainable when faced with incomplete information. To address this issue, an extended iterative learning control scheme is introduced, in which the tracking errors are modified through output data sampling; this incorporates a low memory footprint and offers flexibility in learning gain design. The input sequence is shown to converge towards the desired input, resulting in an output that is closest to the given reference in the least-squares sense. Numerical simulations are provided to validate the theoretical findings.
Keywords: data-driven algorithms; incomplete information; iterative learning control; gradient information; unachievable problems
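The P-type scheme discussed above updates the input trial-by-trial as u_{k+1}(t) = u_k(t) + L·e_k(t+1); here is a toy sketch on a first-order linear plant, where the plant, gain, and reference are invented for illustration (with |1 - L·b| < 1 so the iteration contracts).

```python
# P-type iterative learning control on x(t+1) = a*x(t) + b*u(t), y = x.
import numpy as np

N, iterations, L_gain = 50, 30, 0.8
a, b = 0.9, 1.0
r = np.sin(np.linspace(0, 2 * np.pi, N + 1))   # reference trajectory

u = np.zeros(N)
for k in range(iterations):
    x = np.zeros(N + 1)
    for t in range(N):                 # run one trial of the plant
        x[t + 1] = a * x[t] + b * u[t]
    e = r - x                          # tracking error along the trial
    u = u + L_gain * e[1:]             # P-type update with the shifted error
print("final max tracking error:", np.abs(e[1:]).max())
```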
9. Unleashing the Power of Multi-Agent Reinforcement Learning for Algorithmic Trading in the Digital Financial Frontier and Enterprise Information Systems
Authors: Saket Sarin, Sunil K. Singh, Sudhakar Kumar, Shivam Goyal, Brij Bhooshan Gupta, Wadee Alhalabi, Varsha Arya. Computers, Materials & Continua (SCIE, EI), 2024, No. 8, pp. 3123-3138.
In the rapidly evolving landscape of today's digital economy, Financial Technology (Fintech) emerges as a transformative force, propelled by the dynamic synergy between Artificial Intelligence (AI) and Algorithmic Trading. Our in-depth investigation delves into the intricacies of merging Multi-Agent Reinforcement Learning (MARL) and Explainable AI (XAI) within Fintech, aiming to refine Algorithmic Trading strategies. Through meticulous examination, we uncover the nuanced interactions of AI-driven agents as they collaborate and compete within the financial realm, employing sophisticated deep learning techniques to enhance the clarity and adaptability of trading decisions. These AI-infused Fintech platforms harness collective intelligence to unearth trends, mitigate risks, and provide tailored financial guidance, benefiting individuals and enterprises navigating the digital landscape. Our research holds the potential to revolutionize finance, opening doors to fresh avenues for investment and asset management in the digital age. Additionally, our statistical evaluation yields encouraging results, with an accuracy of 0.85, a precision of 0.88, and an F1 score of 0.86, reaffirming the efficacy and reliability of our approach within Fintech.
Keywords: neurodynamic Fintech; multi-agent reinforcement learning; algorithmic trading; digital financial frontier
10. Uniaxial Compressive Strength Prediction for Rock Material in Deep Mine Using Boosting-Based Machine Learning Methods and Optimization Algorithms
Authors: Junjie Zhao, Diyuan Li, Jingtai Jiang, Pingkuang Luo. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 7, pp. 275-304.
Traditional laboratory tests for measuring rock uniaxial compressive strength (UCS) are tedious and time-consuming, so there is a pressing need for more effective methods to determine rock UCS, especially in deep mining environments under high in-situ stress. This study aims to develop an advanced model for predicting the UCS of rock material in deep mining environments by combining three boosting-based machine learning methods with four optimization algorithms, taking the Lead-Zinc mine in Southwest China as the case study. Rock density, P-wave velocity, and the point load strength index are used as input variables, with UCS as the output, yielding twelve hybrid predictive models. Root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R²), and the proportion of samples with an absolute percentage error below 20% (A-20) are selected as the evaluation metrics. Experimental results showed that the hybrid model combining the extreme gradient boosting method and the artificial bee colony algorithm (XGBoost-ABC) achieved satisfactory results on the training dataset and exhibited the best generalization performance on the testing dataset. The values of R², A-20, RMSE, and MAE on the training dataset are 0.98, 1.0, 3.11 MPa, and 2.23 MPa, respectively; the highest R² and A-20 values (0.93 and 0.96) and the smallest RMSE and MAE values (4.78 MPa and 3.76 MPa) are observed on the testing dataset. The proposed hybrid model can be considered a reliable and effective method for predicting rock UCS in deep mines.
Keywords: uniaxial compressive strength; strength prediction; machine learning; optimization algorithm
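A short sketch of the four evaluation metrics named above on toy UCS values; here A-20 is computed as the share of predictions whose absolute percentage error is below 20%, one common reading of the index (the numbers are invented for illustration).

```python
# RMSE, MAE, R², and the A-20 index on toy UCS predictions.
import numpy as np

y_true = np.array([120.0, 85.0, 150.0, 95.0, 110.0])   # toy UCS values in MPa
y_pred = np.array([115.0, 92.0, 140.0, 118.0, 108.0])

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
mae = np.mean(np.abs(y_true - y_pred))
r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
a20 = np.mean(np.abs((y_pred - y_true) / y_true) < 0.20)  # share within ±20% error
print(f"RMSE={rmse:.2f} MPa, MAE={mae:.2f} MPa, R2={r2:.2f}, A-20={a20:.2f}")
```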
11. Marine Predators Algorithm with Deep Learning-Based Leukemia Cancer Classification on Medical Images
Authors: Sonali Das, Saroja Kumar Rout, Sujit Kumar Panda, Pradyumna Kumar Mohapatra, Abdulaziz S. Almazyad, Muhammed Basheer Jasser, Guojiang Xiong, Ali Wagdy Mohamed. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 10, pp. 893-916.
Leukemia is a form of cancer of the blood or bone marrow, characterized by an expansion of white blood cells (WBCs); it primarily affects children and rarely affects adults. Treatment depends on the type of leukemia and the extent to which the cancer has spread through the body, so identifying leukemia at an early stage is vital to providing timely patient care. Medical image-analysis approaches offer safer, quicker, and less costly solutions that avoid the difficulties of invasive procedures, and computer vision (CV) and image-processing techniques are straightforward to generalize and help eliminate human error. Many researchers have applied computer-aided diagnostic methods and machine learning (ML) to laboratory image analysis, aiming to overcome the limitations of late leukemia detection and to determine its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm on medical images. The projected MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images and Faster SqueezeNet with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer for feature extraction. Lastly, a denoising autoencoder (DAE) methodology is executed to accurately detect and classify leukemia, with MPA-based hyperparameter tuning enhancing classification performance. Simulation results are compared with other recent approaches across various measurements, and the MPADL-LCC algorithm exhibits the best results.
Keywords: leukemia cancer; medical imaging; image classification; deep learning; marine predators algorithm
12. Navigating challenges and opportunities of machine learning in hydrogen catalysis and production processes: Beyond algorithm development
Authors: Mohd Nur Ikhmal Salehmin, Sieh Kiong Tiong, Hassan Mohamed, Dallatu Abbas Umar, Kai Ling Yu, Hwai Chyuan Ong, Saifuddin Nomanbhay, Swee Su Lim. Journal of Energy Chemistry (SCIE, EI, CAS, CSCD), 2024, No. 12, pp. 223-252.
With the projected global surge in hydrogen demand, driven by increasing applications and the imperative for low-emission hydrogen, the integration of machine learning (ML) across the hydrogen energy value chain is a compelling avenue. This review uniquely focuses on harnessing the synergy between ML and computational modeling (CM) or optimization tools, as well as integrating multiple ML techniques with CM, for the synthesis of diverse hydrogen evolution reaction (HER) catalysts and various hydrogen production processes (HPPs). Furthermore, it addresses a notable gap in the literature by offering insights, analyzing challenges, and identifying research prospects and opportunities for sustainable hydrogen production. While the literature reflects a promising landscape for ML applications in hydrogen energy domains, transitioning AI-based algorithms from controlled environments to real-world applications poses significant challenges. Hence, this comprehensive review delves into the technical, practical, and ethical considerations associated with the application of ML in HER catalyst development and HPP optimization. Overall, the review provides guidance for unlocking the transformative potential of ML in enhancing prediction efficiency and sustainability in the hydrogen production sector.
Keywords: machine learning; computational modeling; HER catalyst synthesis; hydrogen energy; hydrogen production processes; algorithm development
13. Gradient Optimizer Algorithm with Hybrid Deep Learning Based Failure Detection and Classification in the Industrial Environment
Authors: Mohamed Zarouan, Ibrahim M. Mehedi, Shaikh Abdul Latif, Md. Masud Rana. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 2, pp. 1341-1364.
Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring seamless operation. Industrial processes are getting smarter with the emergence of Industry 4.0: modernized processes are equipped with numerous sensors that collect process data for detecting arising or prevailing faults and monitoring process status. Fault diagnosis of rotating machines plays a major role in engineering and industrial production. Because existing fault-diagnosis approaches depend heavily on professional experience and human knowledge, intelligent fault diagnosis based on deep learning (DL) has attracted researchers' interest, as DL achieves the desired fault classification and automatic feature learning. Therefore, this article designs a Gradient Optimizer Algorithm with Hybrid Deep Learning-based Failure Detection and Classification (GOAHDL-FDC) technique for the industrial environment. The presented technique first applies a continuous wavelet transform (CWT) to pre-process the raw vibration signals of the rotating machinery. Next, a residual network (ResNet18) model is exploited to extract features from the vibration signals, which are then fed into the HDL model for automated fault detection. Finally, GOA-based hyperparameter tuning adjusts the parameter values of the HDL model accurately. An experimental analysis of the GOAHDL-FDC algorithm over a series of simulations highlights its better results under different aspects.
Keywords: fault detection; Industry 4.0; gradient optimizer algorithm; deep learning; rotating machinery; artificial intelligence
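The CWT preprocessing step described above can be sketched with PyWavelets: a 1-D vibration signal becomes a 2-D scalogram that an image backbone such as ResNet18 can consume. The synthetic signal, sampling rate, scales, and Morlet wavelet choice are assumptions for illustration.

```python
# Continuous wavelet transform of a toy vibration signal into a 2-D scalogram.
import numpy as np
import pywt

fs = 1000                                        # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
signal += 0.2 * np.random.default_rng(0).standard_normal(t.size)  # noisy stand-in

scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                       # image-like input for the CNN
print("scalogram shape (scales x time):", scalogram.shape)
```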
14. Internet of Things Enabled DDoS Attack Detection Using Pigeon Inspired Optimization Algorithm with Deep Learning Approach
Authors: Turki Ali Alghamdi, Saud S. Alotaibi. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 4047-4064.
The Internet of Things (IoT) provides better solutions in various fields, namely healthcare, smart transportation, the smart home, etc. Recognizing Denial of Service (DoS) outbreaks in IoT platforms is significant for certifying the accessibility and integrity of IoT systems. Deep learning (DL) models excel at detecting complex, non-linear relationships, allowing them to effectively detect slight deviations from normal IoT activity that may indicate a DoS outbreak. The uninterrupted observation and real-time detection capabilities of DL contribute to accurate and rapid detection, permitting proactive mitigation to be executed and hence securing the IoT network's safety and functionality. Accordingly, this study presents a pigeon-inspired optimization with DL-based attack detection and classification (PIODL-ADC) approach in an IoT environment. The PIODL-ADC approach implements a hyperparameter-tuned DL method for Distributed Denial-of-Service (DDoS) attack detection on an IoT platform. Initially, the model utilizes Z-score normalization to scale input data into a uniform format. To handle the convoluted and adaptive behaviors of IoT traffic, the model employs pigeon-inspired optimization (PIO) for feature selection, considerably enhancing recognition accuracy. An Elman Recurrent Neural Network (ERNN) model is then utilized to recognize and classify DDoS attacks, and reptile search algorithm (RSA)-based hyperparameter tuning improves the precision and robustness of the ERNN method. A series of experimental validations shows that the PIODL-ADC method outperforms existing models, with a maximum accuracy of 99.81%.
Keywords: Internet of Things; denial of service; deep learning; reptile search algorithm; feature selection
15. Adaptive Music Recommendation: Applying Machine Learning Algorithms Using Low Computing Device
Authors: Tianhui Zhang, Xianchen Liu, Zhen Guo, Yuanhao Tian. Journal of Software Engineering and Applications, 2024, No. 11, pp. 817-831.
In the digital music landscape, the accuracy and response speed of music recommendation systems (MRS) are crucial for optimizing user experience. Traditional MRS often rely on high-performance servers for large-scale training to produce recommendations, which can make music recommendation infeasible in areas with substandard hardware. This study evaluates the adaptability of four popular machine learning algorithms (K-means clustering, fuzzy C-means (FCM) clustering, hierarchical clustering, and the self-organizing map (SOM)) on low-computing servers. Our comparative analysis highlights that while K-means and FCM are robust in high-performance settings, they underperform in low-power scenarios, where SOM excels, delivering fast and reliable recommendations with minimal computational overhead. This research addresses a gap in the literature by providing a detailed comparative analysis of MRS algorithms and practical insights for implementing adaptive MRS in technologically diverse environments. We conclude with strategic recommendations for emerging streaming services in resource-constrained settings, emphasizing scalable solutions that balance cost and performance, and advocate the adaptive selection of recommendation algorithms to manage operational costs effectively and accommodate growth.
Keywords: music recommendation; media arts and sciences; artificial intelligence; machine learning algorithms; comparative analysis
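To illustrate why a self-organizing map suits low-compute recommendation, here is a from-scratch numpy SOM on invented track-feature vectors: training is a few vector updates per sample, with no gradients or matrix factorization, and recommendation reduces to sharing a grid cell with the query profile. The grid size, schedules, and features are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
tracks = rng.random((500, 4))            # toy audio features (tempo, energy, ...)
grid_w, grid_h, dim = 6, 6, 4
weights = rng.random((grid_w * grid_h, dim))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

steps = 2000
for step in range(steps):                # online SOM training: cheap, gradient-free
    x = tracks[rng.integers(len(tracks))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))        # best-matching unit
    lr = 0.5 * (1 - step / steps)                            # decaying learning rate
    sigma = 3.0 * (1 - step / steps) + 0.5                   # shrinking neighborhood
    dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-dist2 / (2 * sigma ** 2))                    # neighborhood weights
    weights += lr * h[:, None] * (x - weights)

# Recommend tracks whose BMU matches the BMU of a query listening profile.
query = tracks[0]
bmu_q = np.argmin(((weights - query) ** 2).sum(axis=1))
track_bmus = np.argmin(((tracks[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
print("tracks sharing the query's map cell:", np.where(track_bmus == bmu_q)[0][:10])
```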
16. A Genetic Algorithm-Based Optimized Transfer Learning Approach for Breast Cancer Diagnosis
Authors: Hussain AlSalman, Taha Alfakih, Mabrook Al-Rakhami, Mohammad Mehedi Hassan, Amerah Alabrah. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 12, pp. 2575-2608.
Breast cancer diagnosis through mammography is a pivotal application of medical image-based diagnostics, integral to early detection and effective treatment. While deep learning has significantly advanced the analysis of mammographic images, challenges such as low contrast, image noise, and the high dimensionality of features often degrade model performance. Addressing these challenges, our study introduces a novel method integrating Genetic Algorithms (GA) with pre-trained Convolutional Neural Network (CNN) models to enhance feature selection and classification accuracy. Our approach follows a systematic process: first, widely used CNN architectures (VGG16, VGG19, MobileNet, and DenseNet) extract a broad range of features from the Medical Image Analysis Society (MIAS) mammography dataset. A GA then optimizes these features by selecting the most relevant and least redundant, aiming to overcome the typical pitfalls of high dimensionality. The selected features are used to train several classifiers, including linear and polynomial Support Vector Machines (SVMs), K-Nearest Neighbors, Decision Trees, and Random Forests, enabling a robust evaluation of the method's effectiveness across varied learning algorithms. Our extensive experimental evaluation demonstrates that integrating MobileNet and the GA significantly improves classification accuracy, from 83.33% to 89.58%, underscoring the method's efficacy. This approach not only addresses key issues in breast cancer imaging analysis but also offers a scalable solution potentially applicable to other domains within medical imaging.
Keywords: deep learning; convolutional neural network (CNN); support vector machine (SVM); genetic algorithm (GA); breast cancer; optimized smart diagnosis
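A minimal GA feature-selection sketch in the spirit of the pipeline above: individuals are bitmasks over feature columns (standing in for CNN-extracted features), and fitness is cross-validated SVM accuracy. The population size, rates, and synthetic features are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))      # random initial bitmasks
for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])
        child = np.r_[a[:cut], b[cut:]]              # one-point crossover
        flip = rng.random(X.shape[1]) < 0.02         # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", best.sum(), "CV accuracy:", round(float(fitness(best)), 3))
```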
17. Improved PSO-Extreme Learning Machine Algorithm for Indoor Localization
Authors: Qiu Wanqing, Zhang Qingmiao, Zhao Junhui, Yang Lihua. China Communications (SCIE, CSCD), 2024, No. 5, pp. 113-122.
WiFi fingerprinting localization has been a hot topic in indoor positioning because of its universality and location-related features. The basic assumption of fingerprinting localization is that distance in received signal strength indication (RSSI) space accords with physical distance; therefore, efficiently matching the current RSSI of the user against the RSSI in the fingerprint database is the key to achieving high-accuracy localization. In this paper, a particle swarm optimization-extreme learning machine (PSO-ELM) algorithm is proposed on the basis of conventional fingerprinting localization. First, we collect the RSSI of the experimental area to construct the fingerprint database, and the ELM algorithm is applied in the online stage to determine the correspondence between the location of the terminal and the RSSI it receives. Second, the PSO algorithm is used to optimize the biases and weights of the ELM neural network, yielding globally optimal results. Finally, extensive simulation results show that the proposed algorithm can effectively reduce the mean localization error and improve positioning accuracy compared with the K-Nearest Neighbor (KNN), K-means, and back-propagation (BP) algorithms.
Keywords: extreme learning machine; fingerprinting localization; indoor localization; machine learning; particle swarm optimization
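An ELM trains in closed form once the hidden layer is fixed, which is what makes wrapping it in PSO cheap; below is a from-scratch sketch where a small PSO searches the hidden weights and biases to minimize validation error on invented RSSI fingerprints. All dimensions, data, and PSO coefficients are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ap, hidden = 8, 20                                  # 8 access points, 20 hidden nodes
X = rng.uniform(-90, -30, size=(400, n_ap))           # toy RSSI fingerprints (dBm)
X = (X - X.mean(axis=0)) / X.std(axis=0)              # standardize features
Y = rng.uniform(0, 10, size=(400, 2))                 # toy (x, y) positions in meters

def elm_fit_predict(params, X_tr, Y_tr, X_te):
    W = params[: n_ap * hidden].reshape(n_ap, hidden)
    b = params[n_ap * hidden :]
    H_tr = np.tanh(X_tr @ W + b)
    beta = np.linalg.pinv(H_tr) @ Y_tr                # output weights in closed form
    return np.tanh(X_te @ W + b) @ beta

X_tr, Y_tr, X_va, Y_va = X[:300], Y[:300], X[300:], Y[300:]
dim = n_ap * hidden + hidden

# Minimal PSO over the flattened (W, b) vector.
n_particles = 15
pos = rng.normal(0, 0.1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_err = pos.copy(), np.full(n_particles, np.inf)
gbest, gbest_err = pos[0].copy(), np.inf
for it in range(30):
    for i in range(n_particles):
        pred = elm_fit_predict(pos[i], X_tr, Y_tr, X_va)
        err = np.mean(np.linalg.norm(pred - Y_va, axis=1))   # mean localization error
        if err < pbest_err[i]:
            pbest[i], pbest_err[i] = pos[i].copy(), err
        if err < gbest_err:
            gbest, gbest_err = pos[i].copy(), err
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
print(f"best mean localization error: {gbest_err:.2f} m")
```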
18. Stochastic Ranking Improved Teaching-Learning and Adaptive Grasshopper Optimization Algorithm-Based Clustering Scheme for Augmenting Network Lifetime in WSNs
Authors: N. Tamilarasan, S.B. Lenin, P. Mukunthan, N.C. Sendhilkumar. China Communications (SCIE, CSCD), 2024, No. 9, pp. 159-178.
In Wireless Sensor Networks (WSNs), clustering is widely utilized to increase lifespan with sustained energy stability during data transmission. Several clustering protocols have been devised to extend network lifetime, but most fail to handle the problems of fixed clustering, static rounds, and inadequate Cluster Head (CH) selection criteria, which consume more energy. This paper proposes a Stochastic Ranking Improved Teaching-Learning and Adaptive Grasshopper Optimization Algorithm (SRITL-AGOA)-based clustering scheme for energy stabilization and extended network lifespan. SRITL-AGOA selects CHs based on the weighting of factors such as node mobility degree, neighbour density, distance to the sink, single-hop or multi-hop communication, and Residual Energy (RE), which directly influence the energy consumption of sensor nodes. Specifically, the Grasshopper Optimization Algorithm (GOA) is improved through a tangent-based nonlinear strategy to enhance its global optimization ability, while stochastic ranking (SR) and violation constraint handling (VCH) strategies are embedded into the Teaching-Learning-based Optimization Algorithm (TLOA) to improve its exploitation tendencies. The SR- and VCH-improved TLOA is then embedded into the exploitation phase of AGOA to select better CHs while maintaining a balance between exploration and exploitation. Simulation results confirmed that the proposed SRITL-AGOA improved throughput by 21.86%, network stability by 18.94%, and load balancing by 16.14%, with energy depletion minimized by 19.21%, compared to competitive CH selection approaches.
Keywords: Adaptive Grasshopper Optimization Algorithm (AGOA); Cluster Head (CH); network lifetime; Teaching-Learning-based Optimization Algorithm (TLOA); Wireless Sensor Networks (WSNs)
19. Developing an atmospheric aging evaluation model of acrylic coatings: A semi-supervised machine learning algorithm
Authors: Yiran Li, Zhongheng Fu, Xiangyang Yu, Zhihui Jin, Haiyan Gong, Lingwei Ma, Xiaogang Li, Dawei Zhang. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS, CSCD), 2024, No. 7, pp. 1617-1627.
To study the atmospheric aging of acrylic coatings, a two-year aging exposure experiment was conducted in 13 representative climatic environments in China. An atmospheric aging evaluation model of acrylic coatings was developed based on aging data including 11 environmental factors from 567 cities. A hybrid method of random forest and Spearman correlation analysis was used for dimensionality reduction, lessening the redundancy and multicollinearity of the data set. A semi-supervised, collaboratively trained (co-training) regression model was developed with the environmental factors as input and the low-frequency impedance modulus of the electrochemical impedance spectra of acrylic coatings in 3.5wt% NaCl solution as output. The model improves accuracy compared with a supervised learning baseline (a support vector machine model). It provides a new method for the rapid evaluation of the aging performance of acrylic coatings and may also serve as a reference for evaluating the aging performance of other organic coatings.
Keywords: acrylic coatings; coating aging; atmospheric environment; machine learning
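The random-forest-plus-Spearman reduction step described above can be sketched as a redundancy filter: rank factors by forest importance, then keep a factor only if its rank correlation with every already-kept factor stays under a threshold. The 11 synthetic factors and the 0.8 cutoff are assumptions for illustration.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=11, n_informative=6, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importance = rf.feature_importances_
rho, _ = spearmanr(X)                       # 11 x 11 rank-correlation matrix

ranked = np.argsort(importance)[::-1]       # most to least important factor
selected = []
for f in ranked:
    # Keep f only if it is not strongly rank-correlated with any kept factor.
    if all(abs(rho[f, s]) < 0.8 for s in selected):
        selected.append(f)
print("retained factor indices:", sorted(selected))
```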
20. Deep learning algorithm featuring continuous learning for modulation classifications in wireless networks
Authors: WU Nan, SUN Yu, WANG Xudong. Journal of Terahertz Science and Electronic Information Technology (太赫兹科学与电子信息学报), 2024, No. 2, pp. 209-218.
Although modulation classification (MC) based on deep neural networks can achieve high classification accuracy, catastrophic forgetting occurs when a neural network model continues to learn new tasks. In this paper, we simulate a dynamic wireless communication environment and focus on breaking the learning paradigm of isolated automatic MC, proposing an algorithm for continual automatic MC. First, a memory for storing representative old-task modulation signals is built and employed to constrain the gradient update direction on new tasks during the continual learning stage, ensuring that the loss on old tasks also trends downward. Second, to better simulate a dynamic wireless communication environment, we employ a mini-batch gradient algorithm that is more suitable for continual learning. Finally, the signals in memory can be replayed to further strengthen the characteristics of the old-task signals in the model. Simulation results verify the effectiveness of the method.
Keywords: deep learning (DL); modulation classification; continuous learning; catastrophic forgetting; cognitive radio communications
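A minimal sketch of the memory-replay idea described above: a small buffer of old-task samples is mixed into every new-task mini-batch so each gradient step also reduces old-task loss. The classifier, tensor shapes, and mixing ratio are invented stand-ins for the paper's signal model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 4))  # 4 modulations
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

# Representative samples of the old task kept in memory (classes 0-1).
mem_x, mem_y = torch.randn(128, 64), torch.randint(0, 2, (128,))
# New-task stream (two new modulation classes, 2-3).
new_x, new_y = torch.randn(1024, 64), torch.randint(2, 4, (1024,))

batch, replay = 32, 16
for step in range(0, len(new_x), batch):
    xb, yb = new_x[step : step + batch], new_y[step : step + batch]
    idx = torch.randint(0, len(mem_x), (replay,))
    x = torch.cat([xb, mem_x[idx]])          # mini-batch = new data + replayed memory
    y = torch.cat([yb, mem_y[idx]])
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

with torch.no_grad():
    old_acc = (model(mem_x).argmax(1) == mem_y).float().mean()
print("old-task accuracy after new-task training:", round(old_acc.item(), 3))
```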