In the digital music landscape, the accuracy and response speed of music recommendation systems (MRS) are crucial for user experience. Traditional MRS often rely on high-performance servers for large-scale training to produce recommendation results, which can make music recommendation infeasible in regions where hardware falls short of those requirements. This study evaluates the adaptability of four popular machine learning algorithms (K-means clustering, fuzzy C-means (FCM) clustering, hierarchical clustering, and self-organizing maps (SOM)) on low-computing-power servers. Our comparative analysis highlights that while K-means and FCM are robust in high-performance settings, they underperform in low-power scenarios, where SOM excels, delivering fast and reliable recommendations with minimal computational overhead. This research addresses a gap in the literature by providing a detailed comparative analysis of MRS algorithms, offering practical insights for implementing adaptive MRS in technologically diverse environments. We conclude with strategic recommendations for emerging streaming services in resource-constrained settings, emphasizing the need for scalable solutions that balance cost and performance. The study advocates adaptive selection of recommendation algorithms to manage operational costs effectively and accommodate growth.
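The abstract above includes no code; as an illustrative sketch of why an SOM-based recommender can run with minimal overhead, the NumPy-only example below trains a small map on synthetic track features and recommends tracks that share a map node. All data, the grid size, and the learning schedule here are assumptions for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical catalogue: 200 tracks, 8 audio features each (synthetic).
tracks = rng.random((200, 8))

# Small 4x4 SOM grid: one weight vector per node.
grid_w, grid_h, dim = 4, 4, 8
weights = rng.random((grid_w * grid_h, dim))
coords = np.array([[i, j] for i in range(grid_w) for j in range(grid_h)], float)

def bmu(x):
    """Index of the best-matching unit (closest node) for sample x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# One cheap training sweep: learning rate and neighbourhood shrink over time.
for t, x in enumerate(tracks):
    lr = 0.5 * (1 - t / len(tracks))
    sigma = 2.0 * (1 - t / len(tracks)) + 0.5
    b = bmu(x)
    # Gaussian neighbourhood pull of nearby nodes toward the sample.
    d2 = np.sum((coords - coords[b]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)

# Recommend: tracks mapped to the same node as a query track.
query = 0
same_node = [i for i in range(len(tracks)) if bmu(tracks[i]) == bmu(tracks[query])]
```

Training touches each sample once and inference is a single nearest-node lookup, which is what makes this family of methods attractive on low-power hardware.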
Big data analytic techniques associated with machine learning algorithms are playing an increasingly important role in various application fields, including stock market investment. However, few studies have focused on forecasting daily stock market returns, especially when using powerful machine learning techniques, such as deep neural networks (DNNs), to perform the analyses. DNNs employ various deep learning algorithms based on the combination of network structure, activation function, and model parameters, with their performance depending on the format of the data representation. This paper presents a comprehensive big data analytics process to predict the daily return direction of the SPDR S&P 500 ETF (ticker symbol: SPY) based on 60 financial and economic features. DNNs and traditional artificial neural networks (ANNs) are then deployed over the entire preprocessed but untransformed dataset, along with two datasets transformed via principal component analysis (PCA), to predict the daily direction of future stock market index returns. While controlling for overfitting, a pattern in the classification accuracy of the DNNs is detected and demonstrated as the number of hidden layers increases gradually from 12 to 1000. Moreover, a set of hypothesis testing procedures is implemented on the classification, and the simulation results show that the DNNs using the two PCA-represented datasets give significantly higher classification accuracy than those using the entire untransformed dataset, as well as several other hybrid machine learning algorithms. In addition, the trading strategies guided by the DNN classification process based on PCA-represented data perform slightly better than the others tested, including in a comparison against two standard benchmarks.
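A PCA-represented dataset of the kind described can be produced with a plain SVD. The sketch below is a generic illustration on synthetic features; the 60-column matrix and the 95% variance cutoff are stand-ins, not the paper's exact preprocessing.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the 60 financial/economic features (values are synthetic).
X = rng.normal(size=(500, 60))

# PCA via SVD on the centred data; keep the components that together
# explain ~95% of the variance.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = S ** 2 / np.sum(S ** 2)
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1

# PCA-represented dataset that would be fed to the DNN/ANN classifiers.
Z = Xc @ Vt[:k].T
```

The transformed matrix `Z` has far fewer, decorrelated columns, which is the property the study exploits to boost classification accuracy.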
This article first explains the concepts of artificial intelligence and algorithms separately, then surveys the research status of artificial intelligence and machine learning against the background of the increasing popularity of artificial intelligence, and finally briefly describes machine learning algorithms in the field of artificial intelligence and puts forward appropriate development prospects, in order to provide a theoretical reference for industry insiders.
Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring the seamless operation of the system. Current industrial processes are getting smarter with the emergence of Industry 4.0. Specifically, various modernized industrial processes have been equipped with numerous sensors to collect process-based data, both to find faults arising or prevailing in processes and to monitor process status. Fault diagnosis of rotating machines plays a major role in the engineering field and industrial production. Due to the disadvantages of existing fault diagnosis approaches, which depend greatly on professional experience and human knowledge, intelligent fault diagnosis based on deep learning (DL) has attracted researchers' interest. DL achieves the desired fault classification and automatic feature learning. Therefore, this article designs a Gradient Optimizer Algorithm with Hybrid Deep Learning-based Failure Detection and Classification (GOAHDL-FDC) for the industrial environment. The presented GOAHDL-FDC technique initially applies the continuous wavelet transform (CWT) to preprocess the actual vibrational signals of the rotating machinery. Next, a residual network (ResNet18) model is exploited to extract features from the vibration signals, which are then fed into the HDL model for automated fault detection. Finally, GOA-based hyperparameter tuning is performed to adjust the parameter values of the HDL model accurately. The experimental result analysis of the GOAHDL-FDC algorithm takes place using a series of simulations, and the experimentation outcomes highlight the better results of the GOAHDL-FDC technique under different aspects.
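As a hedged illustration of the CWT preprocessing step (not the paper's implementation), the following NumPy-only sketch builds a Ricker-wavelet scalogram from a synthetic vibration trace, yielding the kind of 2-D time-scale image a ResNet feature extractor would consume.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet of width parameter a."""
    t = np.arange(points) - (points - 1) / 2
    return (2 / (np.sqrt(3 * a) * np.pi ** 0.25)) * (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def cwt(signal, widths):
    """Continuous wavelet transform: one convolution row per scale."""
    out = np.empty((len(widths), len(signal)))
    for i, w in enumerate(widths):
        wav = ricker(min(10 * int(w), len(signal)), w)
        out[i] = np.convolve(signal, wav, mode="same")
    return out

# Synthetic vibration trace: two tones plus noise (illustrative only).
t = np.linspace(0, 1, 1000)
sig = (np.sin(50 * t) + 0.5 * np.sin(200 * t)
       + 0.1 * np.random.default_rng(2).normal(size=t.size))
scalogram = cwt(sig, widths=np.arange(1, 31))  # 2-D input for a CNN
```

Each row of the scalogram is the signal correlated with a wavelet at one scale, so transient fault signatures appear as localized blobs that a CNN can learn to classify.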
Sleep and well-being are intricately linked, and sleep hygiene is paramount for developing mental well-being and resilience. Although widespread, sleep disorders traditionally require an elaborate polysomnography laboratory and a patient stay involving sleep in an unfamiliar environment. Current technologies have allowed various devices to diagnose sleep disorders at home. However, these devices are in various validation stages, with many already receiving approvals from competent authorities. This has captured vast patient-related physiologic data for advanced analytics using artificial intelligence through machine and deep learning applications. This is expected to be integrated with patients' Electronic Health Records and provide individualized prescriptive therapy for sleep disorders in the future.
Blasting is well known as an effective method for fragmenting or moving rock in open-pit mines. To evaluate the quality of blasting, the size distribution of fragmented rock is used as a critical criterion in blasting operations. A high percentage of oversized rocks generated by blasting operations can lead to economic and environmental damage. Therefore, this study proposed four novel intelligent models to predict the size of rock distribution in mine blasting, in order to optimize blasting parameters as well as the efficiency of blasting operations in open-pit mines. Accordingly, a nature-inspired algorithm (i.e., the firefly algorithm, FFA) and different machine learning algorithms (i.e., gradient boosting machine (GBM), support vector machine (SVM), Gaussian process (GP), and artificial neural network (ANN)) were combined for this aim, abbreviated as FFA-GBM, FFA-SVM, FFA-GP, and FFA-ANN, respectively. Subsequently, the predicted results of the abovementioned models were compared with each other using three statistical indicators (i.e., mean absolute error, root-mean-squared error, and correlation coefficient) and the color intensity method. For developing and simulating the size of rock in blasting operations, 136 blasting events with their images were collected and analyzed with the Split-Desktop software. Of these, 111 events were randomly selected for the development and optimization of the models; the remaining 25 blasting events were then applied to confirm the accuracy of the proposed models. Herein, blast design parameters were regarded as input variables to predict the size of rock in blasting operations. Finally, the obtained results revealed that the FFA is a robust optimization algorithm for estimating rock fragmentation in bench blasting. Among the models developed in this study, FFA-GBM provided the highest accuracy in predicting the size of fragmented rocks. The other techniques (i.e., FFA-SVM, FFA-GP, and FFA-ANN) yielded lower computational stability and efficiency. Hence, the FFA-GBM model can be used as a powerful and precise soft computing tool for practical engineering cases aiming to improve the quality of blasting and rock fragmentation.
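For readers unfamiliar with the FFA, a minimal firefly algorithm can be sketched as follows. This is a textbook-style toy on a sphere function: the population size, attractiveness constants, and objective are all assumptions, not the FFA-GBM configuration used in the study.

```python
import numpy as np

def firefly(obj, bounds, n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.2, seed=3):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter ones."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(n, dim))
    F = np.array([obj(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:  # j is brighter (lower cost)
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                    X[i] = np.clip(X[i], lo, hi)
                    F[i] = obj(X[i])
    best = int(np.argmin(F))
    return X[best], F[best]

# Toy objective standing in for a model's hyperparameter loss surface.
x_best, f_best = firefly(lambda x: np.sum(x ** 2),
                         (np.full(2, -5.0), np.full(2, 5.0)))
```

In an FFA-GBM-style hybrid, `obj` would instead evaluate a GBM trained with the candidate hyperparameters on a validation split.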
Computational Intelligence (CI) holds the key to the development of the smart grid, helping to overcome the challenges of planning and optimization through accurate prediction of Renewable Energy Sources (RES). This paper presents an architectural framework for the construction of a hybrid intelligent predictor for solar power. This research investigates the applicability of heterogeneous regression algorithms for 6-hour-ahead solar power availability forecasting using historical data from Rockhampton, Australia. Real-life solar radiation data were collected across six years, from 2005 to 2010, with hourly resolution. We observe that the hybrid prediction method is suitable for reliable smart grid energy management. The prediction reliability of the proposed hybrid method is evaluated in terms of prediction error performance based on statistical and graphical methods. The experimental results show that the proposed hybrid method achieved acceptable prediction accuracy. This hybrid model is applicable as a local predictor in real-life applications, providing 6-hour-ahead predictions to help ensure a constant solar power supply in smart grid operation.
The flying foxes optimization (FFO) algorithm, a newly introduced metaheuristic, is inspired by the survival tactics of flying foxes in heat-wave environments. FFO preferentially selects the best-performing individuals. This tendency causes newly generated solutions to remain closely tied to the current candidate optimum in the search area. To address this issue, this paper introduces an opposition-based-learning search mechanism for the FFO algorithm (IFFO). First, niching techniques are introduced to improve the survival-list method, which not only focuses on the fitness of individuals but also considers the population's crowding degree, enhancing the global search capability. Second, an opposition-based-learning initialization strategy is used to perturb the initial population and elevate its quality. Finally, to verify the superiority of the improved search mechanism, IFFO, FFO, and cutting-edge metaheuristic algorithms are compared and analyzed on a set of test functions. The results show that, compared with the other algorithms, IFFO is characterized by rapid convergence, precise results, and robust stability.
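The opposition-based initialization strategy mentioned above has a simple core: for each random point x in [lb, ub], also evaluate its opposite lb + ub − x and keep the better candidates. A generic sketch follows; the objective and population sizes are illustrative, not the IFFO settings.

```python
import numpy as np

def obl_init(obj, lb, ub, n, seed=4):
    """Opposition-based initialisation: generate n random points, form their
    opposites x_opp = lb + ub - x, and keep the best n of the 2n candidates."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n, len(lb)))
    X_opp = lb + ub - X
    both = np.vstack([X, X_opp])
    fitness = np.array([obj(x) for x in both])
    keep = np.argsort(fitness)[:n]
    return both[keep]

lb, ub = np.full(5, -10.0), np.full(5, 10.0)
pop = obl_init(lambda x: np.sum(x ** 2), lb, ub, n=30)
```

Because each sample and its mirror image are both scored, the retained population starts closer to good regions at no extra sampling cost beyond the additional evaluations.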
Aiming at the problems of low solution accuracy and high decision pressure that a single agent faces in large-scale dynamic task allocation (DTA) with a high-dimensional decision space, this paper combines deep reinforcement learning (DRL) theory with a multi-agent architecture and proposes an improved Multi-Agent Deep Deterministic Policy Gradient algorithm (MADDPG-D2) with a dual experience replay pool and dual noise, to improve the efficiency of DTA. The algorithm builds on the traditional Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm: a double-noise mechanism is introduced to enlarge the action exploration space in the early stage of the algorithm, and a double experience pool is introduced to improve the data utilization rate. At the same time, in order to accelerate the training speed and efficiency of the agents and to solve the cold-start problem of training, prior-knowledge techniques are applied to the training of the algorithm. Finally, the MADDPG-D2 algorithm is compared and analyzed on a digital battlefield of ground-air confrontation. The experimental results show that agents trained by the MADDPG-D2 algorithm achieve higher win rates and average rewards, utilize resources more reasonably, and better overcome the difficulty that traditional single-agent algorithms face when solving problems in high-dimensional decision spaces. The MADDPG-D2 algorithm based on the multi-agent architecture proposed in this paper thus shows superiority and rationality in DTA.
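The dual experience pool can be pictured as two buffers sampled with a fixed mixing ratio. The sketch below is a generic stand-in: the criterion for the second pool, the capacities, and the mixing fraction are assumptions, not the MADDPG-D2 settings.

```python
import random
from collections import deque

class DualReplay:
    """Two-pool replay buffer: ordinary transitions plus a 'priority' pool
    (e.g. transitions from high-reward episodes), sampled at a fixed ratio."""
    def __init__(self, cap=10_000, priority_frac=0.3, seed=5):
        self.normal = deque(maxlen=cap)
        self.priority = deque(maxlen=cap)
        self.frac = priority_frac
        self.rng = random.Random(seed)

    def add(self, transition, priority=False):
        (self.priority if priority else self.normal).append(transition)

    def sample(self, batch):
        # Draw up to frac*batch from the priority pool, fill the rest normally.
        k = min(int(batch * self.frac), len(self.priority))
        out = self.rng.sample(list(self.priority), k)
        out += self.rng.sample(list(self.normal), min(batch - k, len(self.normal)))
        return out

buf = DualReplay()
for i in range(100):
    # (state, action, reward, next_state) placeholders; every 10th is 'priority'.
    buf.add((i, "s", "a", float(i)), priority=(i % 10 == 0))
batch = buf.sample(32)
```

Mixing the two pools lets rare but informative transitions appear in training batches far more often than uniform sampling would allow, which is one way a dual pool improves data utilization.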
Neuromuscular diseases present profound challenges to individuals and healthcare systems worldwide, profoundly impacting motor functions. This research provides a comprehensive exploration of how artificial intelligence (AI) technology is revolutionizing rehabilitation for individuals with neuromuscular disorders. Through an extensive review, this paper elucidates a wide array of AI-driven interventions spanning robotic-assisted therapy, virtual reality rehabilitation, and intricately tailored machine learning algorithms. The aim is to delve into the nuanced applications of AI, unlocking its transformative potential in optimizing personalized treatment plans for those grappling with the complexities of neuromuscular diseases. By examining the multifaceted intersection of AI and rehabilitation, this paper not only contributes to our understanding of cutting-edge advancements but also envisions a future where technological innovations play a pivotal role in alleviating the challenges posed by neuromuscular diseases. From employing neural-fuzzy adaptive controllers for precise trajectory tracking amidst uncertainties to utilizing machine learning algorithms for recognizing patient motor intentions and adapting training accordingly, this research encompasses a holistic approach towards harnessing AI for enhanced rehabilitation outcomes. By embracing the synergy between AI and rehabilitation, we pave the way for a future where individuals with neuromuscular disorders can access tailored, effective, and technology-driven interventions to improve their quality of life and functional independence.
Background: Deep learning algorithms (DLA) have become prominent as an application of Artificial Intelligence (AI) techniques since 2010. This paper introduces DLA to predict the relationship between individual tree height (ITH) and diameter at breast height (DBH). Methods: A set of 2,024 pairs of individual height and diameter-at-breast-height measurements was obtained from 150 sample plots located in even-aged, pure stands of Anatolian Crimean Pine (Pinus nigra J.F. Arnold ssp. pallasiana (Lamb.) Holmboe) in the Konya Forest Enterprise. The present study primarily investigated the capability and usability of DLA models for predicting the relationship between ITH and DBH sampled from stands with different growth structures. Eighty different DLA models, involving alternative numbers of hidden layers and neurons, were trained and compared to determine the best predictive network structure. Results: The DLA model with 9 hidden layers and 100 neurons was the best predictive network model compared with the other DLA, Artificial Neural Network, Nonlinear Regression, and Nonlinear Mixed Effect models. The alternative with 100 neurons and 9 hidden layers produced the best ITH predictions, with root mean squared error (RMSE, 0.5575), percent root mean squared error (RMSE%, 4.9504%), Akaike information criterion (AIC, -998.9540), Bayesian information criterion (BIC, 884.6591), fit index (FI, 0.9436), average absolute error (AAE, 0.4077), maximum absolute error (max. AE, 2.5106), bias (0.0057), and percent bias (Bias%, 0.0502%). In addition, these predictive results with DLAs were further validated by equivalence tests, which showed that the DLA models successfully predicted tree height in the independent dataset. Conclusion: This study has emphasized the capability of DLA models, a novel artificial intelligence technique, for predicting the relationship between individual tree height and diameter at breast height, information that can be required for the management of forests.
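The goodness-of-fit statistics reported above (RMSE, RMSE%, FI, AAE, maximum absolute error, bias) follow standard formulas. A small helper with illustrative observed/predicted heights (not the study's data):

```python
import numpy as np

def fit_stats(obs, pred):
    """Standard goodness-of-fit statistics for height-diameter models."""
    e = obs - pred
    rmse = float(np.sqrt(np.mean(e ** 2)))
    return {
        "RMSE": rmse,
        "RMSE%": 100 * rmse / float(np.mean(obs)),
        "AAE": float(np.mean(np.abs(e))),
        "max.AE": float(np.max(np.abs(e))),
        "Bias": float(np.mean(e)),
        "Bias%": 100 * float(np.mean(e)) / float(np.mean(obs)),
        # Fit index: 1 - SS_res / SS_tot (analogue of R^2).
        "FI": 1 - float(np.sum(e ** 2) / np.sum((obs - np.mean(obs)) ** 2)),
    }

# Illustrative observed and predicted tree heights, in metres (made up).
obs = np.array([10.2, 11.5, 9.8, 12.1, 10.9])
pred = np.array([10.0, 11.8, 9.5, 12.0, 11.2])
stats = fit_stats(obs, pred)
```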
For a class of non-uniform output-sampling hybrid systems with actuator faults and bounded disturbances, an iterative learning fault diagnosis algorithm is proposed. First, in order to measure the impact of a fault on the system between consecutive output sampling instants, the actual fault function is transformed into an equivalent fault model using the integral mean value theorem; the non-uniform sampling hybrid system is then converted to a continuous system with time-varying delay based on the output delay method. Afterwards, an observer-based fault diagnosis filter with a virtual fault is designed to estimate the equivalent fault, and an iterative learning regulation algorithm is chosen to update the virtual fault repeatedly so that it approximates the actual equivalent fault after a number of iterative learning trials; the algorithm can thus detect and estimate system faults adaptively. Simulation results for an electro-mechanical control system model with different types of faults illustrate the feasibility and effectiveness of this algorithm.
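The iterative learning update of the virtual fault can be illustrated with a P-type law v_{k+1} = v_k + γ·e_k, where e_k is the residual between the observed and estimated fault effect. In the toy below, the residual is taken directly against a known fault signal, standing in for the output of the observer-based filter (the fault shape, gain, and trial count are all illustrative assumptions).

```python
import numpy as np

# Toy setting: actual equivalent fault over one trial horizon (unknown to the diagnoser).
T = 50
true_fault = np.sin(np.linspace(0, np.pi, T))

# Virtual fault estimate, refined over repeated trials by a P-type learning law.
v = np.zeros(T)
gamma = 0.5  # learning gain
for k in range(30):            # iterative learning trials
    e = true_fault - v         # residual the fault-diagnosis filter would output
    v = v + gamma * e          # v_{k+1} = v_k + gamma * e_k

err = float(np.max(np.abs(true_fault - v)))
```

With 0 < γ < 2 this contraction drives the residual toward zero geometrically (here by a factor of 0.5 per trial), which mirrors how the virtual fault converges to the actual equivalent fault over trials.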
Objective: To develop a deep learning algorithm for the pathological classification of chronic gastritis and assess its performance using whole-slide images (WSIs). Methods: We retrospectively collected 1,250 gastric biopsy specimens (1,128 gastritis, 122 normal mucosa) from PLA General Hospital. A deep learning algorithm based on the DeepLab v3 (ResNet-50) architecture was trained and validated using 1,008 WSIs and 100 WSIs, respectively. The diagnostic performance of the algorithm was tested on an independent test set of 142 WSIs, with the pathologists' consensus diagnosis as the gold standard. Results: Receiver operating characteristic (ROC) curves were generated for chronic superficial gastritis (CSuG), chronic active gastritis (CAcG), and chronic atrophic gastritis (CAtG) in the test set. The areas under the ROC curves (AUCs) of the algorithm for CSuG, CAcG, and CAtG were 0.882, 0.905, and 0.910, respectively. The sensitivity and specificity of the deep learning algorithm for the classification of CSuG, CAcG, and CAtG were 0.790 and 1.000 (accuracy 0.880), 0.985 and 0.829 (accuracy 0.901), and 0.952 and 0.992 (accuracy 0.986), respectively. The overall predicted accuracy for the three different types of gastritis was 0.867. By flagging the suspicious regions identified by the algorithm in a WSI, a more transparent and interpretable diagnosis can be generated. Conclusion: The deep learning algorithm achieved high accuracy for chronic gastritis classification using WSIs. By pre-highlighting the different gastritis regions, it might be used as an auxiliary diagnostic tool to improve the work efficiency of pathologists.
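Per-class sensitivity, specificity, and accuracy of the kind reported above reduce to confusion-matrix counts. The helper below uses illustrative counts only; the paper reports rates, not raw counts, so these numbers are made up.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sens = tp / (tp + fn)                 # true-positive rate
    spec = tn / (tn + fp)                 # true-negative rate
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# Hypothetical counts for one class on a 142-slide test set (illustrative).
sens, spec, acc = sens_spec(tp=79, fn=21, tn=42, fp=0)
```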
In order to effectively resolve the coordination and optimization of power network planning, and on the basis of introducing the concept of a power intelligence center (PIC), the key factors of power flow, line investment, and load that impact the generation sector, transmission sector, and dispatching center in the PIC were analyzed, and a multi-objective coordination optimal model for a new power intelligence center (NPIC) was established. To ensure the reliability and coordination of the power grid and reduce investment cost, two aspects were optimized. An evolutionary algorithm was introduced to solve the optimal power flow problem, and the fitness function was improved to ensure the minimum cost of power generation. The gray particle swarm optimization (GPSO) algorithm was used to forecast load accurately, ensuring a network with high reliability. On this basis, a multi-objective coordination optimal model, more practical and in line with the needs of the electricity market, was proposed; the coordination model was then effectively solved through an improved particle swarm optimization algorithm, and the corresponding algorithm was obtained. The optimization of the IEEE 30-node system shows that the evolutionary algorithm can effectively solve the optimal power flow problem. The average load forecast of GPSO is 26.97 MW, with an error of 0.34 MW compared with the actual load, so the algorithm has higher forecasting accuracy. The multi-objective coordination optimal model for the NPIC can effectively handle the coordination and optimization of power network planning.
Human fall detection (FD) plays an important part in creating sensor-based alarm systems, enabling physical therapists to minimize the effect of fall events and save human lives. Generally, elderly people suffer from several diseases, and a fall is a common situation that can occur at any time. In this view, this paper presents an Improved Archimedes Optimization Algorithm with Deep Learning Empowered Fall Detection (IAOA-DLFD) model to identify fall/non-fall events. The proposed IAOA-DLFD technique comprises different levels of pre-processing to improve the input image quality. Besides, the IAOA with a Capsule Network-based feature extractor is derived to produce an optimal set of feature vectors. In addition, the IAOA is used to significantly boost the overall FD performance through an optimal choice of CapsNet hyperparameters. Lastly, a radial basis function (RBF) network is applied to determine the proper class labels of the test images. To showcase the enhanced performance of the IAOA-DLFD technique, a wide range of experiments were executed, and the outcomes demonstrate the enhanced detection performance of the IAOA-DLFD approach over recent methods, with an accuracy of 0.997.
Aiming at the problems that a fuzzy neural network controller has heavy computation and lag, a T-S norm Fuzzy Neural Network Control based on a hybrid learning algorithm was proposed. An immune genetic algorithm (IGA) was used to optimize the parameters of the membership functions (MFs) offline, and the neural network was used to adjust the parameters of the MFs online to enhance the response of the controller. Moreover, the latter network was used to adjust the fuzzy rules automatically, reducing the computation of the neural network and improving the robustness and adaptability of the controller, so that the controller can work well even when the underwater vehicle operates in a hostile ocean environment. Finally, experiments were carried out on the "XX" mini autonomous underwater vehicle (mini-AUV) in a tank. The results showed that this controller achieves great improvements in response and overshoot compared with traditional controllers.
A novel algorithm is presented for supervised inductive learning by integrating a genetic algorithm with a bottom-up induction process. The hybrid learning algorithm has been implemented in C on a personal computer (386DX/40). The performance of the algorithm has been evaluated by applying it to the 11-multiplexer problem, and the results show that the algorithm's accuracy is higher than that of the others [5, 12, 13].
Solar energy is a widely used type of renewable energy, and photovoltaic arrays are used to harvest it. The major goal, in harvesting the maximum possible power, is to operate the system at its maximum power point (MPP). If the irradiation conditions are uniform, the P-V curve of the PV array has only one peak, which is its MPP. But when the irradiation conditions are non-uniform, the P-V curve has multiple peaks, each representing an MPP for a specific irradiation condition. The highest of all the peaks is called the Global Maximum Power Point (GMPP). Under uniform irradiation conditions there is no partial shading, but changing irradiance causes a shading effect, called partial shading. Many conventional and soft computing techniques have been used to harvest solar energy. These techniques perform well under uniform and weak shading conditions but fail when shading conditions are strong. In this paper, a new method is proposed that uses a machine-learning-based technique called Opposition-Based Learning (OBL) to deal with partial shading conditions. Simulation studies on different cases of partial shading have proven this technique effective in attaining the MPP.
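The GMPP-versus-local-MPP distinction can be seen on a synthetic two-peak P-V curve: a tracker that stops at the first peak misses the global one, which is exactly the failure mode a global search method targets. The curve shape and voltages below are illustrative, not from the paper.

```python
import numpy as np

# Synthetic P-V curve with two peaks, mimicking partial shading (illustrative).
v = np.linspace(0, 40, 400)                              # voltage sweep, V
p = (60 * np.exp(-((v - 12) ** 2) / 18)                  # shaded-string peak, W
     + 100 * np.exp(-((v - 30) ** 2) / 25))              # true global peak, W

# A hill-climber started at low voltage stalls on the local MPP near 12 V;
# scanning the whole curve (what a global method aims for) finds the GMPP near 30 V.
local_peak_v = float(v[np.argmax(p[v < 20])])
gmpp_v = float(v[np.argmax(p)])
```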
The Industrial Internet of Things (IIoT) has brought numerous benefits, such as improved efficiency, smart analytics, and increased automation. However, it also exposes connected devices, users, applications, and the data generated to cyber security threats that need to be addressed. This work investigates hybrid cyber threats (HCTs), which are now operating on an entirely new level with the increasingly adopted IIoT. This work focuses on emerging methods to model, detect, and defend against hybrid cyber attacks using machine learning (ML) techniques. Specifically, a novel ML-based HCT modelling and analysis framework was proposed, in which L1 regularisation and Random Forest were used to cluster features and analyse the importance and impact of each feature in both individual threats and HCTs. A grey relation analysis-based model was employed to construct the correlation between IIoT components and different threats.
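The L1-regularisation step can be sketched without any ML library: coordinate descent with soft-thresholding zeroes out uninformative features, leaving a reduced set for a subsequent Random Forest stage. This is a generic toy on synthetic data, not the framework's actual pipeline.

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """L1-regularised linear fit via coordinate descent (soft-thresholding).
    Coefficients driven to zero mark features that can be dropped."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]   # residual excluding feature j
            rho = X[:, j] @ r
            z = X[:, j] @ X[:, j]
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return w

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 10))               # 10 candidate threat features (synthetic)
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=200)  # only features 0 and 3 matter
w = lasso_cd(X, y, lam=50.0)
selected = np.flatnonzero(np.abs(w) > 1e-6)  # surviving feature indices
```

The surviving columns of `X` would then be handed to the ensemble stage for importance analysis.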
Hybridizing metaheuristic algorithms involves synergistically combining different optimization techniques to effectively address complex and challenging optimization problems. This approach aims to leverage the strengths of multiple algorithms, enhancing solution quality, convergence speed, and robustness, thereby offering a more versatile and efficient means of solving intricate real-world optimization tasks. In this paper, we introduce a hybrid algorithm that amalgamates three distinct metaheuristics: the Beluga Whale Optimization (BWO), the Honey Badger Algorithm (HBA), and the Jellyfish Search (JS) optimizer. The proposed hybrid algorithm is referred to as BHJO. Through this fusion, the BHJO algorithm aims to leverage the strengths of each optimizer. Before this hybridization, we thoroughly examined the exploration and exploitation capabilities of the BWO, HBA, and JS metaheuristics, as well as their ability to strike a balance between exploration and exploitation. This meticulous analysis allowed us to identify the pros and cons of each algorithm, enabling us to combine them in a novel hybrid approach that capitalizes on their respective strengths for enhanced optimization performance. In addition, the BHJO algorithm incorporates Opposition-Based Learning (OBL) to harness the advantages offered by this technique, leveraging its diverse exploration, accelerated convergence, and improved solution quality to enhance the overall performance and effectiveness of the hybrid algorithm. Moreover, the performance of the BHJO algorithm was evaluated across a range of both unconstrained and constrained optimization problems, providing a comprehensive assessment of its efficacy and applicability in diverse problem domains. Similarly, the BHJO algorithm was subjected to a comparative analysis with several renowned algorithms, where mean and standard deviation values were utilized as evaluation metrics. This rigorous comparison aimed to assess the performance of the BHJO algorithm against its counterparts, shedding light on its effectiveness and reliability in solving optimization problems. Finally, the obtained numerical statistics underwent rigorous analysis using the Friedman test followed by Dunn's post hoc test. The resulting numerical values revealed the BHJO algorithm's competitiveness in tackling intricate optimization problems, affirming its capability to deliver favorable outcomes in challenging scenarios.
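The Friedman test underlying the final comparison ranks the algorithms per problem and aggregates the ranks. The sketch below computes the statistic on a made-up mean-error table (all numbers illustrative, not the BHJO results).

```python
import numpy as np

# Hypothetical mean-error table: rows = test functions, cols = algorithms
# (first column plays the role of the hybrid; all values invented).
errors = np.array([
    [0.12, 0.30, 0.25, 0.41],
    [0.05, 0.11, 0.09, 0.20],
    [0.33, 0.35, 0.60, 0.52],
    [0.01, 0.02, 0.03, 0.05],
])

# Rank algorithms within each problem (1 = best), then average per algorithm.
ranks = np.argsort(np.argsort(errors, axis=1), axis=1) + 1
mean_ranks = ranks.mean(axis=0)

# Friedman chi-square statistic: 12n/(k(k+1)) * (sum(R_bar^2) - k(k+1)^2/4).
n, k = errors.shape
chi2 = 12 * n / (k * (k + 1)) * (np.sum(mean_ranks ** 2) - k * (k + 1) ** 2 / 4)
```

A large statistic rejects the hypothesis that all algorithms perform equally, after which a post hoc test (such as Dunn's) identifies which pairwise differences drive the result.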
文摘In the digital music landscape, the accuracy and response speed of music recommendation systems (MRS) are crucial for user experience optimization. Traditional MRS often relies on the use of high-performance servers for large-scale training to produce recommendation results, which may result in the inability to achieve music recommendation in some areas due to substandard hardware conditions. This study evaluates the adaptability of four popular machine learning algorithms (K-means clustering, fuzzy C-means (FCM) clustering, hierarchical clustering, and self-organizing map (SOM)) on low-computing servers. Our comparative analysis highlights that while K-means and FCM are robust in high-performance settings, they underperform in low-power scenarios where SOM excels, delivering fast and reliable recommendations with minimal computational overhead. This research addresses a gap in the literature by providing a detailed comparative analysis of MRS algorithms, offering practical insights for implementing adaptive MRS in technologically diverse environments. We conclude with strategic recommendations for emerging streaming services in resource-constrained settings, emphasizing the need for scalable solutions that balance cost and performance. This study advocates an adaptive selection of recommendation algorithms to manage operational costs effectively and accommodate growth.
Abstract: Big data analytic techniques associated with machine learning algorithms are playing an increasingly important role in various application fields, including stock market investment. However, few studies have focused on forecasting daily stock market returns, especially when powerful machine learning techniques, such as deep neural networks (DNNs), are used to perform the analyses. DNNs employ various deep learning algorithms based on the combination of network structure, activation function, and model parameters, with their performance depending on the format of the data representation. This paper presents a comprehensive big data analytics process to predict the daily return direction of the SPDR S&P 500 ETF (ticker symbol: SPY) based on 60 financial and economic features. DNNs and traditional artificial neural networks (ANNs) are then deployed over the entire preprocessed but untransformed dataset, along with two datasets transformed via principal component analysis (PCA), to predict the daily direction of future stock market index returns. While controlling for overfitting, a pattern in the classification accuracy of the DNNs is detected and demonstrated as the number of hidden layers increases gradually from 12 to 1000. Moreover, a set of hypothesis-testing procedures is implemented on the classification, and the simulation results show that the DNNs using the two PCA-represented datasets give significantly higher classification accuracy than those using the entire untransformed dataset, as well as several other hybrid machine learning algorithms. In addition, the trading strategies guided by the DNN classification process based on PCA-represented data perform slightly better than the others tested, including in a comparison against two standard benchmarks.
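The PCA step described above has a compact form via the singular value decomposition; the sketch below (illustrative only, not the paper's pipeline) projects a centered feature matrix onto its top components and reports the fraction of variance retained:

```python
import numpy as np

def pca_transform(X, n_components):
    """Project X onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T                 # scores in the reduced space
    explained = (S[:n_components] ** 2).sum() / (S ** 2).sum()
    return Z, explained
```

A classifier (DNN or ANN) would then be trained on Z instead of the raw 60-feature matrix.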
Abstract: This article first explains the concepts of artificial intelligence and algorithms separately, then reviews the research status of artificial intelligence and machine learning against the backdrop of AI's growing popularity, and finally briefly describes machine learning algorithms in the field of artificial intelligence and puts forward development prospects, in order to provide a theoretical reference for industry insiders.
Funding: The Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia, funded this project under Grant No. G:651-135-1443.
Abstract: Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring seamless operation. Industrial processes are becoming smarter with the emergence of Industry 4.0; in particular, many modernized processes are now equipped with numerous sensors that collect process data for finding faults arising or prevailing in processes and for monitoring process status. Fault diagnosis of rotating machines plays a major role in engineering and industrial production. Because existing fault-diagnosis approaches depend heavily on professional experience and human knowledge, intelligent fault diagnosis based on deep learning (DL) has attracted researchers' interest; DL achieves the desired fault classification together with automatic feature learning. This article therefore designs a Gradient Optimizer Algorithm with Hybrid Deep Learning-based Failure Detection and Classification (GOAHDL-FDC) for the industrial environment. The presented GOAHDL-FDC technique first applies the continuous wavelet transform (CWT) to preprocess the raw vibration signals of the rotating machinery. Next, a residual network (ResNet18) model extracts features from the vibration signals, which are then fed into the HDL model for automated fault detection. Finally, GOA-based hyperparameter tuning adjusts the parameter values of the HDL model accurately. The experimental analysis of the GOAHDL-FDC algorithm uses a series of simulations, and the outcomes highlight the better results of the GOAHDL-FDC technique under different aspects.
Abstract: Sleep and well-being are intricately linked, and sleep hygiene is paramount for developing mental well-being and resilience. Although widespread, sleep disorders traditionally require an elaborate polysomnography laboratory and an overnight patient stay in an unfamiliar sleep environment. Current technologies have allowed various devices to diagnose sleep disorders at home. These devices are at various validation stages, with many already approved by competent authorities, and they have captured vast patient-related physiologic data for advanced analytics using artificial intelligence through machine and deep learning applications. This is expected to be integrated with patients' Electronic Health Records in the future to provide individualized prescriptive therapy for sleep disorders.
Funding: supported by the Center for Mining, Electro-Mechanical Research of Hanoi University of Mining and Geology (HUMG), Hanoi, Vietnam; the Hunan Provincial Department of Education General Project (19C1744); the Hunan Province Science Foundation for Youth Scholars of China (2018JJ3510); and the Innovation-Driven Project of Central South University (2020CX040).
Abstract: Blasting is well known as an effective method for fragmenting or moving rock in open-pit mines. To evaluate the quality of blasting, the size distribution of fragmented rock is used as a critical criterion, since a high percentage of oversized rocks generated by blasting operations can lead to economic and environmental damage. This study therefore proposed four novel intelligent models to predict the size of rock distribution in mine blasting in order to optimize blasting parameters, as well as the efficiency of blasting operations in open mines. Accordingly, a nature-inspired algorithm (the firefly algorithm, FFA) was combined with different machine learning algorithms (gradient boosting machine (GBM), support vector machine (SVM), Gaussian process (GP), and artificial neural network (ANN)), abbreviated as FFA-GBM, FFA-SVM, FFA-GP, and FFA-ANN, respectively. Predicted results from these models were then compared with each other using three statistical indicators (mean absolute error, root-mean-squared error, and correlation coefficient) and the color intensity method. For developing and simulating rock size in blasting operations, 136 blasting events with their images were collected and analyzed with the Split-Desktop software. Of these, 111 events were randomly selected for the development and optimization of the models, and the remaining 25 blasting events were used to confirm the accuracy of the proposed models. Blast design parameters were regarded as input variables to predict the size of rock in blasting operations. The obtained results revealed that the FFA is a robust optimization algorithm for estimating rock fragmentation in bench blasting. Among the models developed in this study, FFA-GBM provided the highest accuracy in predicting the size of fragmented rocks, while the other techniques (FFA-SVM, FFA-GP, and FFA-ANN) yielded lower computational stability and efficiency. Hence, the FFA-GBM model can be used as a powerful and precise soft computing tool for practical engineering cases aiming to improve the quality of blasting and rock fragmentation.
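For orientation, the firefly movement rule at the heart of the FFA can be sketched in a few lines; the bounds, coefficients, and noise schedule below are illustrative assumptions, not the study's tuned configuration:

```python
import numpy as np

def firefly(fitness, dim, n=20, iters=100, seed=0):
    """Bare-bones firefly algorithm (minimization): dimmer fireflies move
    toward brighter ones with distance-decayed attractiveness."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))       # illustrative search bounds
    beta0, gamma, alpha = 1.0, 0.01, 0.2   # illustrative FA coefficients
    for t in range(iters):
        f = np.apply_along_axis(fitness, 1, x)
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:            # firefly j is brighter: move i toward j
                    beta = beta0 * np.exp(-gamma * ((x[i] - x[j]) ** 2).sum())
                    x[i] = x[i] + beta * (x[j] - x[i]) \
                        + alpha * (0.97 ** t) * rng.normal(size=dim)
    f = np.apply_along_axis(fitness, 1, x)
    return x[f.argmin()], f.min()
```

In the hybrid models above, such a loop would search over the hyperparameters of GBM, SVM, GP, or ANN rather than over a test function.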
Abstract: Computational Intelligence (CI) holds the key to the development of the smart grid, helping to overcome its planning and optimization challenges through accurate prediction of Renewable Energy Sources (RES). This paper presents an architectural framework for constructing a hybrid intelligent predictor for solar power. The research investigates the applicability of heterogeneous regression algorithms for 6-hour-ahead solar power availability forecasting using historical data from Rockhampton, Australia. Real-life solar radiation data was collected at hourly resolution across six years, from 2005 to 2010. We observe that the hybrid prediction method is suitable for reliable smart grid energy management. The prediction reliability of the proposed hybrid method is evaluated in terms of prediction error performance using statistical and graphical methods, and the experimental results show that it achieved acceptable prediction accuracy. This hybrid model is applicable as a local predictor in real-life applications requiring 6-hour-ahead prediction to ensure a constant solar power supply in smart grid operation.
Funding: supported by the Ningxia Natural Science Foundation Project (2023AAC03361).
Abstract: The flying foxes optimization (FFO) algorithm, a newly introduced metaheuristic, is inspired by the survival tactics of flying foxes in heat-wave environments. FFO preferentially selects the best-performing individuals, a tendency that causes newly generated solutions to remain closely tied to the current candidate optimum in the search area. To address this issue, this paper introduces an opposition-based learning search mechanism for the FFO algorithm (IFFO). First, niching techniques improve the survival-list method, which considers not only the adaptability of individuals but also the population's crowding degree, enhancing global search capability. Second, an opposition-based learning initialization strategy perturbs the initial population to elevate its quality. Finally, to verify the superiority of the improved search mechanism, IFFO, FFO, and cutting-edge metaheuristic algorithms are compared and analyzed on a set of test functions. The results prove that, compared with the other algorithms, IFFO is characterized by rapid convergence, precise results, and robust stability.
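The opposition-based learning perturbation mentioned above has a one-line core: for a candidate x in [lb, ub], its opposite is lb + ub − x. A hedged sketch of the initialization (the fitness function and bounds are placeholders):

```python
import numpy as np

def obl_init(pop, lb, ub, fitness):
    """Evaluate each candidate and its opposite; keep the better half."""
    opp = lb + ub - pop                     # opposite of every candidate
    both = np.vstack([pop, opp])
    scores = np.apply_along_axis(fitness, 1, both)
    # Retain the len(pop) fittest candidates from the combined set.
    return both[np.argsort(scores)[: len(pop)]]
```

The doubled evaluation budget at initialization buys a population that already covers both "sides" of the search space, which is the quality boost the abstract refers to.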
Funding: This research was funded by the National Natural Science Foundation of China, Grant Number 62106283.
Abstract: Aiming at the problems of low solution accuracy and high decision pressure when a single agent faces large-scale dynamic task allocation (DTA) in a high-dimensional decision space, this paper combines deep reinforcement learning (DRL) theory with a multi-agent architecture and proposes an improved Multi-Agent Deep Deterministic Policy Gradient algorithm (MADDPG-D2) with a dual experience replay pool and dual noise to improve the efficiency of DTA. The algorithm builds on the traditional MADDPG algorithm: a double-noise mechanism enlarges the action exploration space in the early stage, and a double experience pool improves the data utilization rate; meanwhile, to accelerate the training speed and efficiency of the agents and to solve the cold-start problem of training, prior-knowledge techniques are applied during training. Finally, MADDPG-D2 is compared and analyzed on a digital battlefield of ground-air confrontation. The experimental results show that agents trained by MADDPG-D2 achieve higher win rates and average rewards, utilize resources more reasonably, and better overcome the difficulty that traditional single-agent algorithms face in high-dimensional decision spaces. The multi-agent MADDPG-D2 algorithm proposed in this paper thus shows clear superiority and rationality for DTA.
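One plausible reading of the double experience pool is a pair of buffers split by transition quality, sampled in a fixed mix; the threshold rule and mixing ratio below are assumptions for illustration, not the paper's exact design:

```python
import random
from collections import deque

class DualReplayBuffer:
    """Two pools: high-reward transitions in a priority pool, the rest in a
    regular pool; batches mix both to improve data utilization."""

    def __init__(self, capacity=10000, threshold=1.0, ratio=0.5):
        self.regular = deque(maxlen=capacity)
        self.priority = deque(maxlen=capacity)
        self.threshold = threshold   # reward cutoff for the priority pool
        self.ratio = ratio           # fraction of each batch from priority

    def add(self, transition, reward):
        pool = self.priority if reward >= self.threshold else self.regular
        pool.append(transition)

    def sample(self, batch_size):
        n_pri = min(int(batch_size * self.ratio), len(self.priority))
        batch = random.sample(list(self.priority), n_pri)
        batch += random.sample(list(self.regular),
                               min(batch_size - n_pri, len(self.regular)))
        return batch
```

Mixing pools this way keeps rare high-reward experience from being drowned out by the bulk of ordinary transitions during critic updates.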
Abstract: Neuromuscular diseases present profound challenges to individuals and healthcare systems worldwide, deeply impacting motor functions. This research provides a comprehensive exploration of how artificial intelligence (AI) technology is revolutionizing rehabilitation for individuals with neuromuscular disorders. Through an extensive review, this paper elucidates a wide array of AI-driven interventions spanning robotic-assisted therapy, virtual reality rehabilitation, and intricately tailored machine learning algorithms. The aim is to delve into the nuanced applications of AI, unlocking its transformative potential in optimizing personalized treatment plans for those grappling with the complexities of neuromuscular diseases. By examining the multifaceted intersection of AI and rehabilitation, this paper not only contributes to our understanding of cutting-edge advancements but also envisions a future where technological innovations play a pivotal role in alleviating the challenges posed by neuromuscular diseases. From employing neural-fuzzy adaptive controllers for precise trajectory tracking amidst uncertainties to utilizing machine learning algorithms for recognizing patient motor intentions and adapting training accordingly, this research encompasses a holistic approach toward harnessing AI for enhanced rehabilitation outcomes. By embracing the synergy between AI and rehabilitation, we pave the way for a future where individuals with neuromuscular disorders can access tailored, effective, and technology-driven interventions to improve their quality of life and functional independence.
Abstract: Background: Deep Learning Algorithms (DLA) have become prominent as an application of Artificial Intelligence (AI) techniques since 2010. This paper introduces DLAs to predict the relationships between individual tree height (ITH) and diameter at breast height (DBH). Methods: A set of 2,024 pairs of individual height and diameter at breast height measurements was obtained from 150 sample plots located in even-aged, pure stands of Anatolian Crimean pine (Pinus nigra J.F. Arnold ssp. pallasiana (Lamb.) Holmboe) in the Konya Forest Enterprise. The study primarily investigated the capability and usability of DLA models for predicting the relationships between ITH and DBH sampled from stands with different growth structures. Eighty DLA models, covering alternative numbers of hidden layers and neurons, were trained and compared to determine the optimal, most predictive DLA network structure. Results: The DLA model with 9 hidden layers and 100 neurons was the best predictive network model compared with the other DLA, Artificial Neural Network, Nonlinear Regression, and Nonlinear Mixed Effect models. The alternative of 100 neurons and 9 hidden layers produced the best predictive ITH values, with root mean squared error (RMSE, 0.5575), percent root mean squared error (RMSE%, 4.9504%), Akaike information criterion (AIC, -998.9540), Bayesian information criterion (BIC, 884.6591), fit index (FI, 0.9436), average absolute error (AAE, 0.4077), maximum absolute error (max. AE, 2.5106), bias (0.0057), and percent bias (Bias%, 0.0502%). These predictive results were further validated by equivalence tests, which showed that the DLA models successfully predicted tree height in the independent dataset. Conclusion: This study has emphasized the capability of DLA models, a novel artificial intelligence technique, for predicting the relationships between individual tree height and diameter at breast height, information that can be required for the management of forests.
Funding: supported by the National Natural Science Foundation of China (61273070, 61203092); the Enterprise-College-Institute Cooperative Project of Jiangsu Province (BY2015019-21); the 111 Project (B12018); and the Fundamental Research Funds for the Central Universities (JUSRP51733B).
Abstract: For a class of non-uniform output sampling hybrid systems with actuator faults and bounded disturbances, an iterative learning fault diagnosis algorithm is proposed. First, in order to measure the impact of a fault on the system between consecutive output sampling instants, the actual fault function is transformed into an equivalent fault model by using the integral mean value theorem; the non-uniform sampling hybrid system is then converted to a continuous system with time-varying delay based on the output delay method. Afterwards, an observer-based fault diagnosis filter with a virtual fault is designed to estimate the equivalent fault, and an iterative learning regulation algorithm updates the virtual fault repeatedly so that it approximates the actual equivalent fault after some iterative learning trials; the algorithm can thus detect and estimate system faults adaptively. Simulation results on an electro-mechanical control system model with different types of faults illustrate the feasibility and effectiveness of this algorithm.
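The repeated virtual-fault update can be illustrated with a scalar P-type learning law, v_{k+1} = v_k + γ·e_k; in this toy sketch the residual simply stands in for the observer output error, which is an assumption, not the paper's filter:

```python
def ilc_fault_estimate(actual_fault, gamma=0.5, trials=30):
    """Drive the virtual fault toward the actual equivalent fault by
    repeatedly applying the learning update v <- v + gamma * residual."""
    v = 0.0                              # virtual fault, initially zero
    for _ in range(trials):
        residual = actual_fault - v      # proxy for the observer output error
        v = v + gamma * residual         # P-type iterative learning update
    return v
```

For 0 < γ < 2 the residual shrinks by a factor |1 − γ| per trial, so the estimate converges geometrically to the actual fault.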
Abstract: Objective To develop a deep learning algorithm for the pathological classification of chronic gastritis and assess its performance using whole-slide images (WSIs). Methods We retrospectively collected 1,250 gastric biopsy specimens (1,128 gastritis, 122 normal mucosa) from PLA General Hospital. A deep learning algorithm based on the DeepLab v3 (ResNet-50) architecture was trained and validated using 1,008 and 100 WSIs, respectively. The diagnostic performance of the algorithm was tested on an independent test set of 142 WSIs, with the pathologists' consensus diagnosis as the gold standard. Results Receiver operating characteristic (ROC) curves were generated for chronic superficial gastritis (CSuG), chronic active gastritis (CAcG), and chronic atrophic gastritis (CAtG) in the test set. The areas under the ROC curves (AUCs) of the algorithm for CSuG, CAcG, and CAtG were 0.882, 0.905, and 0.910, respectively. The sensitivity and specificity of the deep learning algorithm for the classification of CSuG, CAcG, and CAtG were 0.790 and 1.000 (accuracy 0.880), 0.985 and 0.829 (accuracy 0.901), and 0.952 and 0.992 (accuracy 0.986), respectively. The overall predicted accuracy for the three types of gastritis was 0.867. By flagging the suspicious regions identified by the algorithm in a WSI, a more transparent and interpretable diagnosis can be generated. Conclusion The deep learning algorithm achieved high accuracy for chronic gastritis classification using WSIs. By pre-highlighting the different gastritis regions, it might be used as an auxiliary diagnostic tool to improve the working efficiency of pathologists.
Funding: Project 70671039 supported by the National Natural Science Foundation of China.
Abstract: In order to resolve the coordination and optimization of power network planning effectively, and on the basis of introducing the concept of a power intelligence center (PIC), the key factors that impact the generation sector, transmission sector, and dispatching center in a PIC (power flow, line investment, and load) were analyzed, and a multi-objective coordination optimal model for a new power intelligence center (NPIC) was established. To ensure the reliability and coordination of the power grid and reduce investment cost, two aspects were optimized. An evolutionary algorithm was introduced to solve the optimal power flow problem, with an improved fitness function to ensure the minimum cost of power generation, and a gray particle swarm optimization (GPSO) algorithm was used to forecast load accurately, ensuring high network reliability. On this basis, a multi-objective coordination optimal model, more practical and in line with the needs of the electricity market, was proposed and effectively solved through an improved particle swarm optimization algorithm. Optimization of the IEEE30 node system shows that the evolutionary algorithm can effectively solve the optimal power flow problem. The average load forecast of GPSO is 26.97 MW, an error of only 0.34 MW compared with the actual load, indicating high forecasting accuracy. The multi-objective coordination optimal model for the NPIC can effectively handle the coordination and optimization problem of power network planning.
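For reference, a minimal global-best particle swarm optimizer, the mechanism underlying GPSO (without the gray-model forecasting component, which is specific to the paper), might look like:

```python
import numpy as np

def pso(fitness, dim, n=20, iters=200, seed=0):
    """Global-best PSO minimizing `fitness` over [-5, 5]^dim (toy bounds)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # particle positions
    v = np.zeros((n, dim))                    # particle velocities
    pbest = x.copy()                          # personal best positions
    pbest_f = np.apply_along_axis(fitness, 1, x)
    for _ in range(iters):
        gbest = pbest[pbest_f.argmin()]       # swarm's best-known position
        r1, r2 = rng.random((2, n, dim))
        # Inertia + cognitive pull toward pbest + social pull toward gbest.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(fitness, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
    return pbest[pbest_f.argmin()], pbest_f.min()
```

In the load-forecasting setting, the fitness would instead measure the forecast error of a gray model whose parameters the swarm is tuning.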
Funding: supported by the Taif University Researchers Supporting Program (Project Number: TURSP-2020/195), Taif University, Saudi Arabia; the Deanship of Scientific Research at King Khalid University under Grant Number RGP 2/209/42; and the Princess Nourah bint Abdulrahman University Researchers Supporting Project (PNURSP2022R234), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Human fall detection (FD) plays an important part in creating sensor-based alarm systems, enabling physical therapists to minimize the effect of fall events and save human lives. Elderly people generally suffer from several diseases, and a fall can occur at any time. In this view, this paper presents an Improved Archimedes Optimization Algorithm with Deep Learning Empowered Fall Detection (IAOA-DLFD) model to identify fall/non-fall events. The proposed IAOA-DLFD technique comprises different levels of pre-processing to improve input image quality. Besides, an IAOA with a Capsule Network (CapsNet)-based feature extractor is derived to produce an optimal set of feature vectors; the IAOA significantly boosts overall FD performance through the optimal choice of CapsNet hyperparameters. Lastly, a radial basis function (RBF) network determines the proper class labels of the test images. To showcase the enhanced performance of the IAOA-DLFD technique, a wide range of experiments was executed, and the outcomes confirmed the superior detection performance of the IAOA-DLFD approach over recent methods, with an accuracy of 0.997.
Abstract: Aiming at the heavy computation and lag of fuzzy neural network controllers, a T-S norm fuzzy neural network control based on a hybrid learning algorithm was proposed. An immune genetic algorithm (IGA) was used to optimize the parameters of the membership functions (MFs) offline, and a neural network was used to adjust the MF parameters online to improve the controller's response. Moreover, the latter network also adjusts the fuzzy rules automatically, reducing the computation of the neural network and improving the robustness and adaptability of the controller, so that the controller works well even when the underwater vehicle operates in a hostile ocean environment. Finally, experiments were carried out on the "XX" mini autonomous underwater vehicle (mini-AUV) in a tank. The results showed that this controller greatly improves response and overshoot compared with traditional controllers.
Abstract: A novel algorithm is presented for supervised inductive learning by integrating a genetic algorithm with a bottom-up induction process. The hybrid learning algorithm has been implemented in C on a personal computer (386DX/40). The performance of the algorithm has been evaluated by applying it to the 11-multiplexer problem, and the results show that its accuracy is higher than that of the other approaches [5, 12, 13].
Funding: supported by the Xiamen University Malaysia Research Fund (XMUMRF), Grant No. XMUMRF/2019-C3/IECE/0007 (received by R.M.Mehmood). The authors are grateful to the Taif University Researchers Supporting Project Number (TURSP-2020/79), Taif University, Taif, Saudi Arabia, for funding this work (received by M.Shorfuzzaman).
Abstract: Solar energy is a widely used type of renewable energy, harvested with photovoltaic (PV) arrays. The major goal in harvesting the maximum possible power is to operate the system at its maximum power point (MPP). If the irradiation conditions are uniform, the P-V curve of the PV array has only one peak, its MPP. But when the irradiation conditions are non-uniform, the P-V curve has multiple peaks, each representing an MPP for a specific irradiation condition; the highest of all the peaks is called the Global Maximum Power Point (GMPP). Under uniform irradiation there is little or no partial shading, but changing irradiance causes a shading effect called partial shading. Many conventional and soft computing techniques have been used to harvest solar energy. These techniques perform well under uniform and weak shading conditions but fail when shading conditions are strong. In this paper, a new method is proposed which uses a machine learning-based technique called Opposition-Based Learning (OBL) to deal with partial shading conditions. Simulation studies on different cases of partial shading have proven this technique effective in attaining the MPP.
Abstract: The Industrial Internet of Things (IIoT) has brought numerous benefits, such as improved efficiency, smart analytics, and increased automation. However, it also exposes connected devices, users, applications, and the data they generate to cyber security threats that need to be addressed. This work investigates hybrid cyber threats (HCTs), which now operate on an entirely new level with the increasingly adopted IIoT, focusing on emerging methods to model, detect, and defend against hybrid cyber attacks using machine learning (ML) techniques. Specifically, a novel ML-based HCT modelling and analysis framework is proposed, in which L1 regularisation and Random Forest are used to cluster features and analyse the importance and impact of each feature in both individual threats and HCTs. A grey relation analysis-based model is employed to construct the correlation between IIoT components and different threats.
Funding: funded by the Researchers Supporting Program at King Saud University (RSPD2024R809).
Abstract: Hybridizing metaheuristic algorithms involves synergistically combining different optimization techniques to address complex and challenging optimization problems. This approach aims to leverage the strengths of multiple algorithms, enhancing solution quality, convergence speed, and robustness, thereby offering a more versatile and efficient means of solving intricate real-world optimization tasks. In this paper, we introduce a hybrid algorithm that amalgamates three distinct metaheuristics: the Beluga Whale Optimization (BWO), the Honey Badger Algorithm (HBA), and the Jellyfish Search (JS) optimizer; the proposed hybrid is referred to as BHJO. Through this fusion, the BHJO algorithm aims to leverage the strengths of each optimizer. Before this hybridization, we thoroughly examined the exploration and exploitation capabilities of the BWO, HBA, and JS metaheuristics, as well as their ability to strike a balance between exploration and exploitation. This analysis allowed us to identify the pros and cons of each algorithm, enabling us to combine them in a novel hybrid approach that capitalizes on their respective strengths for enhanced optimization performance. In addition, the BHJO algorithm incorporates Opposition-Based Learning (OBL), leveraging its diverse exploration, accelerated convergence, and improved solution quality to enhance the overall performance and effectiveness of the hybrid algorithm. Moreover, the performance of the BHJO algorithm was evaluated across a range of both unconstrained and constrained optimization problems, providing a comprehensive assessment of its efficacy and applicability in diverse problem domains. The BHJO algorithm was likewise subjected to a comparative analysis with several renowned algorithms, with mean and standard deviation values used as evaluation metrics; this rigorous comparison assessed the performance of the BHJO algorithm against its counterparts, shedding light on its effectiveness and reliability in solving optimization problems. Finally, the obtained numerical statistics underwent analysis using the Friedman test followed by Dunn's post hoc test. The resulting values revealed the BHJO algorithm's competitiveness in tackling intricate optimization problems, affirming its capability to deliver favorable outcomes in challenging scenarios.
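The Friedman statistic behind such comparisons ranks the algorithms within each problem and aggregates the rank sums; a tie-free NumPy sketch over a problems-by-algorithms error matrix (lower is better):

```python
import numpy as np

def friedman_statistic(results):
    """Friedman chi-square for an (n problems x k algorithms) error matrix.
    Ranks are 1 (best, lowest error) .. k per problem; ties are not handled
    in this sketch (they would require average ranks)."""
    n, k = results.shape
    ranks = results.argsort(axis=1).argsort(axis=1) + 1  # per-problem ranks
    R = ranks.sum(axis=0)                                # rank sum per algorithm
    return 12.0 / (n * k * (k + 1)) * (R ** 2).sum() - 3.0 * n * (k + 1)
```

A large statistic rejects the hypothesis that all algorithms perform alike, after which a post hoc test such as Dunn's locates the pairwise differences.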