This paper presents an asynchronous output-feedback control strategy for semi-Markovian systems via a sliding-mode-based learning technique. In contrast to most results in the literature, which require exact prior knowledge of the system state and mode information, an asynchronous output-feedback sliding surface is adopted to handle the incompletely available state and the non-synchronization phenomenon. The holonomic dynamics of the sliding mode are characterized by a descriptor system in which the switching surface is regarded as the fast subsystem and the system dynamics are viewed as the slow subsystem. Based upon the co-occurrence of the two subsystems, a sufficient stochastic admissibility criterion for the holonomic dynamics is derived by utilizing the characteristics of cumulative distribution functions. Furthermore, a recursive learning controller is formulated to guarantee the reachability of the sliding manifold and to reduce the chattering caused by the asynchronous switching and sliding motion. Finally, the proposed theoretical method is substantiated through two numerical simulations, on a practical continuous stirred tank reactor and an F-404 aircraft engine model, respectively.
BACKGROUND: It has been reported that deep learning-based reconstruction (DLR) can reduce image noise and artifacts, thereby improving the signal-to-noise ratio and image sharpness. However, no previous studies have evaluated the efficacy of DLR in improving image quality in reduced-field-of-view (reduced-FOV) diffusion-weighted imaging (DWI) [field-of-view optimized and constrained undistorted single-shot (FOCUS)] of the pancreas. We hypothesized that a combination of these techniques would improve DWI image quality without prolonging the scan time but would influence the apparent diffusion coefficient (ADC) calculation.
AIM: To evaluate the efficacy of DLR for image quality improvement of FOCUS of the pancreas.
METHODS: This retrospective study evaluated 37 patients with pancreatic cystic lesions who underwent magnetic resonance imaging between August 2021 and October 2021. We evaluated three types of FOCUS examinations: FOCUS with DLR (FOCUS-DLR+), FOCUS without DLR (FOCUS-DLR−), and conventional FOCUS (FOCUS-conv). The three types of FOCUS and their ADC maps were compared qualitatively and quantitatively.
RESULTS: FOCUS-DLR+ (3.62, average score of two radiologists) showed significantly better qualitative scores for image noise than FOCUS-DLR− (2.62) and FOCUS-conv (2.88) (P < 0.05). Furthermore, FOCUS-DLR+ showed the highest contrast ratio (CR) for b-values of 0 and 600 s/mm^2 (0.72 ± 0.08 and 0.68 ± 0.08), and FOCUS-DLR− showed the highest CR between cystic lesions and the pancreatic parenchyma for b-values of 0 and 600 s/mm^2 (0.62 ± 0.21 and 0.62 ± 0.21) (P < 0.05), respectively. FOCUS-DLR+ provided significantly higher ADCs of the pancreas and lesion (1.44 ± 0.24 and 3.00 ± 0.66) compared to FOCUS-DLR− (1.39 ± 0.22 and 2.86 ± 0.61), and significantly lower ADCs compared to FOCUS-conv (1.84 ± 0.45 and 3.32 ± 0.70) (P < 0.05), respectively.
CONCLUSION: This study evaluated the efficacy of DLR for image quality improvement in reduced-FOV DWI of the pancreas. DLR can significantly denoise images without prolonging the scan time or decreasing the spatial resolution. The denoising level of DWI can be controlled to make the images appear more natural to the human eye. However, this study revealed that DLR did not ameliorate pancreatic distortion. Additionally, physicians should pay attention to the interpretation of ADCs after DLR application because ADCs are significantly changed by DLR.
In the medical profession, recent technological advancements play an essential role in the early detection and categorization of many diseases that cause mortality. Techniques for detecting illness in magnetic resonance images are advancing daily. Automatic (computerized) illness detection in medical imaging has become an emergent area in several medical diagnostic applications. Various diseases that cause death need to be identified through such techniques and technologies to reduce the mortality rate. The brain tumor is one of the most common causes of death. Researchers have already proposed various models for the classification and detection of tumors, each with its strengths and weaknesses, but there is still a need to improve the classification process with better efficiency. In this study, we give an in-depth analysis of six distinct machine learning (ML) algorithms, including Random Forest (RF), Naïve Bayes (NB), Neural Networks (NN), CN2 Rule Induction (CN2), Support Vector Machine (SVM), and Decision Tree (Tree), to address this gap in improving accuracy. On a Kaggle dataset, these strategies are tested using classification accuracy, the area under the Receiver Operating Characteristic (ROC) curve, precision, recall, and F1 score (F1). The training and testing process is strengthened by a 10-fold cross-validation technique. The results show that SVM outperforms the other algorithms, with 95.3% accuracy.
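The evaluation protocol named above (10-fold cross-validation with accuracy scoring) can be sketched in plain Python. The fold splitter, the toy 1-nearest-neighbour classifier, and the synthetic two-cluster data are illustrative stand-ins, not the study's MRI pipeline:

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Shuffle indices 0..n-1 and deal them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(X, y, fit, predict, k=10):
    """Mean accuracy over k held-out folds, as in the study's protocol."""
    folds = k_fold_indices(len(X), k)
    scores = []
    for test_idx in folds:
        held_out = set(test_idx)
        train = [j for j in range(len(X)) if j not in held_out]
        model = fit([X[j] for j in train], [y[j] for j in train])
        hits = sum(predict(model, X[j]) == y[j] for j in test_idx)
        scores.append(hits / len(test_idx))
    return sum(scores) / len(scores)

# A deliberately tiny 1-nearest-neighbour "classifier" to exercise the loop.
def fit_1nn(X, y):
    return list(zip(X, y))

def predict_1nn(model, x):
    return min(model, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[1]
```

Any of the six classifiers compared in the study (SVM, RF, NB, ...) would plug into the same loop in place of `fit_1nn`/`predict_1nn`.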
The vehicle routing problem (VRP) is a typical discrete combinatorial optimization problem, and many models and algorithms have been proposed to solve the VRP and its variants. Although existing approaches have contributed significantly to the development of this field, they either are limited in problem size or need manual intervention in choosing parameters. To overcome these difficulties, many studies have considered learning-based optimization (LBO) algorithms for the VRP. This paper reviews recent advances in this field and divides relevant approaches into end-to-end approaches and step-by-step approaches. We performed a statistical analysis of the reviewed articles from various aspects and designed three experiments to evaluate the performance of four representative LBO algorithms. Finally, we summarize the types of problems to which different LBO algorithms are applicable and suggest directions in which researchers can improve them.
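For contrast with the learned policies such a review surveys, the classical nearest-neighbour construction heuristic, one of the hand-crafted baselines LBO methods are commonly compared against, fits in a few lines. The distance matrix in the test is a made-up example:

```python
def nearest_neighbor_route(dist, start=0):
    """Greedy route construction: from the current stop, always visit the
    closest unvisited customer, then return to the depot."""
    n = len(dist)
    route, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        cur = route[-1]
        nxt = min(unvisited, key=lambda j: dist[cur][j])
        route.append(nxt)
        unvisited.remove(nxt)
    route.append(start)   # close the tour at the depot
    return route

def route_length(dist, route):
    """Total travel cost of a closed route."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))
```

End-to-end LBO approaches replace the greedy `min` choice with a learned policy that outputs the next customer to visit.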
It is widely accepted that Channel State Information (CSI) plays an important role in synergetic transmission and interference management. However, the pilot overhead needed to obtain CSI with sufficient precision is a significant issue for wireless communication networks with massive antennas and ultra-dense cells. This paper proposes a learning-based channel model, which can estimate, refine, and manage CSI for a synergetic transmission system. It decomposes the channel impulse response into multiple paths and uses a learning-based algorithm to estimate the paths' parameters without notable degradation caused by sparse pilots. Both indoor and outdoor measurements are conducted to preliminarily verify the feasibility of the proposed channel model.
Understanding the content of source code and its regular expressions is very difficult when they are written in an unfamiliar language. Pseudo-code explains and describes the content of the code without using the syntax or technologies of a programming language. However, writing pseudo-code for each code instruction is laborious. Recently, neural machine translation has been used to generate textual descriptions for source code. In this paper, a novel deep learning-based transformer (DLBT) model is proposed for automatic pseudo-code generation from source code. The proposed model uses deep learning based on Neural Machine Translation (NMT) to work as a language translator. The DLBT is based on the transformer, an encoder-decoder structure, and has three major components: tokenizer and embeddings, transformer, and post-processing. Each code line is tokenized into dense vectors. The transformer then captures the relatedness between the source code and the matching pseudo-code without the need for a Recurrent Neural Network (RNN). At the post-processing step, the generated pseudo-code is optimized. The proposed model is assessed using a real Python dataset containing more than 18,800 lines of source code written in Python. The experiments show promising performance compared with other machine translation methods such as RNNs. The proposed DLBT records an accuracy of 47.32 and a BLEU score of 68.49.
As the Internet of Things (IoT) and mobile devices have rapidly proliferated, their computationally intensive applications have developed into complex, concurrent IoT-based workflows involving multiple interdependent tasks. By exploiting its low latency and high bandwidth, mobile edge computing (MEC) has emerged to achieve the high-performance computation offloading of these applications and satisfy the quality-of-service requirements of workflows and devices. In this study, we propose an offloading strategy for IoT-based workflows in a high-performance MEC environment. The proposed task-based offloading strategy is formulated as an optimization problem that accounts for task dependency, communication costs, workflow constraints, device energy consumption, and the heterogeneous characteristics of the edge environment. The optimal placement of workflow tasks is then obtained using a discrete teaching learning-based optimization (DTLBO) metaheuristic. Extensive experimental evaluations demonstrate that the proposed offloading strategy is effective at minimizing the energy consumption of mobile devices and reducing the execution times of workflows compared to offloading strategies using other metaheuristics, including particle swarm optimization and ant colony optimization.
For multiuser multiple-input-multiple-output (MIMO) cognitive radio (CR) networks, a four-stage transmission structure is proposed. In the learning stage, a learning-based algorithm with low overhead and high flexibility is exploited to estimate the channel state information (CSI) between primary (PR) terminals and CR terminals. By using channel training in the second stage of the CR frame, the channels between CR terminals can be obtained. In the third stage, a multi-criteria user selection scheme is proposed to choose the best user set for service. In the data transmission stage, the total capacity maximization problem is solved under the interference constraint of the PR terminals. Finally, simulation results show that the multi-criteria user selection scheme, which can change the weights of the criteria, is more flexible than the other three traditional schemes and achieves a tradeoff between user fairness and system performance.
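The third-stage idea of ranking users by a weighted combination of criteria can be sketched as follows; the criterion names and scores are hypothetical, and the paper's actual scheme may combine its criteria differently:

```python
def select_users(channel_metrics, weights, num_users):
    """Rank CR users by a weighted sum of per-criterion scores and pick the
    best set; changing `weights` trades user fairness against capacity."""
    def score(user):
        return sum(weights[c] * channel_metrics[user][c] for c in weights)
    ranked = sorted(channel_metrics, key=score, reverse=True)
    return ranked[:num_users]
```

The flexibility claimed in the abstract corresponds to re-weighting: the same scoring loop yields a throughput-greedy or a fairness-oriented user set depending on `weights`.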
Purpose – The purpose of this paper is to propose a novel improved teaching and learning-based optimization algorithm (TLBO) to enhance its convergence ability and solution accuracy, making it more suitable for solving large-scale optimization issues.
Design/methodology/approach – Utilizing multiple cooperation mechanisms in the teaching and learning processes, an improved TLBO named CTLBO (collectivism teaching-learning-based optimization) is developed. This algorithm introduces a new preparation phase before the teaching and learning phases and applies multiple teacher–learner cooperation strategies in the teaching and learning processes. Applying a modularization idea, six variants of CTLBO are constructed based on the configuration structure of CTLBO's operators. To identify the best configuration, 30 general benchmark functions are tested. Then, three experiments using CEC2020 (2020 IEEE Congress on Evolutionary Computation) constrained optimization problems are conducted to compare CTLBO with other algorithms. Finally, a large-scale industrial engineering problem is taken as the application case.
Findings – The experiment with the 30 general unconstrained benchmark functions indicates that CTLBO-c is the best of all the variants of CTLBO. The three experiments using the CEC2020-constrained optimization problems show that CTLBO is a powerful algorithm for solving large-scale constrained optimization problems. The industrial engineering application case shows that CTLBO and its variant CTLBO-c can effectively solve a large-scale real problem, while the accuracies of TLBO and the other meta-heuristic algorithms are far lower than those of CTLBO and CTLBO-c, revealing that CTLBO and its variants can far outperform other algorithms. CTLBO is an excellent algorithm for solving large-scale complex optimization issues.
Originality/value – The innovation of this paper lies in the improvement strategies that change the original TLBO, with its two-phase teaching–learning mechanism, into a new algorithm, CTLBO, with a three-phase multiple-cooperation teaching–learning mechanism, a self-learning mechanism in teaching, and a group teaching mechanism. CTLBO has important application value in solving large-scale optimization problems.
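The two core phases that CTLBO extends can be illustrated with a plain-vanilla TLBO on a toy objective. This is the standard textbook algorithm only, minimizing an assumed sphere function, not the CTLBO variant with its preparation phase and cooperation strategies:

```python
import random

def tlbo(f, dim, pop=20, iters=100, lo=-5.0, hi=5.0, seed=1):
    """Basic two-phase TLBO minimizing f over a box; returns the best value."""
    rng = random.Random(seed)
    clamp = lambda v: max(lo, min(hi, v))
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    F = [f(x) for x in X]
    for _ in range(iters):
        # Teacher phase: shift each learner toward the best solution
        # relative to the class mean (Tf is the random teaching factor).
        best = X[F.index(min(F))]
        mean = [sum(col) / pop for col in zip(*X)]
        for i in range(pop):
            Tf = rng.choice([1, 2])
            cand = [clamp(X[i][d] + rng.random() * (best[d] - Tf * mean[d]))
                    for d in range(dim)]
            fc = f(cand)
            if fc < F[i]:                      # greedy acceptance
                X[i], F[i] = cand, fc
        # Learner phase: each learner moves relative to a random peer.
        for i in range(pop):
            j = rng.randrange(pop)
            if j == i:
                continue
            sign = 1.0 if F[i] < F[j] else -1.0
            cand = [clamp(X[i][d] + sign * rng.random() * (X[i][d] - X[j][d]))
                    for d in range(dim)]
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    return min(F)
```

CTLBO's contribution, per the abstract, is what happens around these two phases: a preparation phase and several teacher–learner cooperation strategies.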
This paper reviews recent developments in learning-based adaptive optimal output regulation, which aims to solve the problem of adaptive and optimal asymptotic tracking with disturbance rejection. The proposed framework brings together two separate topics, output regulation and adaptive dynamic programming, that have been under extensive investigation due to their broad applications in modern control engineering. Under this framework, one can solve optimal output regulation problems of linear, partially linear, nonlinear, and multi-agent systems in a data-driven manner. We also review some practical applications based on this framework, such as semi-autonomous vehicles, connected and autonomous vehicles, and nonlinear oscillators.
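As a concrete anchor for the linear-quadratic case underlying this line of work, the discrete-time Riccati recursion below computes the optimal feedback gain that data-driven adaptive dynamic programming methods learn without knowing the model. This is a model-based, scalar illustration with assumed numbers, not the reviewed data-driven algorithms:

```python
def riccati_lqr_scalar(a, b, q, r, iters=200):
    """Fixed-point iteration of the discrete-time Riccati equation for the
    scalar system x[k+1] = a*x[k] + b*u[k] with stage cost q*x^2 + r*u^2.
    Returns the converged value p and the optimal gain k (u = -k*x)."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)   # gain induced by current p
        p = q + a * p * a - a * p * b * k   # Riccati value update
    k = (b * p * a) / (r + b * p * b)
    return p, k
```

ADP-style methods approximate this same fixed point from input-state data, which is what makes the framework applicable when `a` and `b` are unknown.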
A learning-based approach for solving wall shear stresses from Shear-Sensitive Liquid Crystal Coating (SSLCC) color images is presented in this paper. The approach learns and establishes the mapping relationship between the SSLCC color-change responses in different observation directions and the shear stress vectors, and then uses this mapping to solve wall shear stress vectors from SSLCC color images. Experimental results show that the proposed approach can solve wall shear stress vectors using two or more SSLCC images, and even using only one image for a symmetrical flow field. The accuracy of the approach using four or more observations is found to be comparable to that of the traditional multi-view Gauss curve fitting approach; the accuracy is slightly reduced when using two or fewer observations. The computational efficiency is significantly improved compared with the traditional Gauss curve fitting approach, and the wall shear stress vectors can be solved in nearly real time. The learning-based approach has no strict requirements on the illumination and observation directions and is therefore more flexible to use in practical wind tunnel measurements than traditional liquid crystal-based methods.
Quality control is of vital importance in compressing three-dimensional (3D) medical imaging data. Optimal compression parameters need to be determined based on the specific quality requirement. In high efficiency video coding (HEVC), regarded as the state-of-the-art compression tool, the quantization parameter (QP) plays a dominant role in controlling quality. The direct application of a video-based scheme to predicting the ideal parameters for 3D medical image compression cannot guarantee satisfactory results. In this paper we propose a learning-based parameter prediction scheme to achieve efficient quality control. Its kernel is a support vector regression (SVR) based learning model that is capable of predicting the optimal QP from both video-based and structural image features extracted directly from the raw data, avoiding time-consuming processes such as pre-encoding and iteration, which are often needed in existing techniques. Experimental results on several datasets verify that our approach outperforms current video-based quality control methods.
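The prediction step can be pictured as a thin wrapper that maps image features to a QP and clamps it to HEVC's legal range of 0 to 51. The linear `toy_model` below is a hypothetical stand-in for the trained SVR, and the feature names are invented:

```python
def choose_qp(features, regressor, qp_min=0, qp_max=51):
    """Predict a QP from image features and clamp it to HEVC's valid range."""
    qp = round(regressor(features))
    return max(qp_min, min(qp_max, qp))

# Hypothetical stand-in for the trained SVR: noisier, less structured
# slices get a higher (coarser) QP.
toy_model = lambda f: 20 + 10 * f["noise"] - 5 * f["structure"]
```

The point of the scheme is that `regressor` is evaluated once per volume, so no pre-encoding or iterative re-encoding is needed to hit the quality target.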
Since the Beijing 2022 Winter Olympics was the first Winter Olympics in history held in a continental winter monsoon climate across complex terrain, there was a deficiency of relevant research, operational techniques, and experience, which made providing meteorological services for this event particularly challenging. The China Meteorological Administration (CMA) Earth System Modeling and Prediction Centre achieved breakthroughs in research on short- and medium-term deterministic and ensemble numerical predictions. Several key technologies crucial for precise winter weather services during the Winter Olympics were developed, and a comprehensive framework, known as the Operational System for High-Precision Weather Forecasting for the Winter Olympics, was established. Some of these advancements represent the highest level of capabilities currently available in China. The meteorological service provided to the Beijing 2022 Games also exceeded that of previous Winter Olympic Games in both variety and quality. This included achievements such as the "100-meter level, minute level" downscaled spatiotemporal resolution and forecasts spanning 1 to 15 days. Around 30 new technologies and over 60 kinds of products that align with the requirements of the Winter Olympics Organizing Committee were developed, and many of these techniques have since been integrated into the CMA's operational national forecasting systems. These accomplishments were facilitated by a dedicated weather forecasting and research initiative, in conjunction with the preexisting real-time operational forecasting systems of the CMA. This program represents one of the five subprograms of the WMO's high-impact weather forecasting demonstration project (SMART2022), and continues to play an important role in the WMO's Regional Association (RA) II Research Development Project (Hangzhou RDP). Therefore, the research accomplishments and meteorological service experiences from this program will be carried forward into forthcoming high-impact weather forecasting activities. This article provides an overview and assessment of this program and the operational national forecasting systems.
With the development of underwater sonar detection technology, the simultaneous localization and mapping (SLAM) approach has attracted much attention in the underwater navigation field in recent years. However, the weak detection ability of a single vehicle limits SLAM performance in wide areas, so cooperative SLAM using multiple vehicles has become an important research direction. The key factor in cooperative SLAM is timely and efficient sonar image transmission among underwater vehicles. However, the limited bandwidth of underwater acoustic channels conflicts with the large volume of sonar image data, so it is essential to compress the images before transmission. Recently, deep neural networks have shown great value in image compression by virtue of their powerful learning ability, but existing neural-network-based sonar image compression methods usually focus on pixel-level information and neglect semantic-level information. In this paper, we propose a novel underwater acoustic transmission scheme called UAT-SSIC, which includes a semantic segmentation-based sonar image compression (SSIC) framework and a joint source-channel codec, to improve the accuracy of the semantic information of the reconstructed sonar image at the receiver. The SSIC framework consists of an auto-encoder-based sonar image compression network whose quality is measured by a semantic segmentation network's residual. Considering that sonar images have blurred target edges, the semantic segmentation network uses a special dilated convolution neural network (DiCNN) to enhance segmentation accuracy by expanding the range of receptive fields. A joint source-channel codec with unequal error protection is also proposed, which adjusts the power level of the transmitted data to deal with sonar image transmission errors caused by the harsh underwater acoustic channel. Experiment results demonstrate that our method preserves more semantic information, with advantages over existing methods at the same compression ratio. It also improves the error tolerance and packet loss resistance of transmission.
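The receptive-field expansion that motivates the DiCNN can be seen in a one-dimensional dilated convolution; this minimal sketch illustrates the mechanism only and is unrelated to the paper's actual network:

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """'Valid' 1-D convolution whose taps are spaced `dilation` samples
    apart; a larger dilation widens the receptive field without adding
    any kernel weights."""
    span = (len(kernel) - 1) * dilation
    return [sum(w * signal[i + j * dilation] for j, w in enumerate(kernel))
            for i in range(len(signal) - span)]
```

With the same two-tap kernel, `dilation=2` makes each output depend on samples two positions apart, which is why stacking dilated layers covers blurred, spread-out edges cheaply.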
Arabic is one of the most widely spoken languages across the globe. However, there are few studies concerning Sentiment Analysis (SA) in Arabic. In recent years, the sentiments and emotions expressed in tweets have received significant interest. The substantial role played by the Arab region in international politics and the global economy has urged the need to examine sentiments and emotions in the Arabic language. Two common approaches are available for emotion classification problems: machine learning and lexicon-based approaches. With this motivation, the current research article develops a Teaching and Learning Optimization with Machine Learning Based Emotion Recognition and Classification (TLBOML-ERC) model for sentiment analysis of tweets made in the Arabic language. The presented TLBOML-ERC model focuses on recognising the emotions and sentiments expressed in Arabic tweets. To attain this, the proposed TLBOML-ERC model initially carries out data pre-processing and a Continuous Bag Of Words (CBOW)-based word embedding process. In addition, a Denoising Autoencoder (DAE) model is exploited to categorise the different emotions expressed in Arabic tweets. To improve the efficacy of the DAE model, the Teaching and Learning-based Optimization (TLBO) algorithm is utilized to optimize its parameters. The proposed TLBOML-ERC method was experimentally validated with the help of an Arabic tweets dataset. The obtained results show the promising performance of the proposed TLBOML-ERC model on Arabic emotion classification.
Abnormally high blood pressure, or hypertension, is still the leading risk factor for death and disability worldwide. This paper presents a new intelligent networked control of a medical drug infusion system to regulate the mean arterial blood pressure for hypertensive patients with different health status conditions. The infusion of vasoactive drugs to patients is subject to various issues, such as variation of sensitivity and noise, which require effective and powerful systems to ensure robustness and good performance. The developed intelligent networked system is composed of a hybrid control scheme of interval type-2 fuzzy (IT2F) logic and the teaching-learning-based optimization (TLBO) algorithm. This networked IT2F control is capable of successfully managing the patient's uncertain sensitivity to anti-hypertensive drugs. To avoid the manual selection of control parameter values, the TLBO algorithm is used to automatically find the best parameter values of the networked IT2F controller. The simulation results showed that the optimized networked IT2F controller achieved good performance under external disturbances. A comparative study has also been conducted to emphasize the outperformance of the developed controller against traditional PID and type-1 fuzzy controllers. Moreover, the comparative evaluation demonstrated that the performance of the developed networked IT2F controller is superior to other control strategies in previous studies in handling unknown patients' sensitivity to infused vasoactive drugs in a noisy environment.
The similarity metric in traditional content-based 3D model retrieval mainly borrows the distance metric algorithms used in 2D image retrieval, but this limits the matching breadth. This paper proposes a new retrieval matching method based on case learning to enlarge the retrieval matching scope. In this method, the shortest path from graph theory is used to analyze how the nodes on the path between the query model and a matched model affect their similarity. Then, a label propagation method and a k-nearest-neighbor method based on case learning are studied and used to improve the retrieval efficiency on top of the existing feature extraction.
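The label propagation step can be sketched on a small similarity graph: labels spread from a few labelled models to their neighbours by repeated majority vote. The graph, node names, and labels below are invented for illustration:

```python
from collections import Counter

def propagate_labels(adj, seeds, iters=20):
    """Spread labels from labelled seed nodes across a similarity graph.
    adj: {node: [neighbour, ...]}; seeds: {node: label} held fixed."""
    labels = dict(seeds)
    for _ in range(iters):
        updated = dict(labels)
        for node, neighbours in adj.items():
            if node in seeds:        # seed labels never change
                continue
            votes = Counter(labels[n] for n in neighbours if n in labels)
            if votes:
                updated[node] = votes.most_common(1)[0][0]
        labels = updated             # synchronous update
    return labels
```

In the retrieval setting, the graph edges would come from feature-space nearest neighbours, so unlabelled models inherit the categories of similar labelled cases.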
The COVID-19 pandemic has caused hundreds of thousands of deaths, millions of infections worldwide, and the loss of trillions of dollars for many large economies. It poses a grave threat to the human population, with an excessive number of patients constituting an unprecedented challenge with which health systems have to cope. Researchers from many domains have devised diverse approaches for the timely diagnosis of COVID-19 to facilitate medical responses. In the same vein, a wide variety of research studies have investigated underlying medical conditions for indicators suggesting the severity and mortality of COVID-19, and the role of age groups and gender in the probability of infection. This study aimed to review, analyze, and critically appraise published works that report on various factors to explain their relationship with COVID-19. Such studies span a wide range, including descriptive analyses, ratio analyses, and cohort, prospective, and retrospective studies. Various studies that describe indicators of the probability of infection among the general population, as well as the risk factors associated with severe illness and mortality, are critically analyzed, and these findings are discussed in detail. A comprehensive analysis was conducted on research studies that investigated the perceived differences in the vulnerability of different age groups and genders to severe outcomes of COVID-19. Studies incorporating important demographic, health, and socioeconomic characteristics are highlighted to emphasize their importance. Predominantly, the lack of an appropriate dataset that contains demographic, personal health, and socioeconomic information limits the efficacy and efficiency of the discussed methods. Results are also overstated, owing both to the exclusion of quarantined patients and those with mild symptoms, and to the inclusion of data from hospitals, where the majority of cases are more severely ill.
Intrusion detection system (IDS) techniques are used in cybersecurity to protect and safeguard sensitive assets. The increasing network security risks can be mitigated by implementing effective IDS methods as a defense mechanism. The proposed research presents an IDS model based on the adaptive fuzzy k-nearest neighbor (FKNN) algorithm. In this method, two parameters, i.e., the neighborhood size (k) and the fuzzy strength parameter (m), were characterized by implementing particle swarm optimization (PSO). In addition to being used for FKNN parametric optimization, PSO is also used for selecting the conditional feature subsets for detection. To proficiently regulate the local and global search capability of the PSO approach, two control parameters, the time-varying inertia weight (TVIW) and time-varying acceleration coefficients (TVAC), were applied to the system. In addition, continuous and binary PSO algorithms were both executed on a multi-core platform. The proposed IDS model was compared with other state-of-the-art classifiers. The results of the proposed methodology are superior to the rest of the techniques in terms of classification accuracy, precision, recall, and F-score. The proposed methods gave the highest performance scores compared to the other conventional algorithms in detecting all the attack types in the two datasets. Moreover, the proposed method was able to obtain a large number of true positives and true negatives, with a minimal number of false positives and false negatives.
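The fuzzy kNN membership computation, parameterized by the neighborhood size k and fuzzy strength m that the paper tunes with PSO, can be sketched as follows. The feature vectors and class names in the test are invented toy data, not an IDS dataset:

```python
def fknn_memberships(train, x, k=3, m=2.0):
    """Class memberships of sample x from its k nearest neighbours, each
    weighted by 1 / distance^(2/(m-1)); memberships sum to 1.
    train: list of (feature_vector, label) pairs."""
    def d2(u, v):                       # squared Euclidean distance
        return sum((a - b) ** 2 for a, b in zip(u, v))
    nearest = sorted(train, key=lambda t: d2(t[0], x))[:k]
    weights, total = {}, 0.0
    for vec, label in nearest:
        # d^(2/(m-1)) == (d^2)^(1/(m-1)); small epsilon guards d == 0
        w = 1.0 / (d2(vec, x) ** (1.0 / (m - 1.0)) + 1e-12)
        weights[label] = weights.get(label, 0.0) + w
        total += w
    return {label: w / total for label, w in weights.items()}
```

The predicted class is the one with the largest membership; unlike crisp kNN, the membership values also convey how confident the decision is, which is what m modulates.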
In recent years, spam has become a major problem of Internet and electronic communication, and many techniques have been developed to fight it. In this paper, an overview of existing e-mail spam filtering methods is given. The classification, evaluation, and comparison of traditional and learning-based methods are provided. Several personal anti-spam products are tested and compared. A proposal for a new approach to spam filtering techniques is also considered.
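A minimal example of the learning-based family such an overview covers is a Laplace-smoothed naive Bayes filter; the training messages below are invented toy data:

```python
import collections
import math

def train_nb(docs):
    """docs: list of (token_list, label). Returns counts for a
    Laplace-smoothed multinomial naive Bayes classifier."""
    word_counts = collections.defaultdict(collections.Counter)
    class_counts = collections.Counter()
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def classify_nb(model, tokens):
    """Pick the class with the highest log posterior (add-one smoothing)."""
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)       # prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            lp += math.log((word_counts[label][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Traditional methods in the overview (blacklists, hand-written rules) need no training step, whereas this filter learns its word statistics directly from labelled mail.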
Funding: This work was supported in part by the National Science Fund for Excellent Young Scholars of China (62222317), the National Science Foundation of China (62303492), the Major Science and Technology Projects in Hunan Province (2021GK1030), the Science and Technology Innovation Program of Hunan Province (2022WZ1001), the Key Research and Development Program of Hunan Province (2023GK2023), and the Fundamental Research Funds for the Central Universities of Central South University (2024ZZTS0116).
Abstract: This paper presents an asynchronous output-feedback control strategy for semi-Markovian systems via a sliding-mode-based learning technique. Compared with most literature results that require exact prior knowledge of the system state and mode information, an asynchronous output-feedback sliding surface is adopted to handle the incompletely available state and the non-synchronization phenomenon. The holonomic dynamics of the sliding mode are characterized by a descriptor system in which the switching surface is regarded as the fast subsystem and the system dynamics are viewed as the slow subsystem. Based upon the co-occurrence of the two subsystems, a sufficient stochastic admissibility criterion for the holonomic dynamics is derived by utilizing the characteristics of cumulative distribution functions. Furthermore, a recursive learning controller is formulated to guarantee the reachability of the sliding manifold and to reduce the chattering caused by the asynchronous switching and the sliding motion. Finally, the proposed theoretical method is substantiated through two numerical simulations with a practical continuous stirred tank reactor and an F-404 aircraft engine model, respectively.
Abstract: BACKGROUND: It has been reported that deep learning-based reconstruction (DLR) can reduce image noise and artifacts, thereby improving the signal-to-noise ratio and image sharpness. However, no previous studies have evaluated the efficacy of DLR in improving image quality in reduced-field-of-view (reduced-FOV) diffusion-weighted imaging (DWI) [field-of-view optimized and constrained undistorted single-shot (FOCUS)] of the pancreas. We hypothesized that a combination of these techniques would improve DWI image quality without prolonging the scan time but would influence the apparent diffusion coefficient calculation. AIM: To evaluate the efficacy of DLR for image quality improvement of FOCUS of the pancreas. METHODS: This retrospective study evaluated 37 patients with pancreatic cystic lesions who underwent magnetic resonance imaging between August 2021 and October 2021. We evaluated three types of FOCUS examinations: FOCUS with DLR (FOCUS-DLR+), FOCUS without DLR (FOCUS-DLR−), and conventional FOCUS (FOCUS-conv). The three types of FOCUS and their apparent diffusion coefficient (ADC) maps were compared qualitatively and quantitatively. RESULTS: FOCUS-DLR+ (3.62, average score of two radiologists) showed significantly better qualitative scores for image noise than FOCUS-DLR− (2.62) and FOCUS-conv (2.88) (P<0.05). Furthermore, FOCUS-DLR+ showed the highest contrast ratio (CR) for the b-values of 0 and 600 s/mm² (0.72±0.08 and 0.68±0.08), and FOCUS-DLR− showed the highest CR between cystic lesions and the pancreatic parenchyma for the b-values of 0 and 600 s/mm² (0.62±0.21 and 0.62±0.21) (P<0.05), respectively. FOCUS-DLR+ provided significantly higher ADCs of the pancreas and lesion (1.44±0.24 and 3.00±0.66) compared to FOCUS-DLR− (1.39±0.22 and 2.86±0.61) and significantly lower ADCs compared to FOCUS-conv (1.84±0.45 and 3.32±0.70) (P<0.05), respectively. CONCLUSION: This study evaluated the efficacy of DLR for image quality improvement in reduced-FOV DWI of the pancreas. DLR can significantly denoise images without prolonging the scan time or decreasing the spatial resolution. The denoising level of DWI can be controlled to make the images appear more natural to the human eye. However, this study revealed that DLR did not ameliorate pancreatic distortion. Additionally, physicians should pay attention to the interpretation of ADCs after DLR application because ADCs are significantly changed by DLR.
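The ADC values quoted above come from the standard monoexponential DWI signal model, S_b = S_0·exp(−b·ADC). As a minimal illustration (a two-point calculation with hypothetical signal intensities, not the scanner's actual fitting pipeline):

```python
import math

def adc(s0, sb, b):
    """Two-point apparent diffusion coefficient from the monoexponential
    DWI model S_b = S_0 * exp(-b * ADC); b in s/mm^2, result in mm^2/s."""
    return math.log(s0 / sb) / b

# Hypothetical intensities: a signal that halves between b = 0 and
# b = 600 s/mm^2 gives ADC = ln(2)/600 mm^2/s, i.e. about 1.16 in the
# x10^-3 mm^2/s units used in the abstract.
value = adc(100.0, 50.0, 600.0)
```

This also shows why DLR-induced changes in signal intensity propagate directly into the reported ADCs.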
Funding: supported by the Deputy for Research and Innovation, Ministry of Education, Kingdom of Saudi Arabia, through a grant (NU/IFC/ENT/01/014) under the Institutional Funding Committee at Najran University, Kingdom of Saudi Arabia.
Abstract: In the medical profession, recent technological advancements play an essential role in the early detection and categorization of many diseases that cause mortality. Techniques for detecting illness in magnetic resonance images, traditionally inspected by humans, are advancing on a daily basis, and automatic (computerized) illness detection in medical imaging has become an emergent area in several medical diagnostic applications. Various diseases that cause death need to be identified through such techniques and technologies to reduce the mortality ratio. The brain tumor is one of the most common causes of death. Researchers have already proposed various models for the classification and detection of tumors, each with its strengths and weaknesses, but there is still a need to improve the classification process with better efficiency. In this study, we give an in-depth analysis of six distinct machine learning (ML) algorithms, including Random Forest (RF), Naïve Bayes (NB), Neural Networks (NN), CN2 Rule Induction (CN2), Support Vector Machine (SVM), and Decision Tree (Tree), to address this gap in improving accuracy. On the Kaggle dataset, these strategies are tested using classification accuracy, the area under the Receiver Operating Characteristic (ROC) curve, precision, recall, and the F1 score (F1). The training and testing process is strengthened by using a 10-fold cross-validation technique. The results show that SVM outperforms the other algorithms, with 95.3% accuracy.
Abstract: The vehicle routing problem (VRP) is a typical discrete combinatorial optimization problem, and many models and algorithms have been proposed to solve the VRP and its variants. Although existing approaches have contributed significantly to the development of this field, these approaches are either limited in problem size or need manual intervention in choosing parameters. To overcome these difficulties, many studies have considered learning-based optimization (LBO) algorithms for the VRP. This paper reviews recent advances in this field and divides relevant approaches into end-to-end approaches and step-by-step approaches. We performed a statistical analysis of the reviewed articles from various aspects and designed three experiments to evaluate the performance of four representative LBO algorithms. Finally, we summarize the types of problems to which different LBO algorithms are applicable and suggest directions in which researchers can improve LBO algorithms.
Funding: supported by the National Basic Research Program of China (No. 2012CB316002), China's 863 Project (No. 2014AA01A703), the National Major Project (No. 2014ZX03003002-002), the Program for New Century Excellent Talents in University (NCET-13-0321), and the Tsinghua University Initiative Scientific Research Program (2011THZ02-2).
Abstract: It is widely accepted that Channel State Information (CSI) plays an important role in synergetic transmission and interference management. However, the pilot overhead needed to obtain CSI with sufficient precision is a significant issue for wireless communication networks with massive antennas and ultra-dense cells. This paper proposes a learning-based channel model that can estimate, refine, and manage CSI for a synergetic transmission system. It decomposes the channel impulse response into multiple paths and uses a learning-based algorithm to estimate the paths' parameters without notable degradation caused by sparse pilots. Both indoor and outdoor measurements were conducted to preliminarily verify the feasibility of the proposed channel model.
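The learning-based estimator itself is not described in the abstract; as a hedged sketch of the underlying idea of decomposing an impulse response into a few (delay, gain) paths, here is a naive greedy tap picker over a hypothetical channel (the paper's algorithm is more sophisticated and pilot-driven):

```python
def dominant_paths(h, k):
    """Greedily pick the k strongest taps of a discrete impulse response h
    as (delay_index, complex_gain) path estimates."""
    taps = sorted(range(len(h)), key=lambda n: -abs(h[n]))
    return [(n, h[n]) for n in sorted(taps[:k])]

def reconstruct(paths, length):
    """Rebuild a sparse impulse response from the path estimates."""
    h = [0j] * length
    for delay, gain in paths:
        h[delay] = gain
    return h

# Hypothetical two-path channel plus a weak spurious tap.
h = [0.9 + 0j, 0.0, 0.05, 0.4 + 0.1j, 0.0]
paths = dominant_paths(h, 2)
sparse = reconstruct(paths, len(h))
```

In the paper's setting the gains and delays would be refined by learning across sparse pilot observations rather than read directly off the response.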
Abstract: Understanding the content of source code and its regular expressions is very difficult when they are written in an unfamiliar language. Pseudo-code explains and describes the content of the code without using the syntax or technicalities of a programming language. However, writing pseudo-code for each code instruction is laborious. Recently, neural machine translation has been used to generate textual descriptions for source code. In this paper, a novel deep learning-based transformer (DLBT) model is proposed for automatic pseudo-code generation from source code. The proposed model uses deep learning based on Neural Machine Translation (NMT) to work as a language translator. The DLBT is based on the transformer, an encoder-decoder structure, and has three major components: tokenizer and embeddings, transformer, and post-processing. Each code line is tokenized into dense vectors. The transformer then captures the relatedness between the source code and the matching pseudo-code without the need for a Recurrent Neural Network (RNN). At the post-processing step, the generated pseudo-code is optimized. The proposed model is assessed using a real Python dataset, which contains more than 18,800 lines of source code written in Python. The experiments show promising performance results compared with other machine translation methods such as RNNs. The proposed DLBT records 47.32 and 68.49 on the accuracy and BLEU performance measures, respectively.
Abstract: As the Internet of Things (IoT) and mobile devices have rapidly proliferated, their computationally intensive applications have developed into complex, concurrent IoT-based workflows involving multiple interdependent tasks. By exploiting its low latency and high bandwidth, mobile edge computing (MEC) has emerged to achieve high-performance computation offloading of these applications to satisfy the quality-of-service requirements of workflows and devices. In this study, we propose an offloading strategy for IoT-based workflows in a high-performance MEC environment. The proposed task-based offloading strategy is formulated as an optimization problem that includes task dependency, communication costs, workflow constraints, device energy consumption, and the heterogeneous characteristics of the edge environment. The optimal placement of workflow tasks is then found using a discrete teaching-learning-based optimization (DTLBO) metaheuristic. Extensive experimental evaluations demonstrate that the proposed offloading strategy is effective at minimizing the energy consumption of mobile devices and reducing the execution times of workflows compared to offloading strategies using other metaheuristics, including particle swarm optimization and ant colony optimization.
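To make the offloading trade-off concrete, here is a toy per-task cost comparison between local execution and edge offloading. All names and numbers are hypothetical; the paper's DTLBO formulation additionally handles task dependencies, workflow constraints, and heterogeneous edge servers:

```python
def offload_costs(task, f_local, f_edge, bandwidth, p_cpu, p_tx):
    """Return (time, device_energy) for local execution and for edge
    offloading of one independent task. cycles: CPU cycles needed,
    bits: input data to transmit, f_*: CPU speeds (cycles/s),
    bandwidth: uplink rate (bit/s), p_cpu/p_tx: device power draws (W)."""
    t_local = task["cycles"] / f_local
    e_local = p_cpu * t_local
    t_edge = task["bits"] / bandwidth + task["cycles"] / f_edge
    e_edge = p_tx * (task["bits"] / bandwidth)  # device only pays for transmission
    return (t_local, e_local), (t_edge, e_edge)

# Hypothetical task: 4 Gcycles of work on 1 MB of input data.
task = {"cycles": 4e9, "bits": 8e6}
local, edge = offload_costs(task, f_local=1e9, f_edge=8e9,
                            bandwidth=2e7, p_cpu=0.9, p_tx=0.3)
```

A metaheuristic such as DTLBO searches over joint placements of many such tasks, where per-task decisions interact through dependencies and shared resources.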
Funding: Supported by the National S&T Major Project of China (2013ZX03003002-003).
Abstract: For multiuser multiple-input-multiple-output (MIMO) cognitive radio (CR) networks, a four-stage transmission structure is proposed. In the learning stage, a learning-based algorithm with low overhead and high flexibility is exploited to estimate the channel state information (CSI) between primary (PR) terminals and CR terminals. By using channel training in the second stage of the CR frame, the channels between CR terminals can be obtained. In the third stage, a multi-criteria user selection scheme is proposed to choose the best user set for service. In the data transmission stage, the total capacity maximization problem is solved under the interference constraint of the PR terminals. Finally, simulation results show that the multi-criteria user selection scheme, which can change the weights of the criteria, is more flexible than the other three traditional schemes and achieves a tradeoff between user fairness and system performance.
Funding: This research is funded by the National Natural Science Foundation of China (#71772191).
Abstract: Purpose – The purpose of this paper is to propose a novel improved teaching-learning-based optimization algorithm (TLBO) to enhance its convergence ability and solution accuracy, making it more suitable for solving large-scale optimization issues. Design/methodology/approach – Utilizing multiple cooperation mechanisms in the teaching and learning processes, an improved TLBO named CTLBO (collectivism teaching-learning-based optimization) is developed. This algorithm introduces a new preparation phase before the teaching and learning phases and applies multiple teacher-learner cooperation strategies in the teaching and learning processes. Applying a modularization idea, six variants of CTLBO are constructed based on the configuration structure of the operators of CTLBO. For identifying the best configuration, 30 general benchmark functions are tested. Then, three experiments using CEC2020 (2020 IEEE Congress on Evolutionary Computation) constrained optimization problems are conducted to compare CTLBO with other algorithms. At last, a large-scale industrial engineering problem is taken as the application case. Findings – The experiment with 30 general unconstrained benchmark functions indicates that CTLBO-c is the best configuration of all the variants of CTLBO. Three experiments using CEC2020-constrained optimization problems show that CTLBO is a powerful algorithm for solving large-scale constrained optimization problems. The application case of an industrial engineering problem shows that CTLBO and its variant CTLBO-c can effectively solve the large-scale real problem, while the accuracies of TLBO and other meta-heuristic algorithms are far lower than those of CTLBO and CTLBO-c, revealing that CTLBO and its variants can far outperform other algorithms. CTLBO is an excellent algorithm for solving large-scale complex optimization issues. Originality/value – The innovation of this paper lies in the improvement strategies that change the original TLBO, with its two-phase teaching-learning mechanism, into a new algorithm, CTLBO, with a three-phase multiple-cooperation teaching-learning mechanism, a self-learning mechanism in teaching, and a group teaching mechanism. CTLBO has important application value in solving large-scale optimization problems.
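For reference, the baseline two-phase TLBO that CTLBO extends can be sketched as follows. This is a minimal implementation of the standard teacher and learner phases (not of CTLBO's three-phase cooperative mechanisms), shown minimizing a sphere function:

```python
import random

def tlbo(f, bounds, pop_size=20, iters=200, seed=1):
    """Minimal standard two-phase TLBO minimizing f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        # Teacher phase: move each learner toward the best solution,
        # away from the population mean, keeping only improvements.
        teacher = min(pop, key=f)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):
            tf = rng.choice([1, 2])  # teaching factor
            cand = clip([x[d] + rng.random() * (teacher[d] - tf * mean[d])
                         for d in range(dim)])
            if f(cand) < f(x):
                pop[i] = cand
        # Learner phase: each learner moves relative to a random peer.
        for i, x in enumerate(pop):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            sign = 1.0 if f(x) < f(pop[j]) else -1.0
            cand = clip([x[d] + sign * rng.random() * (x[d] - pop[j][d])
                         for d in range(dim)])
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = tlbo(sphere, [(-5.0, 5.0)] * 3)
```

CTLBO's preparation phase and teacher-learner cooperation strategies would slot in around these two loops.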
Abstract: This paper reviews recent developments in learning-based adaptive optimal output regulation, which aims to solve the problem of adaptive and optimal asymptotic tracking with disturbance rejection. The proposed framework brings together two separate topics (output regulation and adaptive dynamic programming) that have been under extensive investigation due to their broad applications in modern control engineering. Under this framework, one can solve optimal output regulation problems of linear, partially linear, nonlinear, and multi-agent systems in a data-driven manner. We also review some practical applications based on this framework, such as semi-autonomous vehicles, connected and autonomous vehicles, and nonlinear oscillators.
Funding: co-supported by the National Natural Science Foundation of China (No. 11602107) and the Natural Science Foundation of Jiangsu Province of China (No. BK20150733).
Abstract: A learning-based approach for solving wall shear stresses from Shear-Sensitive Liquid Crystal Coating (SSLCC) color images is presented in this paper. The approach learns and establishes the mapping relationship between the SSLCC color-change responses in different observation directions and the shear stress vectors, and then uses this mapping to solve wall shear stress vectors from SSLCC color images. Experimental results show that the proposed approach can solve wall shear stress vectors using two or more SSLCC images, and even using only one image for a symmetrical flow field. The accuracy of the approach using four or more observations is comparable to that of the traditional multi-view Gauss curve-fitting approach; the accuracy is slightly reduced when using two or fewer observations. The computational efficiency is significantly improved compared with the traditional Gauss curve-fitting approach, and the wall shear stress vectors can be solved in nearly real time. The learning-based approach has no strict requirements on the illumination and observation directions and is therefore more flexible in practical wind tunnel measurements than traditional liquid crystal-based methods.
Funding: the National Natural Science Foundation of China (No. 61890954).
Abstract: Quality control is of vital importance in compressing three-dimensional (3D) medical imaging data. Optimal compression parameters need to be determined based on the specific quality requirement. In high efficiency video coding (HEVC), regarded as the state-of-the-art compression tool, the quantization parameter (QP) plays a dominant role in controlling quality. The direct application of a video-based scheme to predicting the ideal parameters for 3D medical image compression cannot guarantee satisfactory results. In this paper, we propose a learning-based parameter prediction scheme to achieve efficient quality control. Its kernel is a support vector regression (SVR) based learning model that is capable of predicting the optimal QP from both video-based and structural image features extracted directly from raw data, avoiding time-consuming processes such as pre-encoding and iteration, which are often needed in existing techniques. Experimental results on several datasets verify that our approach outperforms current video-based quality control methods.
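The paper's SVR model and feature set are not given in the abstract. As a loosely analogous sketch, a Nadaraya-Watson RBF regressor (a simpler kernel method standing in for SVR) can map a feature vector to a predicted QP; the (feature, optimal-QP) training pairs below are hypothetical:

```python
import math

def kernel_predict(x, samples, gamma=0.5):
    """Nadaraya-Watson RBF regression: a kernel-weighted average of the
    training targets, here standing in for the paper's SVR QP predictor."""
    num = den = 0.0
    for xi, qp in samples:
        w = math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, xi)))
        num += w * qp
        den += w
    return num / den

# Hypothetical (feature, optimal-QP) pairs; features might represent
# spatial complexity and bit-budget style measurements.
train = [([0.2, 0.1], 22.0), ([0.5, 0.4], 30.0), ([0.9, 0.8], 38.0)]
qp = kernel_predict([0.5, 0.4], train)
```

Like the paper's scheme, this predicts a QP directly from features, with no pre-encoding or iterative search.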
Funding: This work was jointly supported by the National Natural Science Foundation of China (Grant Nos. 41975137, 42175012, and 41475097) and the National Key Research and Development Program (Grant No. 2018YFF0300103).
Abstract: Since the Beijing 2022 Winter Olympics was the first Winter Olympics in history held in continental winter monsoon climate conditions across complex terrain, there was a deficiency of relevant research, operational techniques, and experience, which made providing meteorological services for this event particularly challenging. The China Meteorological Administration (CMA) Earth System Modeling and Prediction Centre achieved breakthroughs in research on short- and medium-term deterministic and ensemble numerical predictions. Several key technologies crucial for precise winter weather services during the Winter Olympics were developed, and a comprehensive framework, known as the Operational System for High-Precision Weather Forecasting for the Winter Olympics, was established. Some of these advancements represent the highest level of capabilities currently available in China. The meteorological service provided to the Beijing 2022 Games also exceeded that of previous Winter Olympic Games in both variety and quality. This included achievements such as the "100-meter level, minute level" downscaled spatiotemporal resolution and forecasts spanning 1 to 15 days. Around 30 new technologies and over 60 kinds of products that align with the requirements of the Winter Olympics Organizing Committee were developed, and many of these techniques have since been integrated into the CMA's operational national forecasting systems. These accomplishments were facilitated by a dedicated weather forecasting and research initiative, in conjunction with the preexisting real-time operational forecasting systems of the CMA. This program represents one of the five subprograms of the WMO's high-impact weather forecasting demonstration project (SMART2022) and continues to play an important role in the Regional Association (RA) II Research Development Project (Hangzhou RDP). Therefore, the research accomplishments and meteorological service experiences from this program will be carried forward into forthcoming high-impact weather forecasting activities. This article provides an overview and assessment of this program and the operational national forecasting systems.
Funding: supported in part by the Tianjin Technology Innovation Guidance Special Fund Project under Grant No. 21YDTPJC00850, in part by the National Natural Science Foundation of China under Grant No. 41906161, and in part by the Natural Science Foundation of Tianjin under Grant No. 21JCQNJC00650.
Abstract: With the development of underwater sonar detection technology, the simultaneous localization and mapping (SLAM) approach has attracted much attention in the underwater navigation field in recent years. However, the weak detection ability of a single vehicle limits SLAM performance in wide areas, so cooperative SLAM using multiple vehicles has become an important research direction. The key factor in cooperative SLAM is timely and efficient sonar image transmission among underwater vehicles. However, the limited bandwidth of underwater acoustic channels conflicts with the large amount of sonar image data, so it is essential to compress the images before transmission. Recently, deep neural networks have shown great value in image compression by virtue of their powerful learning ability, but existing neural-network-based sonar image compression methods usually focus on pixel-level information and ignore semantic-level information. In this paper, we propose a novel underwater acoustic transmission scheme called UAT-SSIC, which includes a semantic segmentation-based sonar image compression (SSIC) framework and a joint source-channel codec, to improve the accuracy of the semantic information of the reconstructed sonar image at the receiver. The SSIC framework consists of an auto-encoder-based sonar image compression network whose quality is measured by a semantic segmentation network's residual. Considering that sonar images have blurred target edges, the semantic segmentation network uses a special dilated convolution neural network (DiCNN) to enhance segmentation accuracy by expanding the range of the receptive fields. A joint source-channel codec with unequal error protection is proposed that adjusts the power level of the transmitted data to deal with sonar image transmission errors caused by the harsh underwater acoustic channel. Experimental results demonstrate that our method preserves more semantic information, with advantages over existing methods at the same compression ratio. It also improves the error tolerance and packet-loss resistance of transmission.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R263), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work through Grant Code 22UQU4340237DSR36. The authors are also thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Groups Funding program, grant code NU/RG/SERC/11/7.
Abstract: Arabic is one of the most widely spoken languages across the globe. However, there are fewer studies on Sentiment Analysis (SA) in Arabic. In recent years, the sentiments and emotions expressed in tweets have received significant interest. The substantial role played by the Arab region in international politics and the global economy has urged the need to examine sentiments and emotions in the Arabic language. Two common approaches are available for emotion classification problems: machine learning and lexicon-based approaches. With this motivation, the current research article develops a Teaching and Learning Optimization with Machine Learning Based Emotion Recognition and Classification (TLBOML-ERC) model for sentiment analysis of tweets made in the Arabic language. The presented TLBOML-ERC model focuses on recognizing the emotions and sentiments expressed in Arabic tweets. To attain this, the proposed TLBOML-ERC model initially carries out data pre-processing and a Continuous Bag Of Words (CBOW)-based word embedding process. In addition, a Denoising Autoencoder (DAE) model is exploited to categorize the different emotions expressed in Arabic tweets. To improve the efficacy of the DAE model, the Teaching and Learning-based Optimization (TLBO) algorithm is utilized to optimize its parameters. The proposed TLBOML-ERC method was experimentally validated with the help of an Arabic tweets dataset. The obtained results show the promising performance of the proposed TLBOML-ERC model on Arabic emotion classification.
Abstract: Abnormally high blood pressure, or hypertension, is still the leading risk factor for death and disability worldwide. This paper presents a new intelligent networked control of a medical drug infusion system to regulate the mean arterial blood pressure of hypertensive patients with different health conditions. The infusion of vasoactive drugs to patients involves various issues, such as variations in sensitivity and noise, which require effective and powerful control systems to ensure robustness and good performance. The developed intelligent networked system is composed of a hybrid control scheme of interval type-2 fuzzy (IT2F) logic and the teaching-learning-based optimization (TLBO) algorithm. This networked IT2F control is capable of successfully managing the patient's uncertain sensitivity to anti-hypertensive drugs. To avoid the manual selection of control parameter values, the TLBO algorithm is used to automatically find the best parameter values of the networked IT2F controller. The simulation results showed that the optimized networked IT2F controller achieved good performance under external disturbances. A comparative study has also been conducted to emphasize the outperformance of the developed controller against traditional PID and type-1 fuzzy controllers. Moreover, the comparative evaluation demonstrated that the performance of the developed networked IT2F controller is superior to the control strategies in previous studies in handling unknown patient sensitivity to infused vasoactive drugs in a noisy environment.
Abstract: The similarity metric in traditional content-based 3D model retrieval mainly follows the distance metrics used in 2D image retrieval, but this limits the matching breadth. This paper proposes a new retrieval matching method based on case learning to enlarge the retrieval matching scope. In this method, the shortest path from graph theory is used to analyze how the nodes on the path between the query model and a matched model affect their similarity. Then, a label propagation method and a k-nearest-neighbor method based on case learning are studied and used to improve retrieval efficiency on top of the existing feature extraction.
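A minimal version of the label propagation step can be sketched as follows (the graph, labels, and parameters are illustrative, not from the paper): soft class scores spread along graph edges while labeled case nodes stay clamped.

```python
def label_propagate(adj, labels, iters=50, alpha=0.8):
    """Soft label propagation over a similarity graph: each unlabeled node
    averages its neighbors' class scores every round, damped by alpha,
    while labeled nodes stay clamped to their known class."""
    n = len(adj)
    classes = sorted(set(labels.values()))
    score = [[1.0 if labels.get(i) == c else 0.0 for c in classes]
             for i in range(n)]
    for _ in range(iters):
        new = [row[:] for row in score]
        for i in range(n):
            if i in labels:
                continue  # clamp labeled nodes
            nbrs = [j for j in range(n) if adj[i][j]]
            for k in range(len(classes)):
                new[i][k] = alpha * sum(score[j][k] for j in nbrs) / max(len(nbrs), 1)
        score = new
    return [classes[max(range(len(classes)), key=lambda c: score[i][c])]
            for i in range(n)]

# A hypothetical five-node chain of 3D models:
# node 0 is a known "chair" case, node 4 a known "table" case.
adj = [[0, 1, 0, 0, 0],
       [1, 0, 1, 0, 0],
       [0, 1, 0, 1, 0],
       [0, 0, 1, 0, 1],
       [0, 0, 0, 1, 0]]
pred = label_propagate(adj, {0: "chair", 4: "table"})
```

In retrieval, such propagated labels let a query match models beyond its immediate nearest neighbors in feature space.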
Funding: supported by the Researchers Supporting Project Number (RSP-2020/250), King Saud University, Riyadh, Saudi Arabia.
Abstract: The COVID-19 pandemic has caused hundreds of thousands of deaths, millions of infections worldwide, and the loss of trillions of dollars for many large economies. It poses a grave threat to the human population, with an excessive number of patients constituting an unprecedented challenge with which health systems have to cope. Researchers from many domains have devised diverse approaches for the timely diagnosis of COVID-19 to facilitate medical responses. In the same vein, a wide variety of research studies have investigated underlying medical conditions for indicators suggesting the severity and mortality of COVID-19, as well as the role of age groups and gender in the probability of infection. This study aimed to review, analyze, and critically appraise published works that report on various factors to explain their relationship with COVID-19. Such studies span a wide range, including descriptive analyses, ratio analyses, and cohort, prospective, and retrospective studies. Various studies that describe indicators determining the probability of infection among the general population, as well as the risk factors associated with severe illness and mortality, are critically analyzed, and these findings are discussed in detail. A comprehensive analysis was conducted on research studies that investigated the perceived differences in the vulnerability of different age groups and genders to severe outcomes of COVID-19. Studies incorporating important demographic, health, and socioeconomic characteristics are highlighted to emphasize their importance. Predominantly, the lack of an appropriate dataset that contains demographic, personal health, and socioeconomic information limits the efficacy and efficiency of the discussed methods. Results may be overstated because of both the exclusion of quarantined patients and patients with mild symptoms and the inclusion of data from hospitals where the majority of the cases are potentially severely ill.
Abstract: Intrusion detection system (IDS) techniques are used in cybersecurity to protect and safeguard sensitive assets. The increasing network security risks can be mitigated by implementing effective IDS methods as a defense mechanism. The proposed research presents an IDS model based on the adaptive fuzzy k-nearest neighbor (FKNN) algorithm. In this method, two parameters, i.e., the neighborhood size (k) and the fuzzy strength parameter (m), were tuned by implementing particle swarm optimization (PSO). In addition to being used for FKNN parametric optimization, PSO is also used for selecting the conditional feature subsets for detection. To proficiently regulate the local and global search capabilities of the PSO approach, two control parameters, the time-varying inertia weight (TVIW) and time-varying acceleration coefficients (TVAC), were applied to the system. In addition, continuous and binary PSO algorithms were both executed on a multi-core platform. The proposed IDS model was compared with other state-of-the-art classifiers. The results of the proposed methodology are superior to the rest of the techniques in terms of classification accuracy, precision, recall, and F-score. The results showed that the proposed methods gave the highest performance scores compared to the other conventional algorithms in detecting all the attack types in two datasets. Moreover, the proposed method was able to obtain a large number of true positives and negatives, with a minimal number of false positives and negatives.
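The core FKNN vote can be sketched as follows: a Keller-style fuzzy k-NN on toy traffic records, where k and the fuzzy strength m are exactly the two parameters the paper tunes with PSO (the data and feature choices here are hypothetical):

```python
def fuzzy_knn(x, train, k=3, m=2.0):
    """Fuzzy k-nearest-neighbor vote: class memberships are weighted by
    1/d^(2/(m-1)), so closer neighbors dominate; larger m softens the vote."""
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    nbrs = sorted(train, key=lambda t: dist(x, t[0]))[:k]
    memb = {}
    for xi, yi in nbrs:
        w = 1.0 / max(dist(x, xi), 1e-12) ** (2.0 / (m - 1.0))
        memb[yi] = memb.get(yi, 0.0) + w
    total = sum(memb.values())
    return {c: v / total for c, v in memb.items()}

# Toy "traffic records" with two features, classes normal / attack.
train = [([0.1, 0.2], "normal"), ([0.2, 0.1], "normal"),
         ([0.9, 0.8], "attack"), ([0.8, 0.9], "attack")]
scores = fuzzy_knn([0.15, 0.15], train, k=3, m=2.0)
```

A PSO wrapper would evaluate candidate (k, m) pairs (and feature subsets) by the resulting detection accuracy and keep the best-scoring particle.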
Abstract: In recent years, spam has become a big problem for Internet and electronic communication, and many techniques have been developed to fight it. In this paper, an overview of existing e-mail spam filtering methods is given. The classification, evaluation, and comparison of traditional and learning-based methods are provided. Some personal anti-spam products are tested and compared. A statement for a new approach in spam filtering is also considered.
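As a concrete instance of the learning-based family such surveys cover, here is a minimal multinomial naive Bayes filter with add-one smoothing (the toy messages are invented for illustration):

```python
import math
from collections import Counter

def train_nb(messages):
    """Train a multinomial naive Bayes spam filter: count words per class
    and documents per class, with the vocabulary for add-one smoothing."""
    counts = {"spam": Counter(), "ham": Counter()}
    docs = Counter()
    for text, label in messages:
        docs[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, docs, vocab

def classify(text, counts, docs, vocab):
    """Pick the class with the highest smoothed log-posterior."""
    scores = {}
    for label in ("spam", "ham"):
        total = sum(counts[label].values())
        score = math.log(docs[label] / sum(docs.values()))  # class prior
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

mail = [("win free money now", "spam"), ("free prize claim now", "spam"),
        ("meeting agenda for monday", "ham"), ("lunch on monday", "ham")]
model = train_nb(mail)
label = classify("claim your free money", *model)
```

Real filters add feature engineering (headers, URLs, obfuscation handling) and threshold tuning on top of this core model.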