Journal Articles
23 articles found
Asynchronous Learning-Based Output Feedback Sliding Mode Control for Semi-Markov Jump Systems: A Descriptor Approach
1
Authors: Zheng Wu, Yiyun Zhao, Fanbiao Li, Tao Yang, Yang Shi, Weihua Gui. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 6, pp. 1358-1369 (12 pages).
This paper presents an asynchronous output-feedback control strategy for semi-Markovian systems via a sliding mode-based learning technique. Compared with most literature results that require exact prior knowledge of system state and mode information, an asynchronous output-feedback sliding surface is adopted in the case of incompletely available states and the non-synchronization phenomenon. The holonomic dynamics of the sliding mode are characterized by a descriptor system in which the switching surface is regarded as the fast subsystem and the system dynamics are viewed as the slow subsystem. Based upon the co-occurrence of the two subsystems, a sufficient stochastic admissibility criterion for the holonomic dynamics is derived by utilizing the characteristics of cumulative distribution functions. Furthermore, a recursive learning controller is formulated to guarantee the reachability of the sliding manifold and realize chattering reduction of the asynchronous switching and sliding motion. Finally, the proposed theoretical method is substantiated through two numerical simulations with a practical continuous stirred tank reactor and an F-404 aircraft engine model, respectively.
Keywords: asynchronous switching; learning-based control; output feedback; semi-Markovian jump systems; sliding mode control (SMC)
Deep learning-based magnetic resonance imaging reconstruction for improving the image quality of reduced-field-of-view diffusion-weighted imaging of the pancreas (Cited by 1)
2
Authors: Yukihisa Takayama, Keisuke Sato, Shinji Tanaka, Ryo Murayama, Nahoko Goto, Kengo Yoshimitsu. World Journal of Radiology, 2023, Issue 12, pp. 338-349 (12 pages).
BACKGROUND: It has been reported that deep learning-based reconstruction (DLR) can reduce image noise and artifacts, thereby improving the signal-to-noise ratio and image sharpness. However, no previous studies have evaluated the efficacy of DLR in improving image quality in reduced-field-of-view (reduced-FOV) diffusion-weighted imaging (DWI) [field-of-view optimized and constrained undistorted single-shot (FOCUS)] of the pancreas. We hypothesized that a combination of these techniques would improve DWI image quality without prolonging the scan time but would influence the apparent diffusion coefficient calculation. AIM: To evaluate the efficacy of DLR for image quality improvement of FOCUS of the pancreas. METHODS: This retrospective study evaluated 37 patients with pancreatic cystic lesions who underwent magnetic resonance imaging between August 2021 and October 2021. We evaluated three types of FOCUS examinations: FOCUS with DLR (FOCUS-DLR+), FOCUS without DLR (FOCUS-DLR−), and conventional FOCUS (FOCUS-conv). The three types of FOCUS and their apparent diffusion coefficient (ADC) maps were compared qualitatively and quantitatively. RESULTS: FOCUS-DLR+ (3.62, average score of two radiologists) showed significantly better qualitative scores for image noise than FOCUS-DLR− (2.62) and FOCUS-conv (2.88) (P < 0.05). Furthermore, FOCUS-DLR+ showed the highest contrast ratios (CRs) between cystic lesions and the pancreatic parenchyma for b-values of 0 and 600 s/mm² (0.72 ± 0.08 and 0.68 ± 0.08), compared with FOCUS-DLR− (0.62 ± 0.21 and 0.62 ± 0.21) (P < 0.05). FOCUS-DLR+ provided significantly higher ADCs of the pancreas and lesion (1.44 ± 0.24 and 3.00 ± 0.66) than FOCUS-DLR− (1.39 ± 0.22 and 2.86 ± 0.61) and significantly lower ADCs than FOCUS-conv (1.84 ± 0.45 and 3.32 ± 0.70) (P < 0.05), respectively. CONCLUSION: This study evaluated the efficacy of DLR for image quality improvement in reduced-FOV DWI of the pancreas. DLR can significantly denoise images without prolonging the scan time or decreasing the spatial resolution. The denoising level of DWI can be controlled to make the images appear more natural to the human eye. However, this study revealed that DLR did not ameliorate pancreatic distortion. Additionally, physicians should pay attention to the interpretation of ADCs after DLR application, because ADCs are significantly changed by DLR.
Keywords: deep learning-based reconstruction; magnetic resonance imaging; reduced field-of-view; diffusion-weighted imaging; pancreas
Machine Learning-Based Models for Magnetic Resonance Imaging (MRI)-Based Brain Tumor Classification
3
Authors: Abdullah A. Asiri, Bilal Khan, Fazal Muhammad, Shams ur Rahman, Hassan A. Alshamrani, Khalaf A. Alshamrani, Muhammad Irfan, Fawaz F. Alqhtani. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 4, pp. 299-312 (14 pages).
In the medical profession, recent technological advancements play an essential role in the early detection and categorization of many diseases that cause mortality. Techniques for detecting illness from magnetic resonance images are advancing daily, reducing the reliance on manual inspection by humans. Automatic (computerized) illness detection in medical imaging has become an emergent area in several medical diagnostic applications. Various diseases that cause death need to be identified through such techniques and technologies to reduce the mortality ratio. The brain tumor is one of the most common causes of death. Researchers have already proposed various models for the classification and detection of tumors, each with its strengths and weaknesses, but there is still a need to improve the classification process with improved efficiency. In this study, we give an in-depth analysis of six distinct machine learning (ML) algorithms, including Random Forest (RF), Naïve Bayes (NB), Neural Networks (NN), CN2 Rule Induction (CN2), Support Vector Machine (SVM), and Decision Tree (Tree), to address this gap in improving accuracy. On the Kaggle dataset, these strategies are tested using classification accuracy, the area under the Receiver Operating Characteristic (ROC) curve, precision, recall, and F1 score (F1). The training and testing process is strengthened by using a 10-fold cross-validation technique. The results show that SVM outperforms the other algorithms, with 95.3% accuracy. (An illustrative sketch is given below.)
Keywords: MRI images; brain tumor; machine learning-based classification
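The abstract above compares several standard classifiers under 10-fold cross-validation. The following minimal sketch illustrates how such a comparison could be set up with scikit-learn; the feature matrix `X` and labels `y` are synthetic placeholders (the paper does not specify its feature extraction), CN2 rule induction is omitted because it has no scikit-learn counterpart, and nothing here reproduces the paper's actual pipeline or results.

```python
# Hedged sketch: comparing classifiers with 10-fold cross-validation (scikit-learn).
# X, y are synthetic placeholders for MRI-derived feature vectors and tumor labels.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # placeholder feature vectors
y = rng.integers(0, 2, size=200)        # placeholder labels (tumor / no tumor)

models = {
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "Naive Bayes": GaussianNB(),
    "Neural Network": MLPClassifier(max_iter=500),
    "Decision Tree": DecisionTreeClassifier(),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```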
An Overview and Experimental Study of Learning-Based Optimization Algorithms for the Vehicle Routing Problem (Cited by 2)
4
Authors: Bingjie Li, Guohua Wu, Yongming He, Mingfeng Fan, Witold Pedrycz. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, Issue 7, pp. 1115-1138 (24 pages).
The vehicle routing problem (VRP) is a typical discrete combinatorial optimization problem, and many models and algorithms have been proposed to solve the VRP and its variants. Although existing approaches have contributed significantly to the development of this field, these approaches are either limited in problem size or need manual intervention in choosing parameters. To overcome these difficulties, many studies have considered learning-based optimization (LBO) algorithms for solving the VRP. This paper reviews recent advances in this field and divides relevant approaches into end-to-end approaches and step-by-step approaches. We performed a statistical analysis of the reviewed articles from various aspects and designed three experiments to evaluate the performance of four representative LBO algorithms. Finally, we summarize the types of problems to which different LBO algorithms are applicable and suggest directions in which researchers can improve LBO algorithms.
Keywords: end-to-end approaches; learning-based optimization (LBO) algorithms; reinforcement learning; step-by-step approaches; vehicle routing problem (VRP)
DLBT: Deep Learning-Based Transformer to Generate Pseudo-Code from Source Code
5
Authors: Walaa Gad, Anas Alokla, Waleed Nazih, Mustafa Aref, Abdel-badeeh Salem. Computers, Materials & Continua (SCIE, EI), 2022, Issue 2, pp. 3117-3132 (16 pages).
Understanding the content of source code and its regular expressions is very difficult when they are written in an unfamiliar language. Pseudo-code explains and describes the content of the code without using the syntax or technology of a programming language. However, writing pseudo-code for each code instruction is laborious. Recently, neural machine translation has been used to generate textual descriptions for source code. In this paper, a novel deep learning-based transformer (DLBT) model is proposed for automatic pseudo-code generation from source code. The proposed model uses deep learning based on neural machine translation (NMT) to work as a language translator. The DLBT is based on the transformer, which is an encoder-decoder structure. There are three major components: tokenizer and embeddings, transformer, and post-processing. Each code line is tokenized into dense vectors. The transformer then captures the relatedness between the source code and the matching pseudo-code without the need for a recurrent neural network (RNN). At the post-processing step, the generated pseudo-code is optimized. The proposed model is assessed using a real Python dataset, which contains more than 18,800 lines of source code written in Python. The experiments show promising performance compared with other machine translation methods such as RNNs. The proposed DLBT records 47.32 and 68.49 for the accuracy and BLEU performance measures, respectively. (An illustrative sketch is given below.)
Keywords: natural language processing; long short-term memory; neural machine translation; pseudo-code generation; deep learning-based transformer
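The abstract above describes an encoder-decoder transformer that translates tokenized source code into pseudo-code. The sketch below shows the general shape of such a sequence-to-sequence model using PyTorch's built-in `nn.Transformer`; the vocabulary sizes, dimensions, and the toy forward pass are illustrative assumptions, not the DLBT architecture or its trained weights.

```python
# Hedged sketch: a tiny code-to-pseudo-code seq2seq transformer (PyTorch).
# Vocabulary sizes and hyperparameters are illustrative placeholders;
# positional encodings are omitted for brevity.
import torch
import torch.nn as nn

class CodeToPseudo(nn.Module):
    def __init__(self, src_vocab=5000, tgt_vocab=5000, d_model=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=3, num_decoder_layers=3,
            dim_feedforward=512, batch_first=True)
        self.out = nn.Linear(d_model, tgt_vocab)   # per-token logits

    def forward(self, src_ids, tgt_ids):
        # Causal mask so each pseudo-code token only attends to earlier ones.
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        hidden = self.transformer(self.src_emb(src_ids),
                                  self.tgt_emb(tgt_ids),
                                  tgt_mask=tgt_mask)
        return self.out(hidden)

# Toy forward pass on random token ids (stands in for a tokenized code line).
model = CodeToPseudo()
src = torch.randint(0, 5000, (2, 20))   # batch of 2 "code lines", 20 tokens each
tgt = torch.randint(0, 5000, (2, 15))   # shifted pseudo-code tokens
logits = model(src, tgt)                # shape: (2, 15, tgt_vocab)
print(logits.shape)
```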
Efficient Computation Offloading of IoT-Based Workflows Using Discrete Teaching Learning-Based Optimization
6
Authors: Mohamed K. Hussein, Mohamed H. Mousa. Computers, Materials & Continua (SCIE, EI), 2022, Issue 11, pp. 3685-3703 (19 pages).
As the Internet of Things (IoT) and mobile devices have rapidly proliferated, their computationally intensive applications have developed into complex, concurrent IoT-based workflows involving multiple interdependent tasks. By exploiting its low latency and high bandwidth, mobile edge computing (MEC) has emerged to achieve high-performance computation offloading of these applications and satisfy the quality-of-service requirements of workflows and devices. In this study, we propose an offloading strategy for IoT-based workflows in a high-performance MEC environment. The proposed task-based offloading strategy is formulated as an optimization problem that includes task dependency, communication costs, workflow constraints, device energy consumption, and the heterogeneous characteristics of the edge environment. In addition, the optimal placement of workflow tasks is determined using a discrete teaching learning-based optimization (DTLBO) metaheuristic. Extensive experimental evaluations demonstrate that the proposed offloading strategy is effective at minimizing the energy consumption of mobile devices and reducing the execution times of workflows compared to offloading strategies based on other metaheuristics, including particle swarm optimization and ant colony optimization.
Keywords: high-performance computing; Internet of Things (IoT); mobile edge computing (MEC); workflows; computation offloading; teaching learning-based optimization
Multi-criteria user selection scheme for learning-based multiuser MIMO cognitive radio networks
7
Authors: 王妮炜, 费泽松, 邢成文, 倪吉庆, 匡镜明. Journal of Beijing Institute of Technology (EI, CAS), 2015, Issue 2, pp. 240-245 (6 pages).
For multiuser multiple-input multiple-output (MIMO) cognitive radio (CR) networks, a four-stage transmission structure is proposed. In the learning stage, a learning-based algorithm with low overhead and high flexibility is exploited to estimate the channel state information (CSI) between primary (PR) terminals and CR terminals. By using channel training in the second stage of the CR frame, the channels between CR terminals can be obtained. In the third stage, a multi-criteria user selection scheme is proposed to choose the best user set for service. In the data transmission stage, the total capacity maximization problem is solved under the interference constraint of the PR terminals. Finally, simulation results show that the multi-criteria user selection scheme, which has the ability to change the weights of the criteria, is more flexible than the other three traditional schemes and achieves a trade-off between user fairness and system performance. (An illustrative sketch is given below.)
Keywords: learning-based; multiple-input multiple-output (MIMO); cognitive radio (CR) network; multiuser
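The abstract above selects a user set by scoring candidates against several weighted criteria. The sketch below shows a generic weighted-sum selection of the top-k users from per-user metrics; the specific criteria (here channel gain, fairness credit, and interference to the primary user) and the weights are assumptions for illustration and do not reproduce the paper's scheme.

```python
# Hedged sketch: weighted multi-criteria selection of k users (NumPy).
# The criteria and weights below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_users, k = 8, 3

# Per-user metrics (placeholders): higher is better except interference.
channel_gain = rng.rayleigh(1.0, n_users)        # estimated CR channel quality
fairness_credit = rng.uniform(0, 1, n_users)     # grows for rarely served users
interference = rng.uniform(0, 1, n_users)        # interference caused to PR terminals

def normalize(x):
    """Scale a metric to [0, 1] so different criteria are comparable."""
    return (x - x.min()) / (np.ptp(x) + 1e-12)

# Adjustable weights let the operator trade fairness against throughput.
w_gain, w_fair, w_intf = 0.5, 0.3, 0.2
score = (w_gain * normalize(channel_gain)
         + w_fair * normalize(fairness_credit)
         - w_intf * normalize(interference))

selected = np.argsort(score)[::-1][:k]            # top-k users by composite score
print("selected users:", selected, "scores:", np.round(score[selected], 3))
```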
A novel improved teaching and learning-based-optimization algorithm and its application in a large-scale inventory control system
8
Authors: Zhixiang Chen. International Journal of Intelligent Computing and Cybernetics (EI), 2023, Issue 3, pp. 443-501 (59 pages).
Purpose – The purpose of this paper is to propose a novel improved teaching and learning-based optimization algorithm (TLBO) to enhance its convergence ability and solution accuracy, making it more suitable for solving large-scale optimization issues. Design/methodology/approach – Utilizing multiple cooperation mechanisms in the teaching and learning processes, an improved TLBO named CTLBO (collectivism teaching-learning-based optimization) is developed. This algorithm introduces a new preparation phase before the teaching and learning phases and applies multiple teacher-learner cooperation strategies in the teaching and learning processes. Applying a modularization idea, six variants of CTLBO are constructed based on the configuration structure of the CTLBO operators. To identify the best configuration, 30 general benchmark functions are tested. Then, three experiments using CEC2020 (2020 IEEE Conference on Evolutionary Computation) constrained optimization problems are conducted to compare CTLBO with other algorithms. Finally, a large-scale industrial engineering problem is taken as the application case. Findings – The experiment with 30 general unconstrained benchmark functions indicates that CTLBO-c is the best configuration of all the variants of CTLBO. Three experiments using CEC2020-constrained optimization problems show that CTLBO is a powerful algorithm for solving large-scale constrained optimization problems. The industrial engineering application case shows that CTLBO and its variant CTLBO-c can effectively solve the large-scale real problem, while the accuracies of TLBO and other meta-heuristic algorithms are far lower than those of CTLBO and CTLBO-c, revealing that CTLBO and its variants can far outperform other algorithms. CTLBO is an excellent algorithm for solving large-scale complex optimization issues. Originality/value – The innovation of this paper lies in the improvement strategies that change the original TLBO, with its two-phase teaching-learning mechanism, into a new algorithm, CTLBO, with a three-phase multiple-cooperation teaching-learning mechanism, a self-learning mechanism in teaching, and a group teaching mechanism. CTLBO has important application value in solving large-scale optimization problems. (An illustrative sketch of the baseline TLBO is given below.)
Keywords: teaching and learning-based optimization; group-individual multi-mode cooperation; performance-based group teaching; teacher self-learning; team learning
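For context, the sketch below implements the standard two-phase TLBO that the paper improves upon (a teacher phase pulls learners toward the best solution; a learner phase lets pairs of learners teach each other), applied to a simple sphere function. It is a baseline illustration only and does not include the paper's CTLBO preparation phase or cooperation strategies.

```python
# Hedged sketch: the classical two-phase TLBO on a sphere benchmark (NumPy).
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
pop_size, dim, iters = 20, 10, 100
lo, hi = -5.0, 5.0
pop = rng.uniform(lo, hi, (pop_size, dim))
fit = np.array([sphere(x) for x in pop])

for _ in range(iters):
    # --- Teacher phase: move everyone toward the current best learner. ---
    teacher = pop[fit.argmin()]
    mean = pop.mean(axis=0)
    TF = rng.integers(1, 3)                      # teaching factor in {1, 2}
    for i in range(pop_size):
        cand = np.clip(pop[i] + rng.random(dim) * (teacher - TF * mean), lo, hi)
        f = sphere(cand)
        if f < fit[i]:
            pop[i], fit[i] = cand, f
    # --- Learner phase: each learner interacts with a random peer. ---
    for i in range(pop_size):
        j = int(rng.integers(pop_size))
        if j == i:
            continue
        direction = pop[j] - pop[i] if fit[j] < fit[i] else pop[i] - pop[j]
        cand = np.clip(pop[i] + rng.random(dim) * direction, lo, hi)
        f = sphere(cand)
        if f < fit[i]:
            pop[i], fit[i] = cand, f

print("best value found:", fit.min())
```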
Learning-based adaptive optimal output regulation of linear and nonlinear systems: an overview (Cited by 2)
9
Authors: Weinan Gao, Zhong-Ping Jiang. Control Theory and Technology (EI, CSCD), 2022, Issue 1, pp. 1-19 (19 pages).
This paper reviews recent developments in learning-based adaptive optimal output regulation, which aims to solve the problem of adaptive and optimal asymptotic tracking with disturbance rejection. The proposed framework brings together two separate topics, output regulation and adaptive dynamic programming, that have been under extensive investigation due to their broad applications in modern control engineering. Under this framework, one can solve optimal output regulation problems of linear, partially linear, nonlinear, and multi-agent systems in a data-driven manner. We also review some practical applications based on this framework, such as semi-autonomous vehicles, connected and autonomous vehicles, and nonlinear oscillators.
Keywords: adaptive optimal output regulation; adaptive dynamic programming; reinforcement learning; learning-based control
A learning-based approach for solving shear stress vector distribution from shear-sensitive liquid crystal coating images (Cited by 1)
10
Authors: Jisong ZHAO, Jinming ZHANG, Boqiao WANG. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2022, Issue 4, pp. 55-65 (11 pages).
A learning-based approach for solving wall shear stresses from Shear-Sensitive Liquid Crystal Coating (SSLCC) color images is presented in this paper. The approach learns and establishes the mapping relationship between the SSLCC color-change responses in different observation directions and the shear stress vectors, and then uses this mapping to solve wall shear stress vectors from SSLCC color images. Experimental results show that the proposed approach can solve wall shear stress vectors using two or more SSLCC images, and even using only one image for a symmetrical flow field. The accuracy of the approach using four or more observations is found to be comparable to that of the traditional multi-view Gauss curve fitting approach; the accuracy is slightly reduced when using two or fewer observations. The computational efficiency is significantly improved compared with the traditional Gauss curve fitting approach, and the wall shear stress vectors can be solved in nearly real time. The learning-based approach has no strict requirements on the illumination direction and observation directions and is therefore more flexible to use in practical wind tunnel measurements than traditional liquid crystal-based methods. (An illustrative sketch is given below.)
Keywords: shear stress; measurement; shear-sensitive liquid crystal; learning-based approach; calibration
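The core idea above is a regression from multi-view color responses to a shear stress vector. The sketch below fits a small multilayer-perceptron regressor to synthetic calibration pairs (hue-like values from several views mapped to stress magnitude and direction); the data generation, view model, and network size are all assumptions and are unrelated to the paper's actual calibration procedure.

```python
# Hedged sketch: learning a mapping from multi-view color responses to a
# shear stress vector (scikit-learn). All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_views = 2000, 4

# Targets: shear stress magnitude and direction (placeholders).
tau = rng.uniform(5.0, 50.0, n_samples)              # magnitude, Pa
phi = rng.uniform(-np.pi, np.pi, n_samples)          # direction, rad

# Inputs: a fake "hue response" per observation direction, driven by (tau, phi).
view_angles = np.linspace(0, np.pi, n_views, endpoint=False)
hues = np.stack([tau * np.cos(phi - a) + rng.normal(0, 0.5, n_samples)
                 for a in view_angles], axis=1)

# Predict (tau, cos(phi), sin(phi)) to avoid the angle wrap-around problem.
Y = np.column_stack([tau, np.cos(phi), np.sin(phi)])
X_tr, X_te, Y_tr, Y_te = train_test_split(hues, Y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_tr, Y_tr)
print("R^2 on held-out samples:", round(model.score(X_te, Y_te), 3))
```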
Learning-based parameter prediction for quality control in three-dimensional medical image compression
11
Authors: Yuxuan HOU, Zhong REN, Yubo TAO, Wei CHEN. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2021, Issue 9, pp. 1169-1178 (10 pages).
Quality control is of vital importance in compressing three-dimensional (3D) medical imaging data. Optimal compression parameters need to be determined based on the specific quality requirement. In high efficiency video coding (HEVC), regarded as the state-of-the-art compression tool, the quantization parameter (QP) plays a dominant role in controlling quality. Directly applying a video-based scheme to predict the ideal parameters for 3D medical image compression cannot guarantee satisfactory results. In this paper we propose a learning-based parameter prediction scheme to achieve efficient quality control. Its kernel is a support vector regression (SVR) based learning model that is capable of predicting the optimal QP from both video-based and structural image features extracted directly from raw data, avoiding time-consuming processes such as pre-encoding and iteration, which are often needed in existing techniques. Experimental results on several datasets verify that our approach outperforms current video-based quality control methods. (An illustrative sketch is given below.)
Keywords: medical image compression; high efficiency video coding (HEVC); quality control; learning-based
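The kernel of the scheme above is an SVR model that maps image features to an optimal quantization parameter. The sketch below shows that regression step with scikit-learn on synthetic feature vectors; the features, the target QP values, and the training data are placeholders, not the paper's feature set or encoder.

```python
# Hedged sketch: SVR predicting an optimal HEVC quantization parameter (QP)
# from image features (scikit-learn). Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_volumes, n_features = 300, 12

X = rng.normal(size=(n_volumes, n_features))        # e.g. texture / gradient stats
# Pretend the ideal QP for a target quality depends on a few features plus noise.
qp = np.clip(30 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(0, 1, n_volumes), 0, 51)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X, qp)

new_volume_features = rng.normal(size=(1, n_features))
predicted_qp = int(round(model.predict(new_volume_features)[0]))
print("predicted QP (clamped to the HEVC range):", max(0, min(51, predicted_qp)))
```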
Scientific Advances and Weather Services of the China Meteorological Administration’s National Forecasting Systems during the Beijing 2022 Winter Olympics
12
Authors: Guo DENG, Xueshun SHEN, Jun DU, Jiandong GONG, Hua TONG, Liantang DENG, Zhifang XU, Jing CHEN, Jian SUN, Yong WANG, Jiangkai HU, Jianjie WANG, Mingxuan CHEN, Huiling YUAN, Yutao ZHANG, Hongqi LI, Yuanzhe WANG, Li GAO, Li SHENG, Da LI, Li LI, Hao WANG, Ying ZHAO, Yinglin LI, Zhili LIU, Wenhua GUO. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, Issue 5, pp. 767-776 (10 pages).
Since the Beijing 2022 Winter Olympics was the first Winter Olympics in history held in continental winter monsoon climate conditions across complex terrain, there was a deficiency of relevant research, operational techniques, and experience, which made providing meteorological services for this event particularly challenging. The China Meteorological Administration (CMA) Earth System Modeling and Prediction Centre achieved breakthroughs in research on short- and medium-term deterministic and ensemble numerical predictions. Several key technologies crucial for precise winter weather services during the Winter Olympics were developed. A comprehensive framework, known as the Operational System for High-Precision Weather Forecasting for the Winter Olympics, was established. Some of these advancements represent the highest level of capabilities currently available in China. The meteorological service provided to the Beijing 2022 Games also exceeded previous Winter Olympic Games in both variety and quality. This included achievements such as the "100-meter level, minute level" downscaled spatiotemporal resolution and forecasts spanning 1 to 15 days. Around 30 new technologies and over 60 kinds of products that align with the requirements of the Winter Olympics Organizing Committee were developed, and many of these techniques have since been integrated into the CMA's operational national forecasting systems. These accomplishments were facilitated by a dedicated weather forecasting and research initiative, in conjunction with the preexisting real-time operational forecasting systems of the CMA. This program represents one of the five subprograms of the WMO's high-impact weather forecasting demonstration project (SMART2022), and continues to play an important role in the WMO Regional Association (RA) II Research Development Project (Hangzhou RDP). Therefore, the research accomplishments and meteorological service experiences from this program will be carried forward into forthcoming high-impact weather forecasting activities. This article provides an overview and assessment of this program and the operational national forecasting systems.
Keywords: Beijing Winter Olympic Games; CMA national forecasting system; data assimilation; ensemble forecast; bias correction and downscaling; machine learning-based fusion methods
A semantic segmentation-based underwater acoustic image transmission framework for cooperative SLAM
13
Authors: Jiaxu Li, Guangyao Han, Shuai Chang, Xiaomei Fu. Defence Technology (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 339-351 (13 pages).
With the development of underwater sonar detection technology, the simultaneous localization and mapping (SLAM) approach has attracted much attention in the underwater navigation field in recent years. However, the weak detection ability of a single vehicle limits SLAM performance in wide areas, so cooperative SLAM using multiple vehicles has become an important research direction. The key factor in cooperative SLAM is timely and efficient sonar image transmission among underwater vehicles. However, the limited bandwidth of underwater acoustic channels conflicts with the large amount of sonar image data, so it is essential to compress the images before transmission. Recently, deep neural networks have shown great value in image compression by virtue of their powerful learning ability, but existing neural network-based sonar image compression methods usually focus on pixel-level information without semantic-level information. In this paper, we propose a novel underwater acoustic transmission scheme called UAT-SSIC, which includes a semantic segmentation-based sonar image compression (SSIC) framework and a joint source-channel codec, to improve the accuracy of the semantic information of the reconstructed sonar image at the receiver. The SSIC framework consists of an auto-encoder-structure-based sonar image compression network, which is measured by a semantic segmentation network's residual. Considering that sonar images have blurred target edges, the semantic segmentation network uses a special dilated convolutional neural network (DiCNN) to enhance segmentation accuracy by expanding the range of receptive fields. A joint source-channel codec with unequal error protection is proposed that adjusts the power level of the transmitted data to deal with sonar image transmission errors caused by the harsh underwater acoustic channel. Experimental results demonstrate that our method preserves more semantic information, with advantages over existing methods at the same compression ratio. It also improves the error tolerance and packet-loss resistance of transmission. (An illustrative sketch is given below.)
Keywords: semantic segmentation; sonar image transmission; learning-based compression
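The segmentation network above relies on dilated convolutions to enlarge the receptive field without extra downsampling. The sketch below is a minimal PyTorch block showing how stacking convolutions with increasing dilation grows the receptive field; the channel counts and dilation rates are illustrative and do not correspond to the DiCNN in the paper.

```python
# Hedged sketch: a small dilated-convolution stack (PyTorch).
# Dilation rates and channel sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    def __init__(self, channels=16, dilations=(1, 2, 4, 8)):
        super().__init__()
        layers = []
        for d in dilations:
            # padding = d keeps the spatial size constant for a 3x3 kernel.
            layers += [nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

# Each 3x3 layer with dilation d adds 2*d to the receptive field, so this stack
# sees a 31x31 neighborhood per output pixel: 1 + 2*(1 + 2 + 4 + 8).
block = DilatedBlock()
x = torch.randn(1, 16, 64, 64)        # stand-in for a sonar feature map
print(block(x).shape)                 # torch.Size([1, 16, 64, 64])
```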
Parameter Tuned Machine Learning Based Emotion Recognition on Arabic Twitter Data
14
Authors: Ibrahim M. Alwayle, Badriyya B. Al-onazi, Jaber S. Alzahrani, Khaled M. Alalayah, Khadija M. Alaidarous, Ibrahim Abdulrab Ahmed, Mahmoud Othman, Abdelwahed Motwakel. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 9, pp. 3423-3438 (16 pages).
Arabic is one of the most widely spoken languages across the globe. However, there are fewer studies concerning sentiment analysis (SA) in Arabic. In recent years, the sentiments and emotions expressed in tweets have received significant interest. The substantial role played by the Arab region in international politics and the global economy has urged the need to examine the sentiments and emotions expressed in the Arabic language. Two common models are available to address emotion classification problems: machine learning and lexicon-based approaches. With this motivation, the current research article develops a Teaching and Learning Optimization with Machine Learning Based Emotion Recognition and Classification (TLBOML-ERC) model for sentiment analysis of tweets made in the Arabic language. The presented TLBOML-ERC model focuses on recognising the emotions and sentiments expressed in Arabic tweets. To attain this, the proposed TLBOML-ERC model initially carries out data pre-processing and a Continuous Bag of Words (CBOW)-based word embedding process. In addition, a Denoising Autoencoder (DAE) model is exploited to categorise the different emotions expressed in Arabic tweets. To improve the efficacy of the DAE model, the Teaching and Learning-based Optimization (TLBO) algorithm is utilized to optimize its parameters. The proposed TLBOML-ERC method was experimentally validated with the help of an Arabic tweets dataset. The obtained results show the promising performance of the proposed TLBOML-ERC model on Arabic emotion classification.
Keywords: Arabic language; Twitter data; machine learning; teaching and learning-based optimization; sentiment analysis; emotion classification
Intelligent Networked Control of Vasoactive Drug Infusion for Patients with Uncertain Sensitivity
15
Authors: Mohamed Esmail Karar, Amged Sayed A. Mahmoud. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 10, pp. 721-739 (19 pages).
Abnormally high blood pressure, or hypertension, is still the leading risk factor for death and disability worldwide. This paper presents a new intelligent networked control of a medical drug infusion system to regulate the mean arterial blood pressure of hypertensive patients with different health status conditions. The infusion of vasoactive drugs to patients involves various issues, such as variation of sensitivity and noise, which require effective and powerful systems to ensure robustness and good performance. The developed intelligent networked system is composed of a hybrid control scheme of interval type-2 fuzzy (IT2F) logic and the teaching-learning-based optimization (TLBO) algorithm. This networked IT2F control is capable of successfully managing the uncertain sensitivity of the patient to anti-hypertensive drugs. To avoid manual selection of control parameter values, the TLBO algorithm is used to automatically find the best parameter values of the networked IT2F controller. The simulation results showed that the optimized networked IT2F controller achieved good performance under external disturbances. A comparative study has also been conducted to emphasize the outperformance of the developed controller against traditional PID and type-1 fuzzy controllers. Moreover, the comparative evaluation demonstrated that the performance of the developed networked IT2F controller is superior to other control strategies in previous studies for handling unknown patients' sensitivity to infused vasoactive drugs in a noisy environment.
Keywords: intelligent medical systems; telemedicine; fuzzy control; teaching learning-based optimization
A Retrieval Matching Method Based on Case Learning for 3D Model (Cited by 1)
16
Authors: Zhi Liu, Qihua Chen, Caihong Xu. Journal of Software Engineering and Applications, 2012, Issue 7, pp. 467-471 (5 pages).
The similarity metric in traditional content-based 3D model retrieval methods mainly borrows the distance metric algorithms used in 2D image retrieval, but this limits the matching breadth. This paper proposes a new retrieval matching method based on case learning to enlarge the retrieval matching scope. In this method, the shortest path in graph theory is used to analyze how the nodes on the path between the query model and a matched model affect the similarity. Then, a label propagation method and a k-nearest-neighbor method based on case learning are studied and used to improve retrieval efficiency on top of the existing feature extraction. (An illustrative sketch is given below.)
Keywords: case learning-based; retrieval matching; similarity matching
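The abstract above combines k-nearest-neighbor matching with label propagation over model features. The sketch below shows those two ingredients with scikit-learn on synthetic 3D-model feature vectors: labels known for a few "case" models are propagated to unlabeled ones, and a query is then matched to its nearest neighbors. The features and labels are placeholders, not the paper's graph-based formulation.

```python
# Hedged sketch: label propagation plus k-nearest-neighbor matching for
# case-based 3D model retrieval (scikit-learn). Feature vectors are synthetic.
import numpy as np
from sklearn.semi_supervised import LabelPropagation
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_models, n_features = 100, 32
features = rng.normal(size=(n_models, n_features))   # fake 3D shape descriptors

# Only a handful of "case" models carry category labels; -1 means unlabeled.
labels = np.full(n_models, -1)
labels[:10] = rng.integers(0, 3, size=10)

# Spread the known case labels to the remaining models over the feature graph.
propagator = LabelPropagation(kernel="knn", n_neighbors=7)
propagator.fit(features, labels)
all_labels = propagator.transduction_

# Retrieval: find the k nearest models to a query, then report their categories.
knn = NearestNeighbors(n_neighbors=5).fit(features)
query = rng.normal(size=(1, n_features))
_, idx = knn.kneighbors(query)
print("matched model ids:", idx[0])
print("their propagated categories:", all_labels[idx[0]])
```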
Prediction Models for COVID-19 Integrating Age Groups, Gender, and Underlying Conditions
17
Authors: Imran Ashraf, Waleed S. Alnumay, Rashid Ali, Soojung Hur, Ali Kashif Bashir, Yousaf Bin Zikria. Computers, Materials & Continua (SCIE, EI), 2021, Issue 6, pp. 3009-3044 (36 pages).
The COVID-19 pandemic has caused hundreds of thousands of deaths and millions of infections worldwide, and the loss of trillions of dollars for many large economies. It poses a grave threat to the human population, with an excessive number of patients constituting an unprecedented challenge with which health systems have to cope. Researchers from many domains have devised diverse approaches for the timely diagnosis of COVID-19 to facilitate medical responses. In the same vein, a wide variety of research studies have investigated underlying medical conditions as indicators of the severity and mortality of COVID-19, and the role of age groups and gender in the probability of infection. This study aimed to review, analyze, and critically appraise published works that report on various factors to explain their relationship with COVID-19. Such studies span a wide range, including descriptive analyses, ratio analyses, and cohort, prospective, and retrospective studies. Various studies that describe indicators for determining the probability of infection among the general population, as well as the risk factors associated with severe illness and mortality, are critically analyzed, and these findings are discussed in detail. A comprehensive analysis was conducted on research studies that investigated the perceived differences in the vulnerability of different age groups and genders to severe outcomes of COVID-19. Studies incorporating important demographic, health, and socioeconomic characteristics are highlighted to emphasize their importance. Predominantly, the lack of an appropriate dataset that contains demographic, personal health, and socioeconomic information affects the efficacy and efficiency of the discussed methods. Results are also overstated owing both to the exclusion of quarantined patients and patients with mild symptoms and to the inclusion of data from hospitals where the majority of the cases are potentially ill.
Keywords: COVID-19; age and gender vulnerability for COVID-19; machine learning-based prognosis; COVID-19 vulnerability; psychological factors; prediction of COVID-19
Intrusion Detection System Using FKNN and Improved PSO
18
Authors: Raniyah Wazirali. Computers, Materials & Continua (SCIE, EI), 2021, Issue 5, pp. 1429-1445 (17 pages).
Intrusion detection system (IDS) techniques are used in cybersecurity to protect and safeguard sensitive assets. Increasing network security risks can be mitigated by implementing effective IDS methods as a defense mechanism. The proposed research presents an IDS model based on the adaptive fuzzy k-nearest neighbor (FKNN) algorithm. In this method, two parameters, the neighborhood size (k) and the fuzzy strength parameter (m), are determined by particle swarm optimization (PSO). In addition to FKNN parameter optimization, PSO is also used to select the conditional feature subsets for detection. To properly balance the local and global search ability of the PSO approach, two control parameters, the time-varying inertia weight (TVIW) and time-varying acceleration coefficients (TVAC), were applied to the system. In addition, continuous and binary PSO algorithms were both executed on a multi-core platform. The proposed IDS model was compared with other state-of-the-art classifiers. The results of the proposed methodology are superior to the rest of the techniques in terms of classification accuracy, precision, recall, and F-score. The results showed that the proposed method gave the highest performance scores compared to the other conventional algorithms in detecting all the attack types in two datasets. Moreover, the proposed method was able to obtain a large number of true positives and true negatives, with a minimal number of false positives and false negatives. (An illustrative sketch is given below.)
Keywords: FKNN; PSO approach; machine learning-based cybersecurity; intrusion detection
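The abstract above tunes the FKNN neighborhood size k and fuzzy strength m with PSO. The sketch below implements a simplified fuzzy k-NN vote and a bare-bones PSO loop over (k, m) on synthetic data; the fitness function, parameter bounds, and PSO constants are assumptions, and the paper's feature-subset selection and time-varying PSO coefficients are omitted.

```python
# Hedged sketch: a simplified fuzzy k-NN classifier whose (k, m) parameters are
# tuned by a tiny particle swarm. Data and PSO settings are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def fknn_predict(X_tr, y_tr, X_te, k, m):
    """Fuzzy k-NN: weight each neighbor's vote by 1 / d^(2/(m-1))."""
    classes = np.unique(y_tr)
    preds = []
    for x in X_te:
        d = np.linalg.norm(X_tr - x, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] ** (2.0 / (m - 1.0)) + 1e-12)
        scores = [w[y_tr[idx] == c].sum() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

def fitness(params, X_tr, y_tr, X_va, y_va):
    k = int(np.clip(round(params[0]), 1, 30))
    m = float(np.clip(params[1], 1.1, 5.0))
    return (fknn_predict(X_tr, y_tr, X_va, k, m) == y_va).mean()

# Toy data standing in for network-traffic features and attack labels.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

# Minimal PSO over (k, m).
rng = np.random.default_rng(0)
pos = np.column_stack([rng.uniform(1, 30, 10), rng.uniform(1.1, 5.0, 10)])
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p, X_tr, y_tr, X_va, y_va) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()
for _ in range(15):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([fitness(p, X_tr, y_tr, X_va, y_va) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best (k, m):", int(round(gbest[0])), round(float(gbest[1]), 2),
      "validation accuracy:", round(float(pbest_fit.max()), 3))
```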
Survey on Spam Filtering Techniques
19
Authors: Saadat Nazirova. Communications and Network, 2011, Issue 3, pp. 153-160 (8 pages).
In recent years, spam has become a big problem for the Internet and electronic communication, and many techniques have been developed to fight it. In this paper, an overview of existing e-mail spam filtering methods is given. The classification, evaluation, and comparison of traditional and learning-based methods are provided. Some personal anti-spam products are tested and compared. A statement for a new approach to spam filtering is also considered. (An illustrative sketch is given below.)
Keywords: e-mail; spam; unsolicited bulk messages; filtering; traditional methods; learning-based methods; classification
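Learning-based spam filtering of the kind surveyed above is commonly illustrated with a bag-of-words naive Bayes classifier. The sketch below trains one with scikit-learn on a few toy messages; the tiny corpus is invented purely for illustration and does not reflect the products evaluated in the paper.

```python
# Hedged sketch: a minimal bag-of-words naive Bayes spam filter (scikit-learn).
# The toy corpus below is invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "cheap meds limited offer",
    "claim your lottery reward today", "exclusive deal just for you",
    "meeting rescheduled to friday", "please review the attached report",
    "lunch tomorrow with the team", "minutes from yesterday's call",
]
labels = ["spam", "spam", "spam", "spam", "ham", "ham", "ham", "ham"]

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

print(filter_model.predict(["free reward offer, claim now",
                            "see the attached meeting report"]))
```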
A survey on facial image deblurring
20
Authors: Bingnan Wang, Fanjiang Xu, Quan Zheng. Computational Visual Media (SCIE, EI, CSCD), 2024, Issue 1, pp. 3-25 (23 pages).
When a facial image is blurred, it significantly affects high-level vision tasks such as face recognition. The purpose of facial image deblurring is to recover a clear image from a blurry input, which can improve recognition accuracy, among other benefits. However, general deblurring methods do not perform well on facial images. Therefore, face deblurring methods have been proposed that improve performance by adding semantic or structural information as specific priors, according to the characteristics of facial images. In this paper, we survey and summarize recently published methods for facial image deblurring, most of which are based on deep learning. First, we provide a brief introduction to the modeling of image blurring. Next, we group face deblurring methods into two categories: model-based methods and deep learning-based methods. Furthermore, we summarize the datasets, loss functions, and performance evaluation metrics commonly used in the neural network training process. We show the performance of classical methods on these datasets and metrics and provide a brief discussion on the differences between model-based and learning-based methods. Finally, we discuss current challenges and possible future research directions.
Keywords: facial image deblurring; model-based; deep learning-based; semantic or structural prior