Journal Articles — 29 results found
Optimal Synergic Deep Learning for COVID-19 Classification Using Chest X-Ray Images
1
Authors: José Escorcia-Gutierrez, Margarita Gamarra, Roosvel Soto-Diaz, Safa Alsafari, Ayman Yafoz, Romany F. Mansour. Computers, Materials & Continua (SCIE, EI), 2023, Issue 6, pp. 5255-5270 (16 pages)
A chest radiology scan can significantly aid the early diagnosis and management of COVID-19, since the virus attacks the lungs. Chest X-ray (CXR) gained much interest after the COVID-19 outbreak thanks to its rapid imaging time, widespread availability, low cost, and portability. In radiological investigations, computer-aided diagnostic tools are implemented to reduce intra- and inter-observer variability. Using recently developed Artificial Intelligence (AI) algorithms and radiological techniques to diagnose and classify disease is advantageous. The current study develops an automatic identification and classification model for CXR images using Gaussian Filtering based Optimized Synergic Deep Learning with the Remora Optimization Algorithm (GF-OSDL-ROA). The method comprises preprocessing and optimization-based classification. The data is preprocessed using Gaussian filtering (GF) to remove extraneous noise from the image's edges. Then, the OSDL model is applied to classify the CXRs under different severity levels based on CXR data. The learning rate of OSDL is optimized with the help of ROA for COVID-19 diagnosis, which constitutes the novelty of the work. The OSDL model applied in this study was validated using the COVID-19 dataset. The proposed OSDL model achieved a classification accuracy of 99.83%, while a conventional Convolutional Neural Network achieved a lower classification accuracy of 98.14%.
Keywords: artificial intelligence; chest X-ray; COVID-19; optimized synergic deep learning; preprocessing; public health
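The Gaussian-filtering preprocessing step described in this abstract can be sketched in plain Python; the 3x3 kernel size and sigma below are illustrative assumptions, not values taken from the paper:

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def gaussian_filter(img, size=3, sigma=1.0):
    """Smooth a 2-D image (list of lists) by convolution, replicating edges."""
    k = gaussian_kernel(size, sigma)
    h, w, c = len(img), len(img[0]), size // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in range(-c, c + 1):
                for dj in range(-c, c + 1):
                    ii = min(max(i + di, 0), h - 1)   # replicate border pixels
                    jj = min(max(j + dj, 0), w - 1)
                    acc += img[ii][jj] * k[di + c][dj + c]
            out[i][j] = acc
    return out

# A flat patch with one noisy spike: smoothing pulls the spike toward its neighbors.
img = [[10.0] * 5 for _ in range(5)]
img[2][2] = 100.0
smoothed = gaussian_filter(img, size=3, sigma=1.0)
```

In practice one would apply such a filter with an image-processing library rather than nested loops; the point here is only the noise-suppression role GF plays before classification.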
Spectrum Sensing Using Optimized Deep Learning Techniques in Reconfigurable Embedded Systems
2
Authors: Priyesh Kumar, Ponniyin Selvan. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 5, pp. 2041-2054 (14 pages)
The exponential growth of the Internet of Things (IoT) and 5G networks has resulted in a maximum number of users, and the role of cognitive radio has become pivotal in handling the crowded users. In this scenario, cognitive radio techniques such as spectrum sensing, spectrum sharing, and dynamic spectrum access will become essential components in wireless IoT communication. IoT devices must learn adaptively from the environment and extract spectrum knowledge and inferred spectrum knowledge by appropriately changing communication parameters such as modulation index, frequency bands, coding rate, etc., to accommodate the above characteristics. Implementing the above learning methods on an embedded chip leads to high latency, high power consumption, and greater chip area utilization. To overcome these problems, we present DEEP HOLE Radio systems, an intelligent system enabling spectrum knowledge extraction from unprocessed samples by optimized deep learning models directly from the Radio Frequency (RF) environment. DEEP HOLE Radio provides (i) an optimized deep learning framework with a good trade-off between latency, power, and utilization; (ii) a complete hardware-software architecture where the SoCs are coupled with radio transceivers for maximum performance. The experimentation has been carried out using GNU Radio software interfaced with Zynq-7000 devices mounted on ESP8266 radio transceivers with inbuilt omnidirectional antennas. The whole spectrum knowledge has been extracted using GNU Radio. These extracted features are used to train the proposed optimized deep learning models, which run in parallel on the Zynq-7000 SoC, consuming less area, power, and latency. The proposed framework has been evaluated and compared with existing frameworks such as RFLearn, Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Deep Neural Networks (DNN). The outcome shows that the proposed framework has outperformed the existing frameworks regarding area, power, and time. Moreover, the experimental results show that the proposed framework decreases the delay, power, and area by 15%, 20%, and 25% respectively, relative to the existing RFLearn and other hardware-constrained frameworks.
Keywords: Internet of Things; cognitive radio; spectrum sharing; optimized deep learning framework; GNU Radio; RFLearn
B^(2)C^(3)NetF^(2): Breast cancer classification using an end-to-end deep learning feature fusion and satin bowerbird optimization controlled Newton Raphson feature selection
3
Authors: Mamuna Fatima, Muhammad Attique Khan, Saima Shaheen, Nouf Abdullah Almujally, Shui-Hua Wang. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, Issue 4, pp. 1374-1390 (17 pages)
Currently, the improvement in AI is mainly related to deep learning techniques that are employed for the classification, identification, and quantification of patterns in clinical images. Deep learning models show more remarkable performance than traditional methods for medical image processing tasks such as skin cancer, colorectal cancer, brain tumour, cardiac disease, breast cancer (BrC), and a few more. The manual diagnosis of medical issues always requires an expert and is also expensive. Therefore, developing computer diagnosis techniques based on deep learning is essential. Breast cancer is the most frequently diagnosed cancer in females, with a rapidly growing incidence. It is estimated that the number of patients with BrC will rise by 70% in the next 20 years. If diagnosed at a later stage, the survival rate of patients with BrC is low; hence, early detection is essential, increasing the survival rate up to 50%. A new framework for BrC classification is presented that utilises deep learning and feature optimization. The significant steps of the presented framework include (i) hybrid contrast enhancement of acquired images, (ii) data augmentation to facilitate better learning of the Convolutional Neural Network (CNN) model, (iii) a pre-trained ResNet-101 model that is modified according to the selected dataset classes, (iv) deep transfer learning based model training for feature extraction, (v) the fusion of features using the proposed highly corrected function-controlled canonical correlation analysis approach, and (vi) optimal feature selection using the modified Satin Bowerbird Optimization controlled Newton-Raphson algorithm, with final classification by 10 machine learning classifiers. The experiments of the proposed framework have been carried out using the most critical and publicly available dataset, CBIS-DDSM, and obtained a best accuracy of 94.5% along with improved computation time. The comparison shows that the presented method surpasses current state-of-the-art approaches.
Keywords: artificial intelligence; artificial neural network; deep learning; medical image processing; multi-objective optimization
Hybrid Gene Selection Methods for High-Dimensional Lung Cancer Data Using Improved Arithmetic Optimization Algorithm
4
Authors: Mutasem K. Alsmadi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 5175-5200 (26 pages)
Lung cancer is among the most frequent cancers in the world, with over one million deaths per year. Classification is required for lung cancer diagnosis and therapy to be effective, accurate, and reliable. Gene expression microarrays have made it possible to find genetic biomarkers for cancer diagnosis and prediction in a high-throughput manner. Machine Learning (ML) has been widely used to diagnose and classify lung cancer, where the performance of ML methods is evaluated to identify the appropriate technique. Identifying and selecting the gene expression patterns can help in lung cancer diagnosis and classification. Normally, microarrays include several genes and may cause confusion or false prediction. Therefore, the Arithmetic Optimization Algorithm (AOA) is used to identify the optimal gene subset and reduce the number of selected genes, which allows the classifiers to yield the best performance for lung cancer classification. In addition, we proposed a modified version of AOA that works effectively on high-dimensional datasets. In the modified AOA, the features are ranked by their weights, which are used to initialize the AOA population. The exploitation process of AOA is then enhanced by developing a local search algorithm based on two neighborhood strategies. Finally, the efficiency of the proposed methods was evaluated on gene expression datasets related to lung cancer using stratified 4-fold cross-validation. The method's efficacy in selecting the optimal gene subset is underscored by its ability to keep the proportion of selected features between 10% and 25%. Moreover, the approach significantly enhances lung cancer prediction accuracy. For instance, Lung_Harvard1 achieved an accuracy of 97.5%, Lung_Harvard2 and Lung_Michigan both achieved 100%, Lung_Adenocarcinoma obtained an accuracy of 88.2%, and Lung_Ontario achieved an accuracy of 87.5%. In conclusion, the results indicate the promise of the proposed modified AOA approach in classifying microarray cancer data.
Keywords: lung cancer; gene selection; improved arithmetic optimization algorithm; machine learning
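The weight-based population initialization described in this abstract can be illustrated with a small stand-alone sketch; the roulette-wheel sampling rule and the toy weights below are assumptions for illustration, not the paper's exact procedure:

```python
import random

def weighted_init(weights, pop_size, k, seed=0):
    """Initialize a binary population for gene (feature) selection.

    Each individual selects k features, sampled with probability
    proportional to the feature's weight, so highly ranked genes are
    more likely to appear in the initial population.
    """
    rng = random.Random(seed)
    n = len(weights)
    pop = []
    for _ in range(pop_size):
        chosen = set()
        while len(chosen) < k:
            # roulette-wheel draw over feature weights
            r = rng.uniform(0, sum(weights))
            acc = 0.0
            for i, w in enumerate(weights):
                acc += w
                if acc >= r:
                    chosen.add(i)
                    break
        pop.append([1 if i in chosen else 0 for i in range(n)])
    return pop

weights = [0.9, 0.05, 0.8, 0.02, 0.7, 0.01]   # e.g., per-gene relevance scores
pop = weighted_init(weights, pop_size=20, k=2)
```

Starting the metaheuristic from individuals biased toward informative genes, rather than uniformly random bit-strings, is what lets the search cope with very high-dimensional microarray data.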
Learned Distributed Query Optimizer: Architecture and Challenges
5
Authors: GAO Jun, HAN Yinjun, LIN Yang, MIAO Hao, XU Mo. ZTE Communications, 2024, Issue 2, pp. 49-54 (6 pages)
Query processing in distributed database management systems (DBMSs) faces more challenges, such as more operators and more factors in cost models and metadata, than in a single-node DBMS, in which query optimization is already an NP-hard problem. Learned query optimizers (mainly in single-node DBMSs) have received attention due to their capability to capture data distributions and flexible ways to avoid hand-crafted rules in refinement and adaptation to new hardware. In this paper, we focus on extensions of learned query optimizers to distributed DBMSs. Specifically, we propose one possible but general architecture of the learned query optimizer in the distributed context and highlight differences from learned optimizers in single-node systems. In addition, we discuss the challenges and possible solutions.
Keywords: distributed query processing; query optimization; learned query optimizer
Deep Optimal VGG16 Based COVID-19 Diagnosis Model
6
Authors: M. Buvana, K. Muthumayil, S. Senthil Kumar, Jamel Nebhen, Sultan S. Alshamrani, Ihsan Ali. Computers, Materials & Continua (SCIE, EI), 2022, Issue 1, pp. 43-58 (16 pages)
The Coronavirus (COVID-19) outbreak was first identified in Wuhan, China in December 2019. It was soon tagged as a pandemic by the WHO, being a serious public medical condition worldwide. In spite of the fact that the virus can be diagnosed by qRT-PCR, COVID-19 patients who are affected with pneumonia and other severe complications can only be diagnosed with the help of Chest X-Ray (CXR) and Computed Tomography (CT) images. In this paper, the researchers propose to detect the presence of COVID-19 through images using the best deep learning model with various features. Impressive features like Speeded-Up Robust Features (SURF), Features from Accelerated Segment Test (FAST) and Scale-Invariant Feature Transform (SIFT) are used on the test images to detect the presence of the virus. The optimal features are extracted from the images utilizing the DeVGGCovNet (Deep optimal VGG16) model through an optimal learning rate. This task is accomplished via the exceptional mating conduct of Black Widow spiders; in this strategy, cannibalism is incorporated, during which phase candidates whose fitness does not satisfy the proposed model are rejected. The results acquired from real case analysis demonstrate the viability of the DeVGGCovNet technique in settling true issues using obscure and testing spaces. The VGG16 model identifies which class an image belongs to based on the distinctions in images. The impact of the distinctions on labels during the training stage is studied and predicted for test images. The proposed model was compared with existing state-of-the-art models, and the results from the proposed model for confusion-matrix measures like sensitivity, specificity, accuracy, and F1 score were promising.
Keywords: COVID-19; multi-feature extraction; VGG16; optimal learning rate
Autonomous Maneuver Decisions via Transfer Learning Pigeon-Inspired Optimization for UCAVs in Dogfight Engagements [Cited by 6]
7
Authors: Wanying Ruan, Haibin Duan, Yimin Deng. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, Issue 9, pp. 1639-1657 (19 pages)
This paper proposes an autonomous maneuver decision method using transfer learning pigeon-inspired optimization (TLPIO) for unmanned combat aerial vehicles (UCAVs) in dogfight engagements. Firstly, a nonlinear F-16 aircraft model and automatic control system are constructed on a MATLAB/Simulink platform. Secondly, a 3-degrees-of-freedom (3-DOF) aircraft model is used as a maneuvering command generator, and an expanded elemental maneuver library is designed so that the aircraft state reachable set can be obtained. Then, the game matrix is composed with the air combat situation evaluation function calculated according to the angle and range threats. Finally, a key point is that the objective function to be optimized is designed using the game mixed strategy, and the optimal mixed strategy is obtained by TLPIO. Significantly, the proposed TLPIO does not initialize the population randomly, but adopts a transfer learning method based on Kullback-Leibler (KL) divergence to initialize the population, which improves the search accuracy of the optimization algorithm. Besides, the convergence and time complexity of TLPIO are discussed. Comparison analysis with other classical optimization algorithms highlights the advantage of TLPIO. In the simulation of air combat, three initial scenarios are set, namely, opposite, offensive and defensive conditions. The effectiveness of the proposed autonomous maneuver decision method is verified by simulation results.
Keywords: autonomous maneuver decisions; dogfight engagement; game mixed strategy; transfer learning pigeon-inspired optimization (TLPIO); unmanned combat aerial vehicle (UCAV)
Optimized Cognitive Learning Model for Energy Efficient Fog-BAN-IoT Networks
8
Authors: S. Kalpana, C. Annadurai. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 12, pp. 1027-1040 (14 pages)
In the Internet of Things (IoT), large amounts of data are processed and communicated through different network technologies. Wireless Body Area Networks (WBAN) play a pivotal role in the health care domain with an integration of IoT and Artificial Intelligence (AI). The amalgamation of the above mentioned tools has reached a new peak in terms of diagnosis and treatment, especially in the pandemic period. But real challenges such as low latency, energy consumption, and high throughput still remain on the dark side of the research. This paper proposes a novel optimized cognitive learning based BAN model based on Fog-IoT technology as a real-time health monitoring system with increased network lifetime. Energy- and latency-aware features of the BAN have been extracted and used to train the proposed fog based learning algorithm to achieve a low-energy-consumption, low-latency scheduling algorithm. To test the proposed network, a Fog-IoT-BAN test bed has been developed with battery driven MICOTT boards interfaced with health care sensors using MicroPython programming. Extensive experimentation is carried out using the above test beds, and various parameters such as accuracy, precision, recall, F1-score and specificity have been calculated along with QoS (quality of service) parameters such as latency, energy and throughput. To prove the superiority of the proposed framework, its performance has been compared with other state-of-the-art classical learning frameworks and existing Fog-BAN networks such as WORN, DARE, and L-No-DEAF networks. Results prove that the proposed framework has outperformed the other classical learning models in terms of accuracy, False Alarm Rate (FAR), energy efficiency and latency.
Keywords: Fog-IoT-BAN; optimized learning model; Internet of Things; MICOTT; WORN; DARE; L-No-DEAF networks; quality of service
An Efficient Machine Learning Based Precoding Algorithm for Millimeter-Wave Massive MIMO
9
Authors: Waleed Shahjehan, Abid Ullah, Syed Waqar Shah, Ayman A. Aly, Bassem F. Felemban, Wonjong Noh. Computers, Materials & Continua (SCIE, EI), 2022, Issue 6, pp. 5399-5411 (13 pages)
Millimeter wave communication works in the 30-300 GHz frequency range and can obtain a very high bandwidth, which greatly improves the transmission rate of the communication system and has become one of the key technologies of fifth-generation (5G) networks. The smaller wavelength of the millimeter wave makes it possible to assemble a large number of antennas in a small aperture. The resulting array gain can compensate for the path loss of the millimeter wave. Utilizing this feature, the millimeter wave massive multiple-input multiple-output (MIMO) system uses a large antenna array at the base station. It enables the transmission of multiple data streams, giving the system a higher data transmission rate. In the millimeter wave massive MIMO system, precoding technology uses the state information of the channel to adjust the transmission strategy at the transmitting end, and the receiving end performs equalization, so that users can better obtain the antenna multiplexing gain and improve the system capacity. This paper proposes an efficient algorithm based on machine learning (ML) for effective system performance in mmWave massive MIMO systems. The main idea is to optimize the adaptive connection structure to maximize the received signal power of each user and correlate the RF chain and base station antenna. Simulation results show that the proposed algorithm effectively improves the system performance in terms of spectral efficiency and complexity as compared with existing algorithms.
Keywords: MIMO; phased array; precoding scheme; machine learning; optimization
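The core idea in this abstract, tuning the adaptive connection structure to maximize each user's received signal power, can be caricatured with a greedy magnitude rule; the gains matrix and the one-antenna-per-chain rule below are invented for the sketch, not the paper's algorithm:

```python
def greedy_connections(gains, n_chains):
    """Assign each antenna to the RF chain with the largest (hypothetical)
    channel magnitude, a greedy stand-in for optimizing the adaptive
    connection structure.

    gains[a][c]: channel magnitude between antenna a and RF chain c.
    Returns a list mapping antenna index -> chain index.
    """
    assign = []
    for row in gains:
        best = max(range(n_chains), key=lambda c: row[c])
        assign.append(best)
    return assign

def received_power(gains, assign):
    """Total power collected under an assignment (sum of squared gains)."""
    return sum(gains[a][c] ** 2 for a, c in enumerate(assign))

gains = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.5], [0.3, 0.7]]
assign = greedy_connections(gains, n_chains=2)
```

Any per-antenna greedy choice is at least as good as a fixed wiring, which is the intuition behind making the connection structure adaptive.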
Hyperparameter Tuning for Deep Neural Networks Based Optimization Algorithm [Cited by 3]
10
Authors: D. Vidyabharathi, V. Mohanraj. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 6, pp. 2559-2573 (15 pages)
For training present Neural Network (NN) models, the standard technique is to utilize decaying Learning Rates (LR). While the majority of these techniques commence with a large LR, they decay it multiple times over the course of training. Decaying has been proven to enhance generalization as well as optimization. Other parameters, such as the network's size, the number of hidden layers, drop-outs to avoid overfitting, batch size, and so on, are chosen solely based on heuristics. This work proposes an Adaptive Teaching Learning Based (ATLB) heuristic to identify the optimal hyperparameters for diverse networks. Here we consider three deep neural network architectures for classification: Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Bidirectional Long Short-Term Memory (BiLSTM). The evaluation of the proposed ATLB is done through various learning rate schedulers: Cyclical Learning Rate (CLR), Hyperbolic Tangent Decay (HTD), and Toggle between Hyperbolic Tangent Decay and Triangular mode with Restarts (T-HTR). Experimental results have shown performance improvement on the 20Newsgroup, Reuters Newswire and IMDB datasets.
Keywords: deep learning; deep neural network (DNN); learning rate (LR); recurrent neural network (RNN); cyclical learning rate (CLR); hyperbolic tangent decay (HTD); toggle between hyperbolic tangent decay and triangular mode with restarts (T-HTR); teaching learning based optimization (TLBO)
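Two of the schedulers evaluated here, CLR (triangular policy) and HTD, are easy to state in code; the bound constants in `htd` are commonly cited defaults assumed for illustration, not values from this paper:

```python
import math

def clr_triangular(step, base_lr, max_lr, half_cycle):
    """Cyclical Learning Rate, triangular policy: the LR ramps linearly
    between base_lr and max_lr every 2*half_cycle steps."""
    cycle = math.floor(1 + step / (2 * half_cycle))
    x = abs(step / half_cycle - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

def htd(step, total_steps, lr0, lower=-6.0, upper=3.0):
    """Hyperbolic Tangent Decay: smooth decay from ~lr0 toward ~0.
    lower/upper are illustrative bounds for the tanh argument."""
    t = step / total_steps
    return lr0 / 2.0 * (1.0 - math.tanh(lower + (upper - lower) * t))

lrs_clr = [clr_triangular(s, 0.001, 0.006, half_cycle=4) for s in range(9)]
lrs_htd = [htd(s, 100, 0.1) for s in range(0, 101, 25)]
```

The T-HTR scheduler in the abstract alternates between these two behaviors with restarts, so the two functions above are its building blocks.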
Learning to optimize: A tutorial for continuous and mixed-integer optimization [Cited by 1]
11
Authors: Xiaohan Chen, Jialin Liu, Wotao Yin. Science China Mathematics (SCIE, CSCD), 2024, Issue 6, pp. 1191-1262 (72 pages)
Learning to optimize (L2O) stands at the intersection of traditional optimization and machine learning, utilizing the capabilities of machine learning to enhance conventional optimization techniques. As real-world optimization problems frequently share common structures, L2O provides a tool to exploit these structures for better or faster solutions. This tutorial dives deep into L2O techniques, introducing how to accelerate optimization algorithms, promptly estimate the solutions, or even reshape the optimization problem itself, making it more adaptive to real-world applications. By considering the prerequisites for successful applications of L2O and the structure of the optimization problems at hand, this tutorial provides a comprehensive guide for practitioners and researchers alike.
Keywords: AI for mathematics (AI4Math); learning to optimize; algorithm unrolling; plug-and-play methods; differentiable programming; machine learning for combinatorial optimization (ML4CO)
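Algorithm unrolling, one of the L2O techniques named in the keywords, treats the per-iteration parameters of a fixed-length solver as learnable. A toy sketch, with grid search standing in for the gradient-based training a real L2O system would use:

```python
def unrolled_gd(x0, grad, steps):
    """Run gradient descent with a fixed, unrolled list of step sizes.
    In L2O ("algorithm unrolling"), this list is what gets learned."""
    x = x0
    for a in steps:
        x = x - a * grad(x)
    return x

# Toy problem: minimize f(x) = (x - 3)^2, so grad(x) = 2(x - 3).
grad = lambda x: 2.0 * (x - 3.0)

def loss(steps):
    return (unrolled_gd(10.0, grad, steps) - 3.0) ** 2

# "Learn" K=3 per-step sizes by greedy coordinate search over a small grid,
# a stand-in for the differentiable training a real L2O method uses.
steps = [0.1, 0.1, 0.1]
grid = [0.05, 0.1, 0.2, 0.3, 0.4, 0.5]
for k in range(len(steps)):
    steps[k] = min(grid, key=lambda a: loss(steps[:k] + [a] + steps[k + 1:]))

baseline = loss([0.1, 0.1, 0.1])
learned = loss(steps)
```

For this quadratic, the learned schedule discovers the exact Newton-like step (0.5) in the first slot and drives the unrolled loss to zero, while the hand-picked constant schedule does not; exploiting such shared problem structure is the tutorial's central theme.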
Online payment fraud: from anomaly detection to risk management
12
Authors: Paolo Vanini, Sebastiano Rossi, Ermin Zvizdic, Thomas Domenig. Financial Innovation, 2023, Issue 1, pp. 1788-1812 (25 pages)
Online banking fraud occurs whenever a criminal can seize accounts and transfer funds from an individual's online bank account. Successfully preventing this requires the detection of as many fraudsters as possible, without producing too many false alarms. This is a challenge for machine learning owing to the extremely imbalanced data and complexity of fraud. In addition, classical machine learning methods must be extended to minimize expected financial losses. Finally, fraud can only be combated systematically and economically if the risks and costs in payment channels are known. We define three models that overcome these challenges: machine learning-based fraud detection, economic optimization of machine learning results, and a risk model to predict the risk of fraud while considering countermeasures. The models were tested utilizing real data. Our machine learning model alone reduces the expected and unexpected losses in the three aggregated payment channels by 15% compared to a benchmark consisting of static if-then rules. Optimizing the machine learning model further reduces the expected losses by 52%. These results hold with a low false positive rate of 0.4%. Thus, the risk framework of the three models is viable from a business and risk perspective.
Keywords: payment fraud risk management; anomaly detection; ensemble models; integration of machine learning and statistical risk modelling; economic optimization of machine learning outputs
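The "economic optimization of machine learning results" layer can be illustrated by choosing the alert threshold that minimizes expected financial cost; the scores, labels, and costs below are invented toy values, not the paper's data or method:

```python
def best_threshold(scores, labels, loss_per_miss, cost_per_alert):
    """Pick the alert threshold that minimizes expected cost:
    a missed fraud costs loss_per_miss, and every alert costs
    cost_per_alert to investigate (both hypothetical)."""
    candidates = sorted(set(scores)) + [float("inf")]
    def cost(th):
        alerts = sum(1 for s in scores if s >= th)
        misses = sum(1 for s, y in zip(scores, labels) if y == 1 and s < th)
        return alerts * cost_per_alert + misses * loss_per_miss
    best = min(candidates, key=cost)
    return best, cost(best)

# scores: model fraud score per transaction; labels: 1 = actual fraud.
scores = [0.95, 0.90, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    0   ]
th, c = best_threshold(scores, labels, loss_per_miss=1000.0, cost_per_alert=10.0)
```

Because a missed fraud is far costlier than an investigation, the economically optimal threshold sits below the fraud scores even at the price of some false positives, which is exactly why optimizing costs differs from optimizing a pure accuracy metric.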
Incremental Face Clustering with Optimal Summary Learning via Graph Convolutional Network [Cited by 4]
13
Authors: Xuan Zhao, Zhongdao Wang, Lei Gao, Yali Li, Shengjin Wang. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2021, Issue 4, pp. 536-547 (12 pages)
In this study, we address the problems encountered by incremental face clustering. Without the benefit of having observed the entire data distribution, incremental face clustering is more challenging than static dataset clustering. Conventional methods rely on the statistical information of previous clusters to improve the efficiency of incremental clustering; thus, error accumulation may occur. Therefore, this study proposes to predict the summaries of previous data directly from the data distribution via supervised learning. Moreover, an efficient framework to cluster previous summaries with new data is explored. Although learning summaries from original data costs more than learning them from previous clusters, the entire framework consumes just a little more time, because clustering current data and generating summaries for new data share most of the calculations. Experiments show that the proposed approach significantly outperforms existing incremental face clustering methods, as evidenced by the improvement of the average F-score from 0.644 to 0.762. Compared with state-of-the-art static face clustering methods, our method yields comparable accuracy while consuming much less time.
Keywords: incremental face clustering; supervised learning; Graph Convolutional Network (GCN); optimal summary learning
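The summary-based framework, compress previously clustered data, then cluster the summaries together with new data, relies on summaries that can absorb new points cheaply. A minimal (centroid, size) sketch; the paper actually learns summaries with a GCN, which this running mean only caricatures:

```python
def summarize(cluster):
    """Compress a cluster of feature vectors into a (centroid, size) summary."""
    n, d = len(cluster), len(cluster[0])
    centroid = [sum(v[i] for v in cluster) / n for i in range(d)]
    return centroid, n

def merge(summary, point):
    """Fold one new point into an existing summary (running mean update)."""
    centroid, n = summary
    new_c = [(c * n + x) / (n + 1) for c, x in zip(centroid, point)]
    return new_c, n + 1

old = [[1.0, 1.0], [3.0, 1.0]]   # previously clustered faces (toy 2-D features)
s = summarize(old)               # original points can now be discarded
s = merge(s, [2.0, 4.0])         # a new face arrives incrementally
```

Because the summary carries its size, merging new data never needs the original vectors, which is what keeps the incremental pipeline cheap relative to re-clustering everything.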
Optimal power flow calculation in AC/DC hybrid power system based on adaptive simplified human learning optimization algorithm [Cited by 3]
14
Authors: Jia CAO, Zheng YAN, Xiaoyuan XU, Guangyu HE, Shaowei HUANG. Journal of Modern Power Systems and Clean Energy (SCIE, EI), 2016, Issue 4, pp. 690-701 (12 pages)
This paper employs an efficacious analytical tool, the adaptive simplified human learning optimization (ASHLO) algorithm, to solve the optimal power flow (OPF) problem in AC/DC hybrid power systems, considering valve-point loading effects of generators, carbon tax, and prohibited operating zones of generators, respectively. The ASHLO algorithm involves a random learning operator, an individual learning operator, a social learning operator, and adaptive strategies. To compare and analyze the computational performance of the ASHLO method, the proposed ASHLO method and other heuristic intelligent optimization methods are employed to solve the OPF problem on the modified IEEE 30-bus and 118-bus AC/DC hybrid test systems. Numerical results indicate that the ASHLO method has good convergence and robustness. Meanwhile, the impacts of wind speeds and of the locations of the HVDC transmission line integrated into the AC network on the OPF results are systematically analyzed.
Keywords: adaptive simplified human learning optimization algorithm; optimal power flow; AC/DC hybrid power system; valve-point loading effects of generators; carbon tax; prohibited operating zones
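The three learning operators named in this abstract act bit-wise on binary solution strings. A simplified sketch; the selection probabilities below are illustrative fixed values, whereas ASHLO's adaptive strategies tune them during the run:

```python
import random

def hlo_step(x, ikd, skd, pr=0.1, pi=0.45, rng=random):
    """One simplified Human Learning Optimization update on a bit-string.

    For each bit: with probability pr apply random learning (a random bit),
    with probability pi copy the individual's own best (ikd), and otherwise
    copy the social best (skd)."""
    out = []
    for j in range(len(x)):
        r = rng.random()
        if r < pr:                      # random learning operator
            out.append(rng.randint(0, 1))
        elif r < pr + pi:               # individual learning operator
            out.append(ikd[j])
        else:                           # social learning operator
            out.append(skd[j])
    return out

rng = random.Random(1)
x   = [0, 0, 0, 0, 0, 0]    # current solution (e.g., encoded control settings)
ikd = [1, 0, 1, 0, 1, 0]    # individual's best-so-far knowledge
skd = [1, 1, 1, 1, 1, 1]    # population (social) best knowledge
child = hlo_step(x, ikd, skd, rng=rng)
```

For the OPF problem, such bit-strings would encode discretized control variables; mixing individual and social knowledge is what balances exploration against exploitation.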
Hybrid Metaheuristics Based License Plate Character Recognition in Smart City
15
Authors: Esam A. Al Qaralleh, Fahad Aldhaban, Halah Nasseif, Bassam A. Y. Alqaralleh, Tamer AbuKhalil. Computers, Materials & Continua (SCIE, EI), 2022, Issue 9, pp. 5727-5740 (14 pages)
Recent technological advancements have been used to improve the quality of living in smart cities. At the same time, automated detection of vehicles can be utilized to reduce crime rates and improve public security. In this context, the automatic identification of vehicle license plate (LP) characters becomes an essential process for recognizing vehicles in real-time scenarios, which can be achieved by the exploitation of optimal deep learning (DL) approaches. In this article, a novel hybrid metaheuristic optimization based deep learning model for automated license plate character recognition (HMODL-ALPCR) technique has been presented for smart city environments. The major intention of the HMODL-ALPCR technique is to detect LPs and recognize the characters that exist in them. For an effective LP detection process, the mask regional convolutional neural network (Mask-RCNN) model is applied with Inception with Residual Network (ResNet)-v2 as the baseline network. In addition, hybrid sunflower optimization with butterfly optimization algorithm (HSFO-BOA) is utilized for the hyperparameter tuning of the Inception-ResNet-v2 model. Finally, a Tesseract based character recognition model is applied to effectively recognize the characters present in the LPs. The experimental result analysis of the HMODL-ALPCR technique takes place against a benchmark dataset, and the experimental outcomes point out the improved efficacy of the HMODL-ALPCR technique over recent methods.
Keywords: smart city; license plate recognition; optimal deep learning; metaheuristic algorithms; parameter tuning
Run-to-run product quality control of batch processes
16
Authors: 贾立, 施继平, 程大帅, 邱铭森. Journal of Shanghai University (English Edition) (CAS), 2009, Issue 4, pp. 267-269 (3 pages)
Batch processes have been increasingly used in the production of low volume and high value added products. Consequently, optimization control in batch processes is crucial in order to derive the maximum benefit. In this paper, a run-to-run product quality control based on iterative learning optimization control is developed. Moreover, a rigorous theorem is proposed and proven, which states that the tracking error under the optimal iterative learning control (ILC) law can converge to zero. A typical nonlinear batch continuous stirred tank reactor (CSTR) is considered, and the results show that the performance of trajectory tracking is gradually improved by the ILC.
Keywords: iterative learning optimization control; tracking error; batch processes
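The convergence result, tracking error under the ILC law going to zero, can be seen on a toy run-to-run loop with a P-type update u_{k+1} = u_k + L·e_k; the scalar plant gain and learning gain below are invented for the sketch, not the paper's CSTR model:

```python
def run_batch(u, plant_gain=0.8):
    """Toy static batch plant: output is proportional to the input profile."""
    return [plant_gain * ui for ui in u]

def ilc_update(u, e, L=0.9):
    """P-type iterative learning control law: u_{k+1} = u_k + L * e_k."""
    return [ui + L * ei for ui, ei in zip(u, e)]

ref = [1.0, 2.0, 3.0]          # desired product-quality trajectory
u = [0.0, 0.0, 0.0]
errs = []
for k in range(30):            # run-to-run (batch-to-batch) iterations
    y = run_batch(u)
    e = [r - yi for r, yi in zip(ref, y)]
    errs.append(max(abs(ei) for ei in e))
    u = ilc_update(u, e)
```

Here the error contracts by the factor |1 − g·L| = 0.28 every run, so the tracking error decreases monotonically toward zero, mirroring the theorem's claim for the (much richer) nonlinear CSTR case.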
Joint Participant Selection and Learning Optimization for Federated Learning of Multiple Models in Edge Cloud
17
Authors: Xinliang Wei, Jiyao Liu, Yu Wang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2023, Issue 4, pp. 754-772 (19 pages)
To overcome the limitations of long latency and privacy concerns in cloud computing, edge computing, along with distributed machine learning such as federated learning (FL), has gained much attention and popularity in academia and industry. Most existing work on FL over the edge mainly focuses on optimizing the training of one shared global model in edge systems. However, with the increasing applications of FL in edge systems, there could be multiple FL models from different applications concurrently being trained in the shared edge cloud. Such concurrent training of these FL models can lead to edge resource competition (for both computing and network resources), and further affect the FL training performance of each other. Therefore, in this paper, considering a multi-model FL scenario, we formulate a joint participant selection and learning optimization problem in a shared edge cloud. This joint optimization aims to determine the FL participants and the learning schedule for each FL model such that the total training cost of all FL models in the edge cloud is minimized. We propose a multi-stage optimization framework by decoupling the original problem into two or three subproblems that can be solved respectively and iteratively. Extensive evaluation has been conducted with real-world FL datasets and models. The results have shown that our proposed algorithms can reduce the total cost efficiently compared with prior algorithms.
Keywords: edge computing; federated learning (FL); participant selection; learning optimization
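The participant-selection idea can be illustrated with a toy greedy assignment, which is not the paper's multi-stage algorithm: two FL models compete for shared edge nodes, and each model greedily picks the cheapest nodes that still have compute capacity left. All node names and costs below are hypothetical.

```python
# Toy sketch of participant selection in a shared edge cloud
# (hypothetical costs; not the paper's multi-stage optimization).
costs = {                          # per-node training cost (illustrative)
    "n1": 4.0, "n2": 1.0, "n3": 2.5, "n4": 3.0, "n5": 1.5,
}
capacity = {n: 1 for n in costs}   # each node can serve one model at a time

def select(k, costs, capacity):
    """Greedily pick k participants among nodes with remaining capacity."""
    avail = sorted((c, n) for n, c in costs.items() if capacity[n] > 0)
    chosen = [n for _, n in avail[:k]]
    for n in chosen:
        capacity[n] -= 1           # reserve the node's compute slot
    return chosen

sel_a = select(2, costs, capacity)   # model A picks the 2 cheapest nodes
sel_b = select(2, costs, capacity)   # model B picks from what remains
total = sum(costs[n] for n in sel_a + sel_b)
print(sel_a, sel_b, total)
```

The greedy order makes the resource competition concrete: model B's cost rises because model A has already claimed the cheap nodes, which is exactly the coupling the joint formulation in the paper optimizes over.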
An Experimental Investigation into the Amalgamated Al2O3-40% TiO2 Atmospheric Plasma Spray Coating Process on EN24 Substrate and Parameter Optimization Using TLBO
18
Authors: Thankam Sreekumar Rajesh, Ravipudi Venkata Rao. Journal of Materials Science and Chemical Engineering, 2016, No. 6, pp. 51-65.
Surface coating is a critical procedure in maintenance engineering. Ceramic coating of wear areas is a best practice that substantially enhances the Mean Time Between Failure (MTBF). EN24 is a commercial-grade alloy steel used for various industrial applications such as sleeves, nuts, bolts, and shafts. Because EN24 has comparatively low corrosion resistance, ceramic coating of the wear- and corrosion-prone areas of such parts greatly reduces frequent failures. Coating quality depends mainly on coating thickness, surface roughness, and coating hardness, which ultimately decide operability. This paper describes an experimental investigation to optimize the Atmospheric Plasma Spray (APS) process input parameters for Al2O3-40% TiO2 coatings on an EN24 alloy steel substrate. The experiments are conducted with an Orthogonal Array (OA) design of experiments (DoE). Critical input parameters are considered, vital output parameters are monitored accordingly, and separate mathematical models are generated using regression analysis. The Analytic Hierarchy Process (AHP) is used to generate weights for the individual objective functions, from which a combined objective function is formed. The Teaching-Learning-Based Optimization (TLBO) algorithm is then applied to the combined objective function to find the input-parameter values that yield the best output parameters. Confirmation tests are conducted and their results are compared with values predicted by the mathematical models. The dominant effects of the Al2O3-40% TiO2 spray parameters on the output parameters (surface roughness, coating thickness, and coating hardness) are discussed in detail. It is concluded that variation of the input parameters directly affects the output characteristics, and that any number of input and output parameters can be optimized with this approach.
Keywords: Atmospheric Plasma Spray (APS); EN24; Design of Experiments (DoE); Teaching-Learning-Based Optimization (TLBO); Analytic Hierarchy Process (AHP); Al2O3-40% TiO2
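The TLBO algorithm named above can be sketched in a few lines. The objective here is the sphere function, a stand-in for the paper's combined coating objective; population size and iteration count are arbitrary choices, since TLBO itself is parameter-free apart from these.

```python
import numpy as np

# Minimal TLBO sketch minimizing the sphere function (an illustrative
# stand-in for the combined coating objective). Teacher phase pulls
# learners toward the current best; learner phase lets random pairs
# learn from each other. Only improving moves are accepted.
rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2, axis=-1)      # objective to minimize

pop = rng.uniform(-5, 5, size=(20, 3))   # 20 learners, 3 design variables
for _ in range(200):
    # Teacher phase: move toward the best solution, away from the mean
    teacher = pop[np.argmin(f(pop))]
    TF = rng.integers(1, 3)              # teaching factor in {1, 2}
    cand = pop + rng.random(pop.shape) * (teacher - TF * pop.mean(0))
    better = f(cand) < f(pop)
    pop[better] = cand[better]
    # Learner phase: each learner interacts with a random partner
    partners = rng.permutation(len(pop))
    step = np.where((f(pop) < f(pop[partners]))[:, None],
                    pop - pop[partners], pop[partners] - pop)
    cand = pop + rng.random(pop.shape) * step
    better = f(cand) < f(pop)
    pop[better] = cand[better]

best = f(pop).min()
print(best)   # best objective value found
```

Because only improving candidates are accepted, the best value is monotone non-increasing, which is why TLBO is attractive for the coating problem: no algorithm-specific tuning parameters stand between the DoE models and the optimum.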
Parameter Optimization of Amalgamated Al2O3-40% TiO2 Atmospheric Plasma Spray Coating on SS304 Substrate Using TLBO Algorithm
19
Authors: Thankam Sreekumar Rajesh, Ravipudi Venkata Rao. Journal of Surface Engineered Materials and Advanced Technology, 2016, No. 3, pp. 89-105.
SS304 is a commercial-grade stainless steel used for various engineering applications such as shafts, guides, jigs, and fixtures. Ceramic coating of the wear areas of such parts is a regular practice that significantly enhances the Mean Time Between Failure (MTBF). Final coating quality depends mainly on coating thickness, surface roughness, and hardness, which ultimately decide the coating's life. This paper presents an experimental study to optimize the Atmospheric Plasma Spray (APS) process input parameters for Al2O3-40% TiO2 ceramic coatings on a commercial SS304 substrate. The experiments are conducted with a three-level L18 Orthogonal Array (OA) Design of Experiments (DoE). The critical input parameters considered are spray nozzle distance, substrate rotating speed, arc current, carrier gas flow, and coating powder flow rate; surface roughness, coating thickness, and hardness are the output parameters. Mathematical models are generated using regression analysis for the individual output parameters. The Analytic Hierarchy Process (AHP) is applied to generate weights for the individual objective functions, and a combined objective function is formed. The Teaching-Learning-Based Optimization (TLBO) algorithm is applied to the combined objective function to find the input-parameter values that yield the best outputs, and confirmation tests are conducted accordingly. The significant effects of the spray parameters on surface roughness, coating thickness, and coating hardness are studied in detail.
Keywords: Atmospheric Plasma Spray (APS) coating; SS304 steel; Teaching-Learning-Based Optimization (TLBO); Design of Experiments (DoE); Analytic Hierarchy Process (AHP); Al2O3-40% TiO2
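The AHP weighting step used in both coating studies can be shown with a small example. The 3x3 pairwise-comparison matrix below for the three responses (surface roughness, coating thickness, hardness) is purely illustrative, not taken from either paper; the geometric-mean method and the consistency ratio are the standard AHP machinery.

```python
import numpy as np

# Hypothetical AHP pairwise-comparison matrix over the three coating
# responses: [roughness, thickness, hardness]. Entry A[i, j] states how
# strongly response i is preferred over response j (illustrative values).
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

# Geometric-mean method: weight_i is proportional to the geometric
# mean of row i, normalized so the weights sum to 1.
gm = A.prod(axis=1) ** (1 / A.shape[0])
w = gm / gm.sum()

# Consistency check: CR = CI / RI, with random index RI = 0.58 for n = 3.
lam = (A @ w / w).mean()               # principal-eigenvalue estimate
CI = (lam - A.shape[0]) / (A.shape[0] - 1)
CR = CI / 0.58
print(w, CR)   # CR < 0.1 means the pairwise judgments are consistent
```

The resulting weights would then scale the normalized regression models into the single combined objective that TLBO minimizes, with signs chosen so that lower roughness and higher thickness and hardness are preferred.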
Gradient-based algorithms for multi-objective bi-level optimization (cited by 1)
20
Authors: Xinmin Yang, Wei Yao, Haian Yin, Shangzhi Zeng, Jin Zhang. Science China Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1419-1438.
Multi-objective bi-level optimization (MOBLO) addresses nested multi-objective optimization problems common in a range of applications, but its multi-objective and hierarchical bi-level nature makes it notably complex. Gradient-based MOBLO algorithms have recently grown in popularity, as they effectively solve crucial machine learning problems such as meta-learning, neural architecture search, and reinforcement learning. Unfortunately, these algorithms depend on solving a sequence of approximation subproblems with high accuracy, which imposes adverse time and memory complexity and lowers their numerical efficiency. To address this issue, a gradient-based algorithm for MOBLO called gMOBA is proposed, which has fewer hyperparameters to tune, making it both simple and efficient. Its theoretical validity is demonstrated by establishing the desirable Pareto stationarity. Numerical experiments confirm the practical efficiency of the proposed method and verify the theoretical results. To accelerate the convergence of gMOBA, a beneficial learning-to-optimize (L2O) neural network, called L2O-gMOBA, is introduced as the initialization phase of the gMOBA algorithm. Comparative numerical results illustrate the performance of L2O-gMOBA.
Keywords: multi-objective bi-level optimization; convergence analysis; Pareto stationarity; learning to optimize
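The core mechanism of gradient-based bi-level optimization can be illustrated with a toy problem whose lower level has a closed-form solution, so the hypergradient follows exactly from the chain rule. This is only a sketch of the hypergradient idea, not gMOBA itself, and the objectives below are invented for illustration.

```python
# Toy bi-level sketch (not gMOBA): the lower level min_y (y - x)^2 has
# the exact solution y*(x) = x, so the hypergradient of the upper
# objective f(x, y) = (y - 1)^2 + 0.1*x^2 at y = y*(x) follows from the
# chain rule with dy*/dx = 1. Real MOBLO solvers must approximate this
# implicit derivative, which is where their cost comes from.
def hypergrad(x):
    y = x                       # exact lower-level solution y*(x) = x
    df_dy = 2 * (y - 1.0)       # partial of f with respect to y
    df_dx = 0.2 * x             # direct partial of f with respect to x
    dy_dx = 1.0                 # implicit derivative of y*(x)
    return df_dx + df_dy * dy_dx

x = 5.0
for _ in range(200):
    x -= 0.1 * hypergrad(x)     # gradient descent on the upper level

print(x)   # converges to the minimizer of (x - 1)^2 + 0.1*x^2
```

Substituting y*(x) = x reduces the upper objective to 1.1*x^2 - 2*x + 1, whose minimizer is x = 1/1.1; the iteration above contracts to that point. Algorithms like gMOBA avoid solving the lower level to high accuracy at every step, which is exactly the expense this closed-form toy hides.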