Journal Articles
837 articles found
1. Prediction of lime utilization ratio of dephosphorization in BOF steelmaking based on online sequential extreme learning machine with forgetting mechanism
Authors: Runhao Zhang, Jian Yang, Han Sun, Wenkui Yang. International Journal of Minerals, Metallurgy and Materials (SCIE/EI/CAS/CSCD), 2024, Issue 3, pp. 508-517.
The machine learning models of multiple linear regression (MLR), support vector regression (SVR), and extreme learning machine (ELM), together with the proposed online sequential ELM (OS-ELM) and OS-ELM with forgetting mechanism (FOS-ELM), are applied to predict the lime utilization ratio of dephosphorization in the basic oxygen furnace steelmaking process. The ELM model exhibits the best performance among the MLR, SVR, and ELM models. OS-ELM and FOS-ELM are applied for sequential learning and model updating. The optimal number of samples in the validity term of the FOS-ELM model is determined to be 1500, with the smallest mean absolute relative error (MARE) of 0.058226 for the population. The variable importance analysis reveals lime weight, initial P content, and hot metal weight as the most important variables for the lime utilization ratio, which increases with a decrease in lime weight and with increases in the initial P content and hot metal weight. A prediction system based on FOS-ELM was applied in actual industrial production for one month. The hit ratios of the predicted lime utilization ratio within the error ranges of ±1%, ±3%, and ±5% are 61.16%, 90.63%, and 94.11%, respectively. The coefficient of determination, MARE, and root mean square error are 0.8670, 0.06823, and 1.4265, respectively. The system exhibits desirable performance for application in actual industrial production.
Keywords: basic oxygen furnace steelmaking; machine learning; lime utilization ratio; dephosphorization; online sequential extreme learning machine; forgetting mechanism
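The forgetting mechanism in FOS-ELM can be pictured as a sliding window over the training stream. The sketch below is a simplified stand-in: it keeps only the most recent `window` samples (the validity term) and refits the output weights by least squares on that window, rather than using the recursive update of FOS-ELM proper; the hidden-layer size, window length, and synthetic data are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

class FOSELM:
    """Minimal OS-ELM with a forgetting (sliding-window) mechanism.

    Illustrative sketch only: refits the output weights on the retained
    window instead of the recursive least-squares update of FOS-ELM proper.
    """

    def __init__(self, n_input, n_hidden, window):
        self.W = rng.normal(size=(n_input, n_hidden))  # fixed random input weights
        self.b = rng.normal(size=n_hidden)             # fixed random biases
        self.window = window                           # validity term: max samples kept
        self.X_win, self.y_win = None, None
        self.beta = None

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def partial_fit(self, X, y):
        # Append the new chunk, then forget samples older than the window.
        if self.X_win is None:
            self.X_win, self.y_win = X, y
        else:
            self.X_win = np.vstack([self.X_win, X])[-self.window:]
            self.y_win = np.concatenate([self.y_win, y])[-self.window:]
        H = self._hidden(self.X_win)
        # Least-squares output weights on the retained window.
        self.beta = np.linalg.pinv(H) @ self.y_win

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy regression stream: the model tracks y = 2x + 1 from arriving chunks.
model = FOSELM(n_input=1, n_hidden=20, window=150)
for _ in range(5):
    X = rng.uniform(0, 1, size=(50, 1))
    model.partial_fit(X, 2 * X.ravel() + 1)
X_test = np.linspace(0, 1, 20).reshape(-1, 1)
mae = np.abs(model.predict(X_test) - (2 * X_test.ravel() + 1)).mean()
```

In production use the recursive update matters, since it avoids re-inverting the hidden-layer matrix on every chunk; the window-refit above trades that efficiency for readability.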
2. Parallel Inference for Real-Time Machine Learning Applications
Authors: Sultan Al Bayyat, Ammar Alomran, Mohsen Alshatti, Ahmed Almousa, Rayyan Almousa, Yasir Alguwaifli. Journal of Computer and Communications, 2024, Issue 1, pp. 139-146.
Hyperparameter tuning is a key step in developing high-performing machine learning models, but searching large hyperparameter spaces requires extensive computation using standard sequential methods. This work analyzes the performance gains from parallel versus sequential hyperparameter optimization. Using scikit-learn's RandomizedSearchCV, the project tuned a Random Forest classifier for fake news detection via randomized grid search. Setting n_jobs to -1 enabled full parallelization across CPU cores. Results show the parallel implementation achieved over 5× faster CPU times and 3× faster total run times compared to sequential tuning. However, test accuracy dropped slightly, from 99.26% sequentially to 99.15% with parallelism, indicating a trade-off between evaluation efficiency and model performance. Still, the significant computational gains allow more extensive hyperparameter exploration within reasonable timeframes, outweighing the small accuracy decrease. Further analysis could better quantify this trade-off across different models, tuning techniques, tasks, and hardware.
Keywords: machine learning models; computational efficiency; parallel computing systems; random forest; inference; hyperparameter tuning; Python frameworks (TensorFlow, PyTorch, scikit-learn); high-performance computing
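The parallel search described above amounts to a few lines of scikit-learn. The snippet below is a minimal reconstruction on a synthetic dataset (the fake-news features, parameter grid, and `n_iter` are stand-ins, not the paper's setup); `n_jobs=-1` is the switch that fans candidate evaluations out across all CPU cores.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for the fake-news feature matrix used in the paper.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

param_dist = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
    "min_samples_split": [2, 5, 10],
}

# n_jobs=-1 evaluates the sampled candidates on all CPU cores in parallel,
# which is the configuration the paper benchmarks against sequential tuning.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_dist,
    n_iter=5,
    cv=3,
    n_jobs=-1,
    random_state=0,
)
search.fit(X, y)
best_score = search.best_score_
```

Note that `n_jobs=-1` parallelizes the cross-validation folds and candidates, not the final refit; the small accuracy difference the paper reports comes from run-to-run variation, not from the parallel mechanics themselves changing the math.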
3. Parallel Learning: Overview and Perspective for Computational Learning Across Syn2Real and Sim2Real (Cited by 7)
Authors: Qinghai Miao, Yisheng Lv, Min Huang, Xiao Wang, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE/EI/CSCD), 2023, Issue 3, pp. 603-631.
The virtual-to-real paradigm, i.e., training models on virtual data and then applying them to solve real-world problems, has attracted increasing attention from various domains by successfully alleviating the data shortage problem in machine learning. To summarize the advances of recent years, this survey comprehensively reviews the literature from the viewpoint of parallel intelligence. First, an extended parallel learning framework is proposed to cover the main domains, including computer vision, natural language processing, robotics, and autonomous driving. Second, a multi-dimensional taxonomy is designed to organize the literature in a hierarchical structure. Third, the related virtual-to-real works are analyzed and compared according to the three principles of parallel learning, known as description, prediction, and prescription, which cover the methods for constructing virtual worlds, generating labeled data, domain transfer, model training and testing, as well as optimizing the strategies that guide the task-oriented data generator for better learning performance. Key open issues in virtual-to-real are discussed, and future research directions from the viewpoint of parallel learning are suggested.
Keywords: machine learning; parallel learning; parallel systems; sim-to-real; syn-to-real; virtual-to-real
4. Autonomous air combat decision-making of UAV based on parallel self-play reinforcement learning (Cited by 1)
Authors: Bo Li, Jingyi Huang, Shuangxia Bai, Zhigang Gan, Shiyang Liang, Neretin Evgeny, Shouwen Yao. CAAI Transactions on Intelligence Technology (SCIE/EI), 2023, Issue 1, pp. 64-81.
To address the problem of manoeuvring decision-making in UAV air combat, this study establishes a one-to-one air combat model, defines missile attack areas, and uses the non-deterministic policy Soft Actor-Critic (SAC) algorithm in deep reinforcement learning to construct a decision model that realizes the manoeuvring process. The complexity of the proposed algorithm is calculated, and the stability of the closed-loop air combat decision-making system controlled by the neural network is analysed with a Lyapunov function. The study defines the UAV air combat process as a gaming process and proposes a Parallel Self-Play training SAC algorithm (PSP-SAC) to improve the generalisation performance of UAV control decisions. Simulation results show that the proposed algorithm realizes sample sharing and policy sharing across multiple combat environments and significantly improves the generalisation ability of the model compared to independent training.
Keywords: air combat decision; deep reinforcement learning; parallel self-play; SAC algorithm; UAV
5. Scheduling an Energy-Aware Parallel Machine System with Deteriorating and Learning Effects Considering Multiple Optimization Objectives and Stochastic Processing Time
Authors: Lei Wang, Yuxin Qi. Computer Modeling in Engineering & Sciences (SCIE/EI), 2023, Issue 4, pp. 325-339.
Energy conservation currently draws wide attention in industrial manufacturing systems. In recent years, many studies have aimed at reducing energy consumption in manufacturing, and scheduling is regarded as an effective approach. This paper puts forward a multi-objective stochastic parallel machine scheduling problem that considers deteriorating and learning effects; the real processing time of jobs is calculated from their processing speed and normal processing time. To describe the problem mathematically, a multi-objective stochastic programming model minimizing both makespan and energy consumption is formulated. Furthermore, we develop a multi-objective multi-verse optimization algorithm combined with a stochastic simulation method: the multi-verse optimization finds favorable solutions in the huge solution domain, while the stochastic simulation method evaluates them. Comparison experiments on test problems verify that the developed approach outperforms two classic multi-objective evolutionary algorithms on the considered problem.
Keywords: energy consumption optimization; parallel machine scheduling; multi-objective optimization; deteriorating and learning effects; stochastic simulation
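The role of the stochastic simulation component can be illustrated with a plain Monte Carlo evaluator: given a candidate assignment of jobs to machines, sample the stochastic normal processing times, apply learning and deterioration effects, and average the resulting makespans. The effect model and every number below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_makespan(assignments, mean_times, a=-0.1, b=0.05, n_trials=2000):
    """Monte Carlo estimate of the expected makespan on parallel machines.

    Assumed model: the job in position r on a machine takes
    t * r**a * (1 + b*S), where t is a stochastic normal time,
    r**a a positional learning effect (a < 0), and (1 + b*S) a
    linear deterioration in the job's start time S.
    """
    makespans = np.empty(n_trials)
    for k in range(n_trials):
        loads = []
        for jobs in assignments:                    # one job sequence per machine
            t_now = 0.0
            for r, j in enumerate(jobs, start=1):
                t = rng.exponential(mean_times[j])  # sampled normal time
                t_now += t * r**a * (1.0 + b * t_now)
            loads.append(t_now)
        makespans[k] = max(loads)                   # makespan of this realization
    return makespans.mean()

# Two machines, five jobs with assumed mean normal processing times.
mean_times = [3.0, 2.0, 4.0, 1.0, 2.5]
expected = simulate_makespan([[0, 2], [1, 3, 4]], mean_times)
```

In the paper's framework a metaheuristic (multi-verse optimization) proposes assignments and an evaluator like this scores them; the evaluator is the expensive inner loop, which is why its trial count is a key tuning knob.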
6. PT-MIL: Parallel transformer based on multi-instance learning for osteoporosis detection in panoramic oral radiography
Authors: HUANG Xinran, YANG Hongjie, CHEN Hu, ZHANG Yi, LIAO Peixi. 《中国体视学与图像分析》, 2023, Issue 4, pp. 410-418.
Osteoporosis is a systemic disease characterized by low bone mass, impaired bone microstructure, increased bone fragility, and a higher risk of fractures. It commonly affects postmenopausal women and the elderly. Orthopantomography, also known as panoramic radiography, is a widely used imaging technique in dental examinations due to its low cost and easy accessibility. Previous studies have shown that the mandibular cortical index (MCI) derived from orthopantomography can serve as an important indicator of osteoporosis risk. This study proposes a parallel Transformer network based on multiple-instance learning. By introducing parallel modules that alleviate optimization issues and integrating multiple-instance learning with the Transformer architecture, the model effectively extracts information from image patches. It achieves an accuracy of 86% and an AUC score of 0.963 on an osteoporosis dataset, demonstrating promising and competitive performance.
Keywords: parallel transformer; multiple instance learning; weakly-supervised classification
7. On Equivalence Between the Sequential Circuits in Series and in Parallel
Authors: YAO Tianzhong, HU Zhenghao. 《苏州大学学报(自然科学版)》 (CAS), 1990, Issue 2, pp. 181-186.
Sequential circuits built from ternary D flip-flops connected in series are proposed. The equivalence between sequential circuits with p-valued flip-flops in series and in parallel is discussed as part of the study of multiple-valued logic circuits.
Keywords: sequential circuits; logic circuits; sequential machines; finite state machines
8. Parallel Learning: a Perspective and a Framework (Cited by 28)
Authors: Li Li, Yilun Lin, Nanning Zheng, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE/EI/CSCD), 2017, Issue 3, pp. 389-395.
The development of machine learning in complex systems is currently hindered by two problems. The first is the inefficiency of exploration in state and action spaces, which leads to the data hunger of some state-of-the-art data-driven algorithms. The second is the lack of a general theory for analyzing and implementing a complex learning system. In this paper, we propose a general method that addresses both issues. We combine the concepts of descriptive learning, predictive learning, and prescriptive learning into a uniform framework, so as to build a parallel system that allows a learning system to improve by self-boosting. Formulating a new perspective on data, knowledge, and action, we provide a new methodology called parallel learning for designing machine learning systems for real-world problems.
Keywords: descriptive learning; machine learning; parallel learning; parallel systems; predictive learning; prescriptive learning
9. Parallel Reinforcement Learning: A Framework and Case Study (Cited by 8)
Authors: Teng Liu, Bin Tian, Yunfeng Ai, Li Li, Dongpu Cao, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE/EI/CSCD), 2018, Issue 4, pp. 827-835.
In this paper, a new machine learning framework, called parallel reinforcement learning, is developed for complex system control. To overcome the data deficiency of current data-driven algorithms, a parallel system is built to improve the complex learning system by self-guidance. Based on Markov chain (MC) theory, we combine transfer learning, predictive learning, deep learning, and reinforcement learning to handle the data and action processes and to express the knowledge. The parallel reinforcement learning framework is formulated, and several case studies for real-world problems are introduced.
Keywords: deep learning; machine learning; parallel reinforcement learning; parallel system; predictive learning; transfer learning
10. Parallel Reinforcement Learning-Based Energy Efficiency Improvement for a Cyber-Physical System (Cited by 14)
Authors: Teng Liu, Bin Tian, Yunfeng Ai, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (EI/CSCD), 2020, Issue 2, pp. 617-626.
As a complex and critical cyber-physical system (CPS), the hybrid electric powertrain is significant for mitigating air pollution and improving fuel economy, and the energy management strategy (EMS) plays a key role in improving the energy efficiency of this CPS. This paper presents a novel bidirectional long short-term memory (LSTM) network based parallel reinforcement learning (PRL) approach to construct an EMS for a hybrid tracked vehicle (HTV). The method contains two levels. The high level first establishes a parallel system, which includes a real powertrain system and an artificial system; the synthesized data from this parallel system is then used to train a bidirectional LSTM network. The lower level determines the optimal EMS using the trained action-state function in a model-free reinforcement learning (RL) framework. PRL is a fully data-driven and learning-enabled approach that does not depend on any prediction or predefined rules. Finally, real vehicle testing is implemented, and the relevant experimental data is collected and calibrated. The results validate that the proposed EMS achieves considerable energy-efficiency improvement compared with conventional RL and deep RL approaches.
Keywords: bidirectional long short-term memory (LSTM) network; cyber-physical system (CPS); energy management; parallel system; reinforcement learning (RL)
11. Novel Sequential Neural Network Learning Algorithm for Function Approximation (Cited by 1)
Authors: KANG Huaiqi, SHI Caicheng, HE Peikun, LI Xiaoqiong. Journal of Beijing Institute of Technology (EI/CAS), 2007, Issue 2, pp. 197-200.
A novel sequential neural network learning algorithm for function approximation is presented. A multi-step-ahead output predictor for the stochastic time series is introduced into the growing-and-pruning network for constructing the network structure, and the network parameters are adjusted by the proportional differential filter (PDF) rather than the EKF when the network growing criteria are not met. Experimental results show that the proposed algorithm yields a more compact network and a smaller error in the mean-square sense than other typical sequential learning algorithms.
Keywords: sequential learning; predictor; proportional differential filter (PDF); neural network
12. Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning (Cited by 1)
Authors: Xin Luo, Wen Qin, Ani Dong, Khaled Sedraoui, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica (SCIE/EI/CSCD), 2021, Issue 2, pp. 402-411.
A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into the training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, an MLF model outperforms existing state-of-the-art models in both computational efficiency and scalability.
Keywords: big data; industrial application; industrial data; latent factor analysis; machine learning; parallel algorithm; recommender system (RS); stochastic gradient descent (SGD)
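The two ideas behind MPSGD, conflict-free data splitting and momentum, can be sketched in a few dozen lines. Everything below (block pattern, hyperparameters, toy rating matrix) is illustrative rather than the paper's algorithm, and the block loop runs serially here so the update rule stays readable; in a real run, the disjoint blocks within each stage would be updated by parallel workers without index conflicts.

```python
import numpy as np

rng = np.random.default_rng(1)

def mpsgd_epoch(R, P, Q, VP, VQ, lr=0.01, mom=0.8, reg=0.05, n_blocks=2):
    """One epoch of momentum-incorporated SGD for a latent factor model.

    The rating matrix is split into disjoint row/column blocks; within a
    stage s, the blocks (blk, (blk+s) % n_blocks) share no rows or columns,
    so parallel workers could update them concurrently.
    """
    rows = np.array_split(np.arange(R.shape[0]), n_blocks)
    cols = np.array_split(np.arange(R.shape[1]), n_blocks)
    for s in range(n_blocks):
        for blk in range(n_blocks):          # these blocks are conflict-free
            for u in rows[blk]:
                for i in cols[(blk + s) % n_blocks]:
                    if np.isnan(R[u, i]):    # NaN marks an unobserved rating
                        continue
                    e = R[u, i] - P[u] @ Q[i]
                    gP = -e * Q[i] + reg * P[u]
                    gQ = -e * P[u] + reg * Q[i]
                    VP[u] = mom * VP[u] - lr * gP   # momentum accumulators
                    VQ[i] = mom * VQ[i] - lr * gQ
                    P[u] += VP[u]
                    Q[i] += VQ[i]

def loss(R, P, Q):
    mask = ~np.isnan(R)
    return (((R - P @ Q.T)[mask]) ** 2).mean()

# Tiny sparse rating matrix (NaN = unobserved entry).
R = rng.uniform(1, 5, size=(8, 6))
R[rng.random(R.shape) < 0.3] = np.nan
P = 0.1 * rng.normal(size=(8, 3))
Q = 0.1 * rng.normal(size=(6, 3))
VP, VQ = np.zeros_like(P), np.zeros_like(Q)
before = loss(R, P, Q)
for _ in range(30):
    mpsgd_epoch(R, P, Q, VP, VQ)
after = loss(R, P, Q)
```

The diagonal block schedule is the standard trick for parallel matrix-factorization SGD: two workers touching different row and column blocks can never race on the same latent vectors.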
13. Multi-task Coalition Parallel Formation Strategy Based on Reinforcement Learning (Cited by 6)
Authors: JIANG Jian-Guo, SU Zhao-Pin, QI Mei-Bin, ZHANG Guo-Fu. 《自动化学报》 (EI/CSCD/PKU Core), 2008, Issue 3, pp. 349-352.
Agent coalition is an important manner of agent coordination and cooperation. By forming a coalition, agents can enhance their ability to solve problems and obtain more utility. This paper introduces a novel multi-task coalition parallel formation strategy and proves theoretically that the process of multi-task coalition formation is a Markov decision process. Moreover, reinforcement learning is used to solve the agents' behavior strategy in parallel multi-task coalition formation, and the formation process is described. In multi-task-oriented domains, the strategy can form multi-task coalitions effectively and in parallel.
Keywords: reinforcement learning; multi-task coalition; parallel formation; Markov decision process
14. A Distributed Algorithm for Parallel Multi-task Allocation Based on Profit Sharing Learning (Cited by 7)
Authors: SU Zhao-Pin, JIANG Jian-Guo, LIANG Chang-Yong, ZHANG Guo-Fu. 《自动化学报》 (EI/CSCD/PKU Core), 2011, Issue 7, pp. 865-872.
Task allocation via coalition formation is a fundamental research challenge in several application domains of multi-agent systems (MAS), such as resource allocation and disaster response management. This paper mainly addresses how to allocate many unresolved tasks to agents in a distributed manner. We propose a distributed parallel multi-task allocation algorithm among self-organizing, self-learning agents. Agents and tasks are geographically dispersed in a two-dimensional space, and profit sharing learning (PSL) is introduced so that a single agent can continually self-learn while seeking its tasks. Strategies for communication and negotiation among agents are also introduced to allocate the real workload to each tasked agent. Finally, to evaluate its effectiveness, the proposed algorithm is compared with the distributed task allocation algorithm of Shehory and Kraus, which has been widely discussed by many researchers in recent years. Experimental results show that the proposed algorithm can quickly form a solving coalition for each task. Moreover, it explicitly reports the real workload of each tasked agent and can therefore provide a specific and important reference for practical control tasks.
Keywords: automation systems; automation technology; ICA; data processing
15. Deep Sequential Feature Learning in Clinical Image Classification of Infectious Keratitis
Authors: Yesheng Xu, Ming Kong, Wenjia Xie, Runping Duan, Zhengqing Fang, Yuxiao Lin, Qiang Zhu, Siliang Tang, Fei Wu, Yu-Feng Yao. Engineering (SCIE/EI), 2021, Issue 7, pp. 1002-1010.
Infectious keratitis is the most common condition among corneal diseases, in which a pathogen grows in the cornea, leading to inflammation and destruction of the corneal tissues. It is a medical emergency requiring a rapid and accurate diagnosis to ensure prompt, precise treatment that halts disease progression and limits the extent of corneal damage; otherwise, it may develop into a sight-threatening and even eye-globe-threatening condition. In this paper, we propose a sequential-level deep model that effectively discriminates infectious corneal diseases via the classification of clinical images. The approach devises an appropriate mechanism to preserve the spatial structures of clinical images and disentangle the informative features for clinical image classification of infectious keratitis. In a comparison, the proposed sequential-level deep model achieved 80% diagnostic accuracy, far better than the 49.27% ± 11.5% diagnostic accuracy achieved by 421 ophthalmologists over 120 test images.
Keywords: deep learning; corneal disease; sequential features; machine learning; long short-term memory
16. Collaborative Clustering Parallel Reinforcement Learning for Edge-Cloud Digital Twins Manufacturing System
Authors: Fan Yang, Tao Feng, Fangmin Xu, Huiwen Jiang, Chenglin Zhao. China Communications (SCIE/CSCD), 2022, Issue 8, pp. 138-148.
To realize high-accuracy physical-cyber digital twin (DT) mapping in a manufacturing system, a huge amount of data needs to be collected and analyzed in real time. Traditional DT systems are deployed in cloud or edge servers independently, but they are hard to apply in real production systems due to the high interaction or execution delay, which results in low consistency in the temporal dimension of the physical-cyber model. In this work, we propose a novel, efficient edge-cloud DT manufacturing system inspired by resource scheduling technology. Specifically, an edge-cloud collaborative DT system deployment architecture is first constructed. Then, deterministic and uncertainty-aware adaptive optimization strategies are presented to choose a more powerful server for running DT-based applications. We model the adaptive optimization problems as dynamic programming problems and propose a novel collaborative clustering parallel Q-learning (CCPQL) algorithm and prediction-based CCPQL to solve them. The proposed approach reduces the total delay with a higher convergence rate. Numerical simulation results validate the approach, which has great potential in dynamic and complex industrial internet environments.
Keywords: edge-cloud collaboration; digital twins; job shop scheduling; parallel reinforcement learning
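Stripped of the clustering and prediction machinery, the server-selection core of such a scheme reduces to Q-learning over a choice between an edge and a cloud server with different delays. The delay figures, reward shape, and single-state simplification below are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed mean response delays (seconds) of two candidate servers
# for running a DT-based application: 0 = cloud, 1 = edge.
MEAN_DELAY = {0: 0.8, 1: 0.3}

def step(action):
    """Reward is the negative of a noisy observed delay."""
    return -rng.exponential(MEAN_DELAY[action])

# Single-state Q-learning: learn which server minimizes expected delay.
Q = np.zeros(2)
alpha, eps = 0.1, 0.2          # learning rate and epsilon-greedy exploration
for t in range(3000):
    a = rng.integers(2) if rng.random() < eps else int(Q.argmax())
    r = step(a)
    Q[a] += alpha * (r - Q[a])  # bandit-style update: no next state here

best_server = int(Q.argmax())   # expected: edge, since its mean delay is lower
```

In the paper's setting the state would encode queue and network conditions, and multiple clustered learners would share their Q-values; the update rule itself is the same.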
17. Unrelated Parallel-Machine Scheduling Problems with General Truncated Job-Dependent Learning Effect
Authors: Jibo Wang, Chou-Jung Hsu. Journal of Applied Mathematics and Physics, 2016, Issue 1, pp. 21-27.
In this paper, we consider scheduling problems with a general truncated job-dependent learning effect on unrelated parallel machines. The objective functions are to minimize the total machine load, the total completion (waiting) time, and the total absolute differences in completion (waiting) times, respectively. If the number of machines m is fixed, these problems can be solved in time polynomial in the number of jobs n.
Keywords: scheduling; unrelated parallel machines; truncated job-dependent learning
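A truncated job-dependent learning effect of the standard form gives the job in position r an actual time p_j · max(r^{a_j}, θ), so the truncation parameter θ bounds how much any job can speed up. A minimal sketch with made-up numbers (the exact effect function in the paper may be more general):

```python
def actual_time(p, a, r, theta):
    """Truncated job-dependent learning effect: p * max(r**a, theta).

    p     : normal processing time of the job
    a     : job-dependent learning index (a <= 0)
    r     : position of the job in the machine's sequence (1-based)
    theta : truncation parameter in (0, 1) capping the speed-up
    """
    return p * max(r ** a, theta)

def total_completion_time(jobs, theta=0.8):
    """Sum of completion times for a sequence of (p, a) jobs on one machine."""
    t, total = 0.0, 0.0
    for r, (p, a) in enumerate(jobs, start=1):
        t += actual_time(p, a, r, theta)
        total += t
    return total

# Two jobs with learning index -0.5: without truncation the second job
# would take 2 * 2**-0.5 ~ 1.414, but theta = 0.8 caps the speed-up,
# giving 2 * 0.8 = 1.6. Completion times: 4.0 and 5.6, total 9.6.
tct = total_completion_time([(4.0, -0.5), (2.0, -0.5)])
```

The polynomial-time results for fixed m typically come from expressing each objective as a sum of positional weights times processing times and solving the resulting assignment problem.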
18. Attenuate Class Imbalance Problem for Pneumonia Diagnosis Using Ensemble Parallel Stacked Pre-Trained Models
Authors: Aswathy Ravikumar, Harini Sriraman. Computers, Materials & Continua (SCIE/EI), 2023, Issue 4, pp. 891-909.
Pneumonia is an acute lung infection that has caused many fatalities globally. Radiologists often employ chest X-rays to identify pneumonia, since they are presently the most effective imaging method for this purpose. Computer-aided diagnosis of pneumonia using deep learning techniques is widely used due to its effectiveness and performance. In the proposed method, the Synthetic Minority Oversampling Technique (SMOTE) is used to eliminate the class imbalance in the X-ray dataset. To compensate for the paucity of accessible data, pre-trained transfer learning is used, and an ensemble Convolutional Neural Network (CNN) model is developed from all possible combinations of the MobileNetV2, Visual Geometry Group (VGG16), and DenseNet169 models. MobileNetV2 and DenseNet169 performed well as single classifiers, with an accuracy of 94%, while the ensemble model (MobileNetV2 + DenseNet169) achieved an accuracy of 96.9%. Using the synchronous data-parallel model in distributed TensorFlow, the training process accelerated performance by 98.6% and outperformed other conventional approaches.
Keywords: pneumonia prediction; distributed deep learning; data parallel model; ensemble deep learning; class imbalance; skewed data
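SMOTE's core operation, interpolating between a minority sample and one of its nearest minority neighbors, can be re-implemented in a few lines; a real pipeline would use imbalanced-learn's `SMOTE` on the extracted X-ray features. The toy 2-D data and counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def smote(X_min, n_new, k=5):
    """Minimal SMOTE: synthesize minority samples by interpolating between
    a random minority point and one of its k nearest minority neighbors."""
    synthetic = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[j], axis=1)
        neighbors = np.argsort(d)[1:k + 1]      # skip the point itself
        nb = X_min[rng.choice(neighbors)]
        gap = rng.random()                      # random point on the segment
        synthetic[i] = X_min[j] + gap * (nb - X_min[j])
    return synthetic

# Imbalanced toy set: 40 majority vs 10 minority samples.
X_maj = rng.normal(0.0, 1.0, size=(40, 2))
X_min = rng.normal(3.0, 1.0, size=(10, 2))
X_new = smote(X_min, n_new=30)                  # balance: 10 + 30 = 40
n_minority_after = len(X_min) + len(X_new)
```

Because synthetic points lie on segments between real minority samples, they stay inside the minority region instead of merely duplicating points, which is what distinguishes SMOTE from plain random oversampling.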
19. Parallel solving model for quantified boolean formula based on machine learning
Authors: LI Tao, XIAO Nanfeng. Journal of Central South University (SCIE/EI/CAS), 2013, Issue 11, pp. 3156-3165.
A new parallel architecture for quantified boolean formula (QBF) solving was proposed, together with a machine-learning-based model for predicting how knowledge sharing affects solving performance in a QBF parallel solving system; an experimental evaluation scheme was also designed. The experiments show that the characterization factors of clauses and cubes influence solving performance markedly. A heuristic machine learning algorithm was applied: a support vector machine was chosen to predict the performance of the QBF parallel solving system based on clause sharing and cube sharing, with the relative error of the prediction controlled within a reasonable range of 20% to 30%. The results show the important and complex role that knowledge sharing plays in any modern parallel solver: with machine learning, the parallel solver reduces the quantity of knowledge sharing by about 30%, saving computational resources without reducing the performance of the solving system.
Keywords: machine learning algorithm; parallel solving; quantified formula; knowledge sharing; prediction model; support vector machine
20. A Novel Mixed Precision Distributed TPU GAN for Accelerated Learning Curve
Authors: Aswathy Ravikumar, Harini Sriraman. Computer Systems Science & Engineering (SCIE/EI), 2023, Issue 7, pp. 563-578.
Deep neural networks are gaining importance and popularity in applications and services. Due to the enormous number of learnable parameters and large datasets, the training of neural networks is computationally costly, and parallel and distributed computation-based strategies are used to accelerate this training process. Generative Adversarial Networks (GAN) are a recent achievement in deep learning. These generative models are computationally expensive because a GAN consists of two neural networks and trains on enormous datasets; typically, a GAN is trained on a single server. Conventional deep learning accelerator designs are also challenged by the unique properties of GANs, such as the enormous computation stages with non-traditional convolution layers. This work addresses the issue of distributing GANs so that they can train on datasets distributed over many TPUs (Tensor Processing Units); distributed training accelerates the learning process and decreases computation time. In this paper, the GAN is accelerated using multi-core TPUs in a distributed data-parallel synchronous model. For adequate acceleration of the GAN network, the data-parallel SGD (Stochastic Gradient Descent) model is implemented on multi-core TPUs using distributed TensorFlow with mixed precision, bfloat16, and XLA (Accelerated Linear Algebra). The study was conducted on the MNIST dataset for batch sizes varying from 64 to 512 over 30 epochs with distributed SGD on TPU v3 with a 128×128 systolic array. A large-batch technique is implemented in bfloat16 to decrease the storage cost and speed up floating-point computations. The accelerated learning curves for the generator and discriminator networks are obtained, and the training time was reduced by 79% by varying the batch size from 64 to 512 on multi-core TPUs.
Keywords: data parallel; distributed model; generative model; learning curve; mixed precision
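Under TensorFlow's standard TPU workflow, the combination described above (synchronous data parallelism plus bfloat16 mixed precision) reduces to a short configuration sketch. This assumes a TPU runtime is attached and reproduces nothing from the paper's code; it will not run on a plain CPU host.

```python
import tensorflow as tf

# Connect to the attached TPU and build a synchronous data-parallel strategy
# (standard TF TPU workflow; the empty tpu="" address assumes a Cloud TPU VM).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# bfloat16 compute with float32 variables halves activation storage and
# speeds up the 128x128 systolic-array matmuls the paper exploits.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

with strategy.scope():
    # The generator and discriminator would be built here; under synchronous
    # data-parallel SGD each replica then processes a slice of the global batch
    # and gradients are all-reduced across cores every step.
    global_batch = 512
    per_replica_batch = global_batch // strategy.num_replicas_in_sync
```

Larger global batches amortize the per-step all-reduce, which is one reason the paper's 79% training-time reduction comes from scaling the batch from 64 to 512.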