Funding: Supported by the National Natural Science Foundation of China (62394343), the Major Program of Qingyuan Innovation Laboratory (00122002), the Major Science and Technology Projects of Longmen Laboratory (231100220600), the Shanghai Committee of Science and Technology (23ZR1416000), and Shanghai AI Lab.
Abstract: The optimization of process parameters in polyolefin production can bring significant economic benefits to the plant. However, the optimization process is often restricted by small data sets, the high cost of parameter verification cycles, and the difficulty of establishing an optimization model. To address this issue, we propose a transfer learning Bayesian optimization strategy that improves the efficiency of parameter optimization while minimizing resource consumption. Specifically, we leverage Gaussian process (GP) regression models to establish an integrated model that incorporates both source and target grade production task data. We then measure the similarity weight of each model by comparing their predicted trends, and use these weights to accelerate the search for optimal process parameters for producing target polyolefin grades. Because measuring similarity in a global search space may not effectively capture local similarity characteristics, we further propose a transfer learning optimization method that operates within a local space (LSTL-PBO). This method employs partial data acquired through random sampling from the target task data and uses Bayesian optimization techniques for model establishment. By focusing on a local search space, we can better discern and exploit the inherent similarities between the source tasks and the target task. Additionally, we incorporate a parallel scheme that handles multiple local search spaces simultaneously, exploring different regions of the parameter space in parallel and thereby increasing the chance of finding optimal process parameters. This localized approach improves the precision and effectiveness of the optimization process. The performance of the method is validated on benchmark problems, and the sensitivity of its hyperparameters is discussed. The results show that the proposed method can significantly improve the efficiency of process parameter optimization, reduce the dependence on source tasks, and enhance the method's robustness, which gives it great potential for process optimization in industrial environments.
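As a concrete illustration of the similarity-weighted GP ensemble described in this abstract, the sketch below fits one GP per source task plus one on the target data, weights each source model by how well its predicted trend ranks the observed target outcomes, and proposes the next parameter setting by expected improvement over a candidate set. The kernel choice, the Kendall-tau weighting rule, and the candidate-grid acquisition are illustrative assumptions, not the paper's exact LSTL-PBO implementation.

```python
# Minimal sketch (assumptions noted above) of a similarity-weighted ensemble
# of Gaussian-process models for transfer Bayesian optimization.
import numpy as np
from scipy.stats import kendalltau, norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def fit_gp(X, y):
    """Fit one GP regression model on a (source or target) task data set."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    return gp

def similarity_weights(source_gps, X_t, y_t):
    """Weight each source GP by how well its predicted trend matches the
    target observations (Kendall rank correlation, clipped at zero)."""
    weights = []
    for gp in source_gps:
        tau, _ = kendalltau(gp.predict(X_t), y_t)
        weights.append(0.0 if np.isnan(tau) else max(tau, 0.0))
    weights.append(1.0)  # the target-task GP always keeps full weight
    w = np.asarray(weights)
    return w / w.sum()

def ensemble_predict(gps, w, X):
    """Weighted mean/std of the GP ensemble (simple linear pooling)."""
    mus, sigmas = zip(*(gp.predict(X, return_std=True) for gp in gps))
    mu = np.average(np.stack(mus), axis=0, weights=w)
    sigma = np.average(np.stack(sigmas), axis=0, weights=w)
    return mu, sigma

def expected_improvement(mu, sigma, y_best):
    """Standard EI acquisition for minimization."""
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def suggest_next(source_data, X_t, y_t, candidates):
    """Propose the next process-parameter setting to evaluate."""
    source_gps = [fit_gp(Xs, ys) for Xs, ys in source_data]
    gps = source_gps + [fit_gp(X_t, y_t)]
    w = similarity_weights(source_gps, X_t, y_t)
    mu, sigma = ensemble_predict(gps, w, candidates)
    return candidates[np.argmax(expected_improvement(mu, sigma, y_t.min()))]
```

In the paper's local-space variant, the same weighting would be recomputed inside each randomly sampled local region rather than once over the global search space.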
Abstract: Although there are numerous reports on horizontal comparisons and process revamping, the coal-to-ethylene-glycol process still lacks algorithmic optimization results for distillation sequencing because of its high dimensionality and strong nonconvexity. This paper therefore addresses the simultaneous optimization of parameters and heat integration for the coal-to-ethylene-glycol distillation scheme and a double-effect superstructure using a self-adapting dynamic differential evolution algorithm. To mitigate the influence of the strong nonconvexity, a redistribution strategy is adopted that forcibly expands the population search domain by exerting external influence and then shrinks it again to judge the global optimal solution. After two redistribution operations under the parallel framework, the total annual cost and CO₂ emissions are 0.61%/1.85% better for the optimized process and 3.74%/14.84% better for the superstructure than with sequential optimization. However, the thermodynamic efficiency of sequential optimization is 11.63% and 10.34% higher than that of simultaneous optimization. This study discloses the long-overlooked energy-saving potential of the coal-to-ethylene-glycol process, as well as the strong ability of the self-adapting dynamic differential evolution algorithm to optimize processes described by high-dimensional mathematical models.
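The redistribution strategy can be pictured with a small differential-evolution sketch: at a few preset generations, the worse half of the population is re-seeded over an externally expanded domain and then clipped back into the feasible box before evolution continues. The control parameters, the half-population re-seeding rule, and the objective interface are illustrative assumptions, not the paper's self-adapting algorithm or its flowsheet model.

```python
# Illustrative sketch of the redistribution idea combined with a plain
# DE/rand/1/bin loop; not the paper's self-adapting dynamic variant.
import numpy as np

def de_with_redistribution(objective, bounds, pop_size=40, generations=200,
                           F=0.6, CR=0.9, redistributions=2, expand=1.5,
                           seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T   # bounds: list of (low, high)
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.apply_along_axis(objective, 1, pop)
    redistribute_at = np.linspace(0, generations, redistributions + 2,
                                  dtype=int)[1:-1]

    for g in range(generations):
        if g in redistribute_at:
            # Redistribution: re-seed the worse half of the population over an
            # externally expanded domain, then clip back to the feasible box.
            span = (hi - lo) * (expand - 1.0) / 2.0
            worst = np.argsort(fit)[pop_size // 2:]
            pop[worst] = np.clip(rng.uniform(lo - span, hi + span,
                                             size=(worst.size, dim)), lo, hi)
            fit[worst] = np.apply_along_axis(objective, 1, pop[worst])

        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, pop[i])
            f_trial = objective(trial)
            if f_trial < fit[i]:                 # greedy selection
                pop[i], fit[i] = trial, f_trial

    best = int(np.argmin(fit))
    return pop[best], fit[best]
```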
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61003082, 60921062, 61005077).
Abstract: Spiking neural networks are inspired by animal brains and outperform traditional neural networks on complicated tasks. However, they are usually deployed at large scale and cannot be computed on commercial, off-the-shelf computers. A parallel architecture is proposed and developed for discrete-event simulation of spiking neural networks. Furthermore, mechanisms for both parallelism-degree estimation and dynamic load balancing are emphasized, with theoretical and computational analysis. Simulation results show the effectiveness of the proposed parallelized spiking neural network system and its supporting components.
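A toy sketch of the two support mechanisms mentioned above, under the assumption that per-neuron event rates are a reasonable load proxy: the parallelism degree is estimated from the total expected event count, and neurons are assigned to workers by a greedy longest-processing-time heuristic. This is an illustration, not the paper's simulator.

```python
# Toy sketch: parallelism-degree estimation and load-balanced neuron
# partitioning from assumed per-neuron event rates.
import numpy as np

def estimate_parallelism(event_rates, events_per_worker=50_000):
    """Rough parallelism degree: total expected events / per-worker budget."""
    return max(1, int(np.ceil(event_rates.sum() / events_per_worker)))

def balance_partitions(event_rates, n_workers):
    """Greedy longest-processing-time assignment of neurons to workers so the
    expected event load per worker is roughly equal."""
    loads = np.zeros(n_workers)
    partitions = [[] for _ in range(n_workers)]
    for neuron in np.argsort(event_rates)[::-1]:   # heaviest neurons first
        w = int(np.argmin(loads))                  # least-loaded worker
        partitions[w].append(int(neuron))
        loads[w] += event_rates[neuron]
    return partitions, loads

# Example: 10,000 neurons with heterogeneous firing activity.
rates = np.random.default_rng(1).lognormal(mean=3.0, sigma=1.0, size=10_000)
n_workers = estimate_parallelism(rates)
parts, loads = balance_partitions(rates, n_workers)
print(n_workers, loads.max() / loads.mean())       # imbalance close to 1.0
```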
Funding: The work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61133004, the National High Technology Research and Development 863 Program of China under Grant No. 2012AA01A302, and the NSFC Projects of International Cooperation and Exchanges under Grant No. 61361126011.
Abstract: Nowadays, numerous images exist on the Internet, and with the development of cloud computing and big data applications, many of them need to be processed for different kinds of applications using specific image processing algorithms. Meanwhile, many image processing algorithms and their variations already exist, while new algorithms are still emerging. Consequently, an ongoing problem is how to improve the efficiency of massive image processing and to support the integration of existing implementations of image processing algorithms into such systems. This paper proposes a distributed image processing system named SEIP, which is built on Hadoop and employs an extensible in-node architecture to support various kinds of image processing algorithms on distributed platforms with GPU accelerators. The system also uses a pipeline-based framework to accelerate massive image file processing. A demonstration application for image feature extraction is designed. The system is evaluated on a small-scale Hadoop cluster with GPU accelerators, and the experimental results show the usability and efficiency of SEIP.
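The pipeline idea can be illustrated with a minimal thread-and-queue sketch that overlaps image loading, processing, and storing so that the (possibly GPU-backed) processing stage is never starved of input. The stage functions and queue depth are placeholders; SEIP's actual Hadoop/GPU implementation is not reproduced here.

```python
# Minimal sketch of a bounded three-stage pipeline: load -> process -> store.
import queue
import threading

SENTINEL = object()

def stage(worker, inbox, outbox):
    """One pipeline stage: pull an item, process it, push it downstream."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)
            return
        outbox.put(worker(item))

def run_pipeline(paths, load, process, store, depth=8):
    """Run the pipeline over image paths, keeping at most `depth` items in
    flight between any two stages."""
    q_in, q_mid, q_out = (queue.Queue(maxsize=depth) for _ in range(3))
    workers = [threading.Thread(target=stage, args=(load, q_in, q_mid)),
               threading.Thread(target=stage, args=(process, q_mid, q_out))]
    for t in workers:
        t.start()

    def feed():
        for p in paths:
            q_in.put(p)
        q_in.put(SENTINEL)

    feeder = threading.Thread(target=feed)
    feeder.start()

    results = []
    while (item := q_out.get()) is not SENTINEL:   # drain on the caller thread
        results.append(store(item))

    feeder.join()
    for t in workers:
        t.join()
    return results
```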