Abstract: An adaptive chaotic gradient-descending optimization algorithm for single-objective optimization was presented. A local minimum, identified by two stopping rules, was obtained by an improved mutative-step gradient descending method. A new, better minimum was then found by a mutative-scale chaotic search algorithm, whose search scale is gradually magnified from a small initial value, to replace the local minimum and escape it. The global optimal value was attained by repeating this iteration. Finally, a BP (back-propagation) neural network model for forecasting slag output in matte converting was established, and the algorithm was used to train its weights. Simulation results with a training set of 400 samples show that training finishes within 300 steps, reaches the global optimal value, and escapes local minima effectively. An optimization system for operating parameters, built around the forecasting model, was implemented; with it, converter output increased by 6.0% and the amount of cold materials treated rose by 7.8% in the matte converting process.
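The abstract gives no pseudocode; the following is only a minimal sketch of the general scheme it describes, pairing a mutative-step gradient descent with a logistic-map chaotic search whose radius is gradually magnified. All function names, parameter values, and the step-adaptation rules are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def numeric_grad(f, x, h=1e-6):
    """Central-difference gradient estimate."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def chaotic_gd(f, x0, rounds=20, gd_steps=300, chaos_steps=400,
               rng=np.random.default_rng(0)):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(rounds):
        # Phase 1: mutative-step gradient descent down to a local minimum.
        step = 0.1
        for _ in range(gd_steps):
            cand = x - step * numeric_grad(f, x)
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc
                step *= 1.2                 # grow the step while improving
            else:
                step *= 0.5                 # shrink it otherwise
                if step < 1e-10:            # step exhausted: treat as local minimum
                    break
        # Phase 2: chaotic search around x, scale magnified from small to large.
        z = rng.uniform(0.1, 0.9, size=x.shape)  # per-dimension logistic-map states
        escaped = False
        for k in range(chaos_steps):
            z = 4.0 * z * (1.0 - z)              # logistic map, chaotic in (0, 1)
            radius = 1e-3 * 1.02 ** k            # gradually magnified search scale
            cand = x + radius * (2.0 * z - 1.0)
            fc = f(cand)
            if fc < fx:                          # better point found: escape and
                x, fx, escaped = cand, fc, True  # restart the descent from it
                break
        if not escaped:
            break                                # accept x as the global optimum
    return x, fx
```

Called on a multimodal test function, e.g. chaotic_gd(lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z)), np.array([3.0, -2.5])), the chaotic phase repeatedly kicks the search out of the local basins that plain descent gets stuck in.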
Funding: Project (No. 60721062) supported by the National Creative Research Groups Science Foundation of China
Abstract: Given the critical role of endpoint quality prediction for basic oxygen furnaces (BOFs) in steelmaking, and drawing on recent results in computational intelligence (CI), this paper develops a novel memetic algorithm (MA) for neural network (NN) learning that integrates extremal optimization (EO) with Levenberg-Marquardt (LM) gradient search, and applies it to BOF endpoint quality prediction. The fundamental analysis reveals that the proposed EO-LM algorithm can deliver superior generalization, computational efficiency, and avoidance of local minima compared with traditional NN learning methods. Experimental results with production-scale BOF data show that the proposed method effectively improves the NN model for BOF endpoint quality prediction.
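As a rough illustration of such an EO-LM hybrid (not the paper's implementation), the sketch below alternates a Levenberg-Marquardt refinement of a small one-hidden-layer network, via scipy.optimize.least_squares(method="lm"), with a tau-EO step that mutates a power-law-selected "unfit" weight. The weight-level fitness proxy, the Cauchy mutation, and all parameter values are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import least_squares

def nn_residuals(w, X, y, n_hidden):
    """Residuals of a one-hidden-layer tanh network, weights flattened in w."""
    n_in = X.shape[1]
    k = n_in * n_hidden
    W1 = w[:k].reshape(n_in, n_hidden)
    b1 = w[k:k + n_hidden]
    W2 = w[k + n_hidden:k + 2 * n_hidden]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2 - y

def eo_lm_train(X, y, n_hidden=6, outer_iters=15, tau=1.5,
                rng=np.random.default_rng(0)):
    n_w = X.shape[1] * n_hidden + 2 * n_hidden + 1
    assert X.shape[0] >= n_w, "method='lm' needs at least as many samples as weights"
    w = rng.normal(scale=0.5, size=n_w)
    best_w, best_cost = w.copy(), np.inf
    for _ in range(outer_iters):
        # Local phase: LM gradient search refines all weights at once.
        res = least_squares(nn_residuals, w, args=(X, y, n_hidden), method="lm")
        w = res.x.copy()
        if res.cost < best_cost:
            best_w, best_cost = w.copy(), res.cost
        # Global phase (tau-EO): treat |gradient| as the "unfitness" of each
        # weight, pick a rank by the power law P(rank) ~ rank^-tau, and give
        # that weight a heavy-tailed Cauchy kick to leave the current basin.
        unfitness = np.abs(res.jac.T @ res.fun)   # gradient of 0.5 * ||r||^2
        ranks = np.argsort(-unfitness)            # rank 0 = most unfit weight
        p = np.arange(1, n_w + 1, dtype=float) ** -tau
        idx = ranks[rng.choice(n_w, p=p / p.sum())]
        w[idx] += 0.5 * rng.standard_cauchy()
    return best_w, best_cost
```

The paper describes the EO phase only at the level of NN learning in general; applying it per weight with a gradient-based fitness, as here, is just one plausible reading.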
Abstract: In this paper, we propose enhancements to the beetle antennae search (BAS) algorithm, called BAS-ADAM, to smoothen the convergence behavior and avoid trapping in local minima for highly non-convex objective functions. We achieve this by adaptively adjusting the step size in each iteration using the adaptive moment estimation (ADAM) update rule. The proposed algorithm also increases the convergence rate in narrow valleys. A key feature of the ADAM update rule is its ability to adjust the step size for each dimension separately, instead of using a single step size for all dimensions. Since ADAM is traditionally used with gradient-based optimization algorithms, we first propose a gradient estimation model that does not require differentiating the objective function. The resulting algorithm demonstrates excellent performance and a fast convergence rate in searching for the optima of non-convex functions. Its efficiency was tested on three benchmark problems, including the training of a high-dimensional neural network, and its performance was compared with the particle swarm optimizer (PSO) and the original BAS algorithm.
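Again a sketch under stated assumptions (the parameter values, antenna-shrink schedule, and demo function are mine, and the published BAS-ADAM differs in details): the two antennae give a derivative-free gradient estimate along a random direction, and ADAM turns that estimate into per-dimension adaptive steps, which is what lets the method keep moving along a narrow valley.

```python
import numpy as np

def bas_adam(f, x0, iters=500, d=0.5, alpha=0.05,
             beta1=0.9, beta2=0.999, eps=1e-8,
             rng=np.random.default_rng(0)):
    """Beetle antennae search with an ADAM step-size rule (sketch)."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)                         # first-moment (mean) estimate
    v = np.zeros_like(x)                         # second-moment estimate
    best_x, best_f = x.copy(), f(x)
    for t in range(1, iters + 1):
        b = rng.normal(size=x.shape)
        b /= np.linalg.norm(b) + 1e-12           # random antenna direction
        # Two antenna evaluations -> derivative-free gradient estimate.
        g = b * (f(x + d * b) - f(x - d * b)) / (2.0 * d)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)             # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        x = x - alpha * m_hat / (np.sqrt(v_hat) + eps)  # per-dimension step
        d = max(0.95 * d, 0.01)                  # slowly shrink antenna length
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Demo on a 2-D Rastrigin-style non-convex function.
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
print(bas_adam(rastrigin, np.array([3.0, -2.0])))
```

Because v_hat is accumulated per dimension, the effective step shrinks along the steep walls of a valley while staying large along its flat floor, which is the convergence-rate benefit the abstract attributes to the ADAM rule.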