Abstract: In this paper, we propose an enhancement to the Beetle Antennae Search (BAS) algorithm, called BAS-ADAM, to smooth the convergence behavior and avoid trapping in local minima for highly non-convex objective functions. We achieve this by adaptively adjusting the step size in each iteration using the adaptive moment estimation (ADAM) update rule. The proposed algorithm also increases the convergence rate in narrow valleys. A key feature of the ADAM update rule is its ability to adjust the step size for each dimension separately instead of using a single step size for all dimensions. Since ADAM is traditionally used with gradient-based optimization algorithms, we first propose a gradient estimation model that does not require differentiating the objective function. As a result, the algorithm demonstrates excellent performance and a fast convergence rate when searching for the optimum of non-convex functions. The efficiency of the proposed algorithm was tested on three benchmark problems, including the training of a high-dimensional neural network. Its performance is compared with the particle swarm optimizer (PSO) and the original BAS algorithm.
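To make the two ingredients described in the abstract concrete, the following is a minimal sketch of the idea: a two-point antennae sample supplies a gradient estimate without differentiating the objective, and ADAM moment estimates turn that estimate into a per-dimension step size. This is an illustrative reconstruction, not the authors' implementation; the function name `bas_adam`, the antennae length `d`, and the hyperparameter defaults are assumptions.

```python
import numpy as np

def bas_adam(f, x0, step=0.1, d=0.5, beta1=0.9, beta2=0.999,
             eps=1e-8, iters=200, rng=None):
    """Gradient-free BAS-style search with an ADAM-like update (sketch).

    f: objective R^n -> R, evaluated only (never differentiated)
    d: antennae length for the two-point directional estimate
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)   # first-moment (mean) estimate
    v = np.zeros_like(x)   # second-moment (variance) estimate
    for t in range(1, iters + 1):
        # Random unit vector models the beetle's antennae orientation.
        b = rng.standard_normal(x.shape)
        b /= np.linalg.norm(b) + eps
        # Left/right antenna evaluations give a directional-derivative
        # estimate along b, without differentiating f.
        g = (f(x + d * b) - f(x - d * b)) / (2 * d) * b
        # ADAM moment updates: each coordinate gets its own effective
        # step size via the per-dimension second-moment normalization.
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)   # bias correction
        v_hat = v / (1 - beta2**t)
        x = x - step * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Example: search a simple non-convex test function (Rastrigin).
if __name__ == "__main__":
    rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
    print(bas_adam(rastrigin, x0=np.array([2.3, -1.7]), iters=500, rng=0))
```

The per-dimension normalization by `sqrt(v_hat)` is what the abstract refers to as adjusting the step size for each dimension separately: coordinates with consistently large estimated gradients take smaller relative steps, which helps progress along narrow valleys.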