Abstract: In the contemporary era, the proliferation of information technology has led to an unprecedented surge in data generation, with this data dispersed across a multitude of mobile devices. Given this situation, and the substantial computing power required to train deep learning models, distributed algorithms capable of multi-party joint modeling have attracted wide attention. Distributed training relieves the heavy computational and communication pressure that a centralized model places on a single machine. However, most current distributed algorithms work in a master-slave mode, typically relying on a central server for coordination, which can cause communication bottlenecks, data leakage, privacy violations, and other issues. To solve these problems, a decentralized, fully distributed algorithm based on deep neural networks with random weights is proposed. The algorithm decomposes the original objective function into several subproblems under consistency constraints, combines decentralized average consensus (DAC) with the alternating direction method of multipliers (ADMM), and achieves joint modeling and training through local computation and communication at each node. Finally, we compare the proposed decentralized algorithm with several centralized deep neural networks with random weights, and the experimental results demonstrate its effectiveness.
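To make the DAC-ADMM combination concrete, here is a minimal sketch of consensus ADMM for the output weights of a random-weight (random-feature) network on a small ring of nodes. Everything here is an illustrative assumption rather than the paper's setup: the mixing matrix W, the penalty rho, the synthetic data shards, and the number of consensus steps are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d_in, d_hid, rho = 4, 5, 20, 1.0

# Shared random hidden-layer weights: fixed at initialization, never trained.
A = rng.normal(size=(d_in, d_hid))
b = rng.normal(size=d_hid)

# Synthetic local data shards, one per node (stand-ins for device data).
Xs = [rng.normal(size=(50, d_in)) for _ in range(n_nodes)]
ys = [np.sin(X.sum(axis=1)) for X in Xs]
Hs = [np.tanh(X @ A + b) for X in Xs]          # random-feature maps

# Doubly stochastic mixing matrix for a 4-node ring: each node only
# exchanges values with its two neighbors -- there is no central server.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def dac(values, steps=20):
    """Decentralized average consensus: repeated neighbor mixing drives
    every node's local copy toward the network-wide average."""
    V = np.stack(values)
    for _ in range(steps):
        V = W @ V
    return list(V)

betas = [np.zeros(d_hid) for _ in range(n_nodes)]  # local output weights
us = [np.zeros(d_hid) for _ in range(n_nodes)]     # scaled dual variables
zs = [np.zeros(d_hid) for _ in range(n_nodes)]     # local consensus copies

for _ in range(30):                                # ADMM iterations
    # Local step: each node solves its ridge-regularized subproblem in
    # closed form, touching only its own data shard.
    betas = [np.linalg.solve(H.T @ H + rho * np.eye(d_hid),
                             H.T @ y + rho * (z - u))
             for H, y, z, u in zip(Hs, ys, zs, us)]
    # Consensus step: DAC replaces the central averaging server.
    zs = dac([beta + u for beta, u in zip(betas, us)])
    # Dual step: purely local multiplier updates.
    us = [u + beta - z for u, beta, z in zip(us, betas, zs)]
```

After the loop, each node holds nearly identical output weights, so a prediction such as np.tanh(X @ A + b) @ betas[k] is consistent across nodes even though no node ever shared its raw data.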
Abstract: In this paper, a new superlinearly convergent algorithm for nonlinearly constrained optimization problems is presented. The search directions are computed directly from a few explicit formulas, so neither quadratic programming subproblems nor systems of linear equations need to be solved. Under mild assumptions, the new algorithm is shown to possess global and superlinear convergence.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 10231060) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20040319003).
Abstract: In this paper, an unconstrained optimization method using a nonmonotone second-order Goldstein line search is proposed. By using negative curvature information from the Hessian, the generated sequence is shown to converge to a stationary point satisfying the second-order optimality conditions. Numerical tests on a set of standard test problems confirm the efficiency of the new method.
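The sketch below conveys the flavor of a nonmonotone line search that exploits negative curvature. The acceptance test (a quadratic forcing term against the maximum of recent function values) and the curvature-direction formula are plausible textbook choices, not a reproduction of the paper's exact Goldstein conditions.

```python
import numpy as np

def nonmonotone_curvature_search(f, grad, hess, x, memory=5, c=0.25,
                                 max_iter=200, tol=1e-8):
    """Minimize f with a nonmonotone, Goldstein-flavored line search
    that mixes steepest descent with a negative-curvature direction."""
    hist = [f(x)]                               # recent function values
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        lam, V = np.linalg.eigh(H)              # eigenvalues ascending
        if np.linalg.norm(g) < tol and lam[0] > -tol:
            break                               # second-order stationary
        d = -g
        if lam[0] < 0:                          # exploit negative curvature
            v = V[:, 0]
            if g @ v > 0:                       # orient v downhill
                v = -v
            d = d + abs(lam[0]) * v
        # Second-order decrease model used in the acceptance test
        # (strictly negative whenever the loop did not break above).
        model = 0.5 * min(g @ d, d @ H @ d)
        f_ref = max(hist[-memory:])             # nonmonotone reference
        t = 1.0
        while f(x + t * d) > f_ref + c * t**2 * model and t > 1e-12:
            t *= 0.5                            # backtrack
        x = x + t * d
        hist.append(f(x))
    return x
```

For instance, on f(x) = x[0]**4 - x[0]**2 + x[1]**2 started at the origin, the gradient vanishes but the Hessian has a negative eigenvalue, so the curvature direction lets the iteration escape the saddle where a pure gradient step would stall.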
Funding: The authors would like to thank the editor and the reviewers for helpful comments that greatly improved the paper. This work was supported by the National Natural Science Foundation of China (Grant Nos. 61373111, 61272279, 61003199 and 61203303), the Fundamental Research Funds for the Central Universities (K50511020014, K5051302084, K50510020011, K5051302049 and K5051302023), the Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project) (B07048), and the Program for New Century Excellent Talents in University (NCET-12-0920).
Abstract: In this paper, a new preference-based multi-objective optimization algorithm, an immune clone algorithm based on the reference direction method (RD-ICA), is proposed for solving many-objective optimization problems. First, an intelligent recombination operator, which performs well on functions with many parameters, is introduced into the immune clone algorithm to explore potentially excellent gene segments of all individuals in the antibody population. Second, a reference direction method, a strict ranking based on the preferences of decision makers (DMs), is used to guide selection and cloning of the active population. Then a light beam search (LBS) is borrowed to pick out a small set of individuals to fill the external population. The proposed method has been extensively compared with other recently proposed evolutionary multi-objective optimization (EMO) approaches on DTLZ problems with 4 to 100 objectives. Experimental results indicate that RD-ICA achieves competitive results.
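As an illustration of the preference-guided selection step, the sketch below ranks a population of minimization objective vectors with a weighted-Chebyshev-style augmented achievement scalarizing function along a DM-supplied reference direction, then clones proportionally to rank. The scalarizing function and the clone schedule are standard illustrative choices; the paper's exact RD-ICA operators are not reproduced here.

```python
import numpy as np

def reference_direction_rank(F, ref_point, direction, eps=1e-12):
    """Rank objective vectors F (n, m), assumed to be minimized, by an
    augmented achievement scalarizing function along the DM's direction."""
    w = np.abs(direction) / (np.abs(direction).sum() + eps)
    dev = (F - ref_point) * w                    # weighted deviations
    asf = dev.max(axis=1) + 1e-4 * dev.sum(axis=1)
    return np.argsort(asf)                       # best (smallest ASF) first

def clone_by_rank(pop, order, n_clones, rng):
    """Immune-clone-style proportional cloning: better-ranked antibodies
    receive more copies in the clone pool."""
    weights = 1.0 / (np.arange(len(order)) + 1.0)
    probs = weights / weights.sum()
    picked = rng.choice(order, size=n_clones, p=probs)
    return pop[picked]

rng = np.random.default_rng(0)
pop = rng.random((100, 8))                       # decision vectors
F = rng.random((100, 4))                         # 4 objectives per individual
order = reference_direction_rank(F, ref_point=np.zeros(4),
                                 direction=np.ones(4))
clones = clone_by_rank(pop, order, n_clones=30, rng=rng)
```

Steering selection by a scalarizing function rather than by Pareto dominance is what keeps such preference-based methods usable when the objective count grows toward 100, where dominance alone no longer discriminates between individuals.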