Funding: This project was supported by the State Science & Technology Pursuing Project (2001BA204B01) of China and the Foundation for University Key Teachers of the Ministry of Education of China.
Abstract: The distribution of the sampling data affects the completeness of the rule base, making it very difficult to extrapolate missing rules. Based on data mining, a self-learning method is developed for identifying a fuzzy model and extrapolating missing rules by means of a confidence measure and an improved gradient descent method. The proposed approach can not only identify the fuzzy model, update its parameters, and determine the optimal output fuzzy sets simultaneously, but also resolve the uncontrollability caused by regions that the data do not cover. Simulation results on the classical truck backer-upper control problem verify the effectiveness and accuracy of the proposed approach.
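The paper couples gradient descent with data-mining confidence measures to patch regions the data do not cover; the sketch below illustrates only the gradient-descent ingredient, tuning Gaussian membership parameters of a toy zero-order Takagi–Sugeno fuzzy model. The model form, rule count, and all names here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Toy zero-order Takagi-Sugeno model: y(x) = sum_i w_i(x) c_i / sum_i w_i(x),
# with Gaussian memberships w_i(x) = exp(-(x - m_i)^2 / (2 s_i^2)).
def predict(x, m, s, c):
    s = np.abs(s) + 1e-3                                # keep widths positive
    w = np.exp(-(x[:, None] - m) ** 2 / (2 * s ** 2))   # shape (N, rules)
    return (w * c).sum(axis=1) / w.sum(axis=1)

def mse(params, x, y, n):
    m, s, c = params[:n], params[n:2*n], params[2*n:]
    return np.mean((predict(x, m, s, c) - y) ** 2)

# Sampled data from an unknown target function.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x)

n = 5
params = np.concatenate([np.linspace(-3, 3, n),   # rule centers m_i
                         np.full(n, 1.0),         # rule widths  s_i
                         np.zeros(n)])            # rule consequents c_i

lr, eps = 0.05, 1e-6
for _ in range(2000):
    # Finite-difference gradient for brevity; an analytic one is used in practice.
    grad = np.array([(mse(params + eps * np.eye(3*n)[j], x, y, n)
                      - mse(params, x, y, n)) / eps for j in range(3*n)])
    params -= lr * grad

print("final MSE:", mse(params, x, y, n))
```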
Funding: Open Foundation of the State Key Lab of Transmission of Wide-Band Fiber Technologies of Communication Systems.
Abstract: A new algorithm for adapting the learning rate of the gradient descent method is presented, based on a second-order Taylor expansion of the error energy function with respect to the learning rate, evaluated at values chosen by an "award-punish" strategy. A detailed derivation of the algorithm as applied to RBF networks is given. Simulation studies show that the algorithm increases the rate of convergence and improves the performance of the gradient descent method.
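The core idea can be stated compactly: along the current descent direction d, expand the error phi(eta) = E(w + eta d) to second order, phi(eta) ≈ phi(0) + phi'(0) eta + (1/2) phi''(0) eta^2, which is minimized at eta* = -phi'(0)/phi''(0) when phi''(0) > 0. A minimal sketch, with the derivatives estimated by finite differences and the paper's "award-punish" scheduling omitted:

```python
import numpy as np

# Taylor-expansion learning-rate rule (illustrative sketch, not the paper's
# exact scheme): pick eta* = -phi'(0) / phi''(0) along the direction d.
def taylor_step(E, w, d, h=1e-4, fallback=1e-2):
    phi0, phip, phim = E(w), E(w + h * d), E(w - h * d)
    d1 = (phip - phim) / (2 * h)          # phi'(0), central difference
    d2 = (phip - 2 * phi0 + phim) / h**2  # phi''(0)
    return -d1 / d2 if d2 > 0 else fallback

# Toy quadratic error energy to exercise the rule.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
E = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w

w = np.array([2.0, -1.5])
for _ in range(20):
    d = -grad(w)
    w = w + taylor_step(E, w, d) * d
print("E(w) after 20 steps:", E(w))
```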
Funding: Partially supported by DOE grant DE-SC0022253; the work of JL was partially supported by NSF grants DMS-1719851 and DMS-2011148.
Abstract: In this work, we develop a stochastic gradient descent method for the computational optimal design of random rough surfaces in thin-film solar cells. We formulate the design problems as random PDE-constrained optimization problems and seek the optimal statistical parameters for the random surfaces. The optimizations at a fixed frequency as well as at multiple frequencies and multiple incident angles are investigated. To evaluate the gradient of the objective function, we derive the shape derivatives for the interfaces and apply the adjoint state method to perform the computation. The stochastic gradient descent method evaluates the gradient of the objective function at only a few samples per iteration, which reduces the computational cost significantly. Various numerical experiments illustrate the efficiency of the method and the significant increases in absorptance for the optimal random structures. We also examine the convergence of the stochastic gradient descent algorithm theoretically and prove that the numerical method is convergent under certain assumptions on the random interfaces.
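Setting the PDE machinery aside, the sampling idea the abstract describes can be sketched generically: to minimize an expected objective over a random parameter, estimate the gradient from a handful of random samples each iteration instead of the full expectation. The toy objective and all names below are assumptions for illustration, not the absorptance functional:

```python
import numpy as np

# Minimal SGD sketch for min_theta E_xi[ f(theta, xi) ], mirroring the idea of
# evaluating the gradient at only a few random samples per iteration.
rng = np.random.default_rng(1)

def grad_f(theta, xi):
    # Toy stand-in: f(theta, xi) = 0.5 * ||theta - xi||^2, so grad = theta - xi;
    # the minimizer of the expectation is E[xi].
    return theta - xi

theta = np.zeros(3)
batch = 4                                        # "a few samples" per iteration
for k in range(1, 501):
    xis = rng.normal(loc=1.0, size=(batch, 3))   # random samples of xi
    g = grad_f(theta, xis).mean(axis=0)          # sample-average gradient
    theta -= (0.5 / k) * g                       # diminishing step size
print("theta ~ E[xi] = [1,1,1]:", theta)
```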
Abstract: In this paper, we propose a gradient descent method to estimate the parameters of a Markov chain choice model. In particular, we derive a closed-form formula for the gradient of the log-likelihood function and show the convergence of the algorithm. Numerical experiments verify the efficiency of our approach by comparison with the expectation-maximization algorithm. We show that a similar result extends to the more general case in which no-purchase data are not observed.
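The paper's closed-form gradient is specific to the Markov chain choice model; as a generic illustration of the same estimate-by-gradient-ascent pattern, the sketch below fits softmax choice probabilities to observed choice counts. This is a stand-in model with an analytic log-likelihood gradient, not the authors' formula:

```python
import numpy as np

# Maximum likelihood by gradient ascent (illustrative pattern only).
# Softmax choice probabilities p_i = exp(u_i) / sum_j exp(u_j);
# log-likelihood L(u) = sum_i n_i log p_i for observed counts n_i.
counts = np.array([120, 60, 20])         # observed choices per alternative

def log_lik_grad(u, n):
    p = np.exp(u - u.max())
    p /= p.sum()
    # dL/du_i = n_i - N * p_i  (standard softmax likelihood gradient)
    return n - n.sum() * p

u = np.zeros(3)
for _ in range(500):
    u += 0.01 * log_lik_grad(u, counts)  # gradient *ascent* on L
    u -= u.mean()                        # fix the softmax shift invariance
print("fitted probabilities:", np.exp(u) / np.exp(u).sum())
print("empirical frequencies:", counts / counts.sum())
```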
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 41072176, 41371496), the National Science and Technology Supporting Program of China (Grant No. 2013BAK05B04), and the Fundamental Research Funds for the Central Universities (Grant No. 201261006).
Abstract: The gradient descent (GD) method is used to fit measured data (i.e., the laser grain-size distributions of sediments) with a sum of four weighted lognormal functions. The method is calibrated by a series of ideal numerical experiments. The numerical results indicate that the GD method is not only easy to operate but also effectively optimizes the parameters of the fitting function, with the error decreasing steadily. The method is applied to the numerical partitioning of laser grain-size components of a series of Garzê loess samples and three bottom sedimentary samples of submarine turbidity currents modeled in an open-channel laboratory flume. The overall fitting results are satisfactory. As a new approach to data fitting, the GD method could also be adapted to other optimization problems.
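A minimal sketch of the fitting setup follows, assuming a grid of grain sizes x and a measured frequency curve y; synthetic data, a finite-difference gradient, and normalized steps stand in for the paper's actual data and implementation details:

```python
import numpy as np

# Fit a grain-size curve y(x) with a sum of four weighted lognormals by
# plain gradient descent. Data here are synthetic for self-containment.
def lognormal(x, mu, sig):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sig ** 2)) / (x * sig * np.sqrt(2 * np.pi))

def model(x, p):                        # p = [w1..w4, mu1..mu4, sig1..sig4]
    w, mu, sig = p[:4], p[4:8], np.abs(p[8:]) + 1e-3
    return sum(w[k] * lognormal(x, mu[k], sig[k]) for k in range(4))

def sse(p, x, y):                       # sum of squared errors
    return np.sum((model(x, p) - y) ** 2)

x = np.logspace(-1, 3, 200)             # grain sizes (e.g., microns)
p_true = np.array([0.2, 0.3, 0.4, 0.1,  # component weights
                   0.0, 1.5, 3.0, 4.5,  # log-means
                   0.4, 0.5, 0.4, 0.3]) # log-standard deviations
y = model(x, p_true)                    # synthetic "measured" distribution

p = np.array([0.25]*4 + [0.5, 1.5, 2.5, 4.0] + [0.5]*4)
lr, eps = 0.01, 1e-6
for _ in range(5000):
    g = np.array([(sse(p + eps * np.eye(12)[j], x, y) - sse(p, x, y)) / eps
                  for j in range(12)])  # finite-difference gradient
    p -= lr * g / (np.linalg.norm(g) + 1e-12)   # normalized step: error falls steadily
print("final SSE:", sse(p, x, y))
```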
Funding: Supported by the Research Council of Semnan University.
Abstract: A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak, Ribière, and Polyak is suggested. Based on an eigenvalue analysis, it is shown that the search directions of the proposed method satisfy the sufficient descent condition, independently of the line search and of the convexity of the objective function. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
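For orientation, the classical PRP direction update underlying the hybridization is d_0 = -g_0 and d_k = -g_k + beta_k d_{k-1} with beta_k^{PRP} = g_k^T(g_k - g_{k-1}) / ||g_{k-1}||^2. A baseline sketch with an Armijo backtracking line search follows; the PRP+ clipping and steepest-descent safeguard are common practice, not the paper's hybrid rule:

```python
import numpy as np

# Baseline Polak-Ribiere-Polyak conjugate gradient with Armijo backtracking.
def prp_cg(f, grad, x, iters=500, c=1e-4):
    g = grad(x)
    d = -g
    for _ in range(iters):
        t = 1.0                     # Armijo backtracking along d
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+, clipped at 0
        d = -g_new + beta * d
        if g_new @ d >= 0:          # safeguard: fall back to steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Example: Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(prp_cg(f, grad, np.array([-1.2, 1.0])))
```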
Abstract: In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. We use a steplength technique that ensures the Zoutendijk condition holds, and the method is proved to be globally convergent. Finally, we improve the method and carry out further analysis.
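For reference, the two conditions invoked here are standard in the convergence analysis of conjugate gradient methods; in the usual notation, with g_k = ∇f(x_k) and search direction d_k:

```latex
\text{Sufficient descent: } g_k^\top d_k \le -c\,\|g_k\|^2 \quad (c>0,\ \forall k),
\qquad
\text{Zoutendijk: } \sum_{k\ge 0} \frac{(g_k^\top d_k)^2}{\|d_k\|^2} < \infty .
```

Together these force liminf_k ||g_k|| = 0, which is the global convergence conclusion.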
Funding: Supported by the Youth Project Foundation of Chongqing Three Gorges University (13QN17) and by the Fund of Scientific Research in Southeast University (the Support Project of Fundamental Research).
Abstract: Y. Liu and C. Storey (1992) proposed the well-known LS conjugate gradient method, which has good numerical performance. However, the LS method has very weak convergence under a Wolfe-type line search. In this paper, we give a new descent gradient method based on the LS method. It guarantees the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we present extensive preliminary numerical experiments showing the efficiency of the proposed method in comparison with the well-known PRP+ method.
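For context, the LS and PRP update parameters referenced here are commonly written as follows, with g_k = ∇f(x_k) and previous search direction d_{k-1}:

```latex
\beta_k^{LS} = \frac{g_k^\top y_{k-1}}{-d_{k-1}^\top g_{k-1}},
\qquad
\beta_k^{PRP} = \frac{g_k^\top y_{k-1}}{\|g_{k-1}\|^2},
\qquad
\beta_k^{PRP+} = \max\{\beta_k^{PRP},\,0\},
\quad\text{where } y_{k-1} = g_k - g_{k-1}.
```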
Abstract: In this paper, three new hybrid nonlinear conjugate gradient methods are presented, each producing a sufficient descent search direction at every iteration. This property is independent of the line search used and of the convexity of the objective function. Under suitable conditions, we prove that the proposed methods converge globally for general nonconvex functions. The numerical results show that all three new hybrid methods are efficient on the given test problems.
Funding: Projects (52022113, 52278546, 52108433) supported by the National Natural Science Foundation of China; Project (2023QYJC009) supported by the Central South University Research Program of Advanced Interdisciplinary Studies, China; Project (2023ZZTS0364) supported by the Fundamental Research Funds for the Central Universities, China.