The gender recognition problem has attracted the attention of the computer vision community due to its importance in many applications (e.g., surveillance and human–computer interaction [HCI]). Images of varying levels of illumination, occlusion, and other factors are captured in uncontrolled environments. Iris and facial recognition technology cannot be used on these images because iris texture is unclear in these instances, and faces may be covered by a scarf, hijab, or mask due to the COVID-19 pandemic. The periocular region is a reliable source of information because it features rich discriminative biometric features. However, most existing gender classification approaches have been designed based on hand-engineered features or validated in controlled environments. Motivated by the superior performance of deep learning, we propose a new method, PeriGender, inspired by the design principles of the ResNet and DenseNet models, that can classify gender using features from the periocular region. The proposed system utilizes a dense concept in a residual model. Through skip connections, it reuses features on different scales to strengthen discriminative features. Evaluations of the proposed system on challenging datasets indicated that it outperformed state-of-the-art methods. It achieved 87.37%, 94.90%, 94.14%, 99.14%, and 95.17% accuracy on the GROUPS, UFPR-Periocular, Ethnic-Ocular, IMP, and UBIPr datasets, respectively, in the open-world (OW) protocol. It further achieved 97.57% and 93.20% accuracy for adult periocular images from the GROUPS dataset in the closed-world (CW) and OW protocols, respectively. The results showed that the middle region between the eyes plays a crucial role in the recognition of masculine features, and feminine features can be identified through the eyebrows, upper eyelids, and corners of the eyes. Furthermore, using the whole region without cropping enhances PeriGender's learning capability, improving its understanding of both eyes' global structure without discontinuity.
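The abstract does not give the PeriGender architecture in detail. As a rough illustration only, the sketch below combines DenseNet-style feature concatenation with a ResNet-style additive shortcut inside a single block, assuming PyTorch is available; the layer counts, channel widths, and input size are assumptions, not the paper's design.

    # Illustrative sketch only: a generic block mixing DenseNet-style concatenation
    # with a ResNet-style additive shortcut, in the spirit the abstract describes.
    # The real PeriGender architecture is not specified here.
    import torch
    import torch.nn as nn

    class DenseResidualBlock(nn.Module):
        def __init__(self, channels, growth=32):
            super().__init__()
            # two conv layers whose outputs are concatenated (dense feature reuse)
            self.conv1 = nn.Conv2d(channels, growth, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(channels + growth, growth, kernel_size=3, padding=1)
            # 1x1 conv projects the concatenated features back to `channels`
            # so the additive residual shortcut is well defined
            self.project = nn.Conv2d(channels + 2 * growth, channels, kernel_size=1)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            f1 = self.relu(self.conv1(x))
            f2 = self.relu(self.conv2(torch.cat([x, f1], dim=1)))
            fused = self.project(torch.cat([x, f1, f2], dim=1))
            return self.relu(fused + x)   # residual (ResNet-style) skip connection

    # usage: a 2-way gender classifier head on top of stacked blocks (assumed sizes)
    model = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        DenseResidualBlock(64), DenseResidualBlock(64),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2),
    )
    scores = model(torch.randn(1, 3, 128, 128))   # hypothetical periocular crop size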
In this paper, we combine nonmonotone and adaptive techniques with the trust region method for unconstrained minimization problems. We define a new ratio of actual descent to predicted descent. Then, instead of a monotone sequence, a nonmonotone sequence of function values is employed. With the adaptive technique, the trust region radius Δk can be adjusted automatically to improve the efficiency of trust region methods. By means of the Bunch–Parlett factorization, we construct a method with an indefinite dogleg path for solving the trust region subproblem, which can handle an indefinite approximate Hessian Bk. The convergence properties of the algorithm are established. Finally, detailed numerical results are reported to show that our algorithm is efficient.
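As context for the nonmonotone ratio described above, the sketch below (Python) shows a common textbook-style construction in which the actual reduction is measured against the maximum of the last few function values. The paper's precise ratio, adaptive radius rule, and constants are not reproduced here; everything below is an assumption for illustration.

    # Illustrative sketch of a nonmonotone trust-region acceptance test.
    # f_ref = max of the last M function values is a common nonmonotone
    # reference value (an assumption, not necessarily the paper's choice).
    def nonmonotone_ratio(f_hist, f_trial, pred_reduction, M=5):
        """Ratio of 'nonmonotone actual descent' to predicted descent."""
        f_ref = max(f_hist[-M:])              # reference value over recent iterates
        actual = f_ref - f_trial              # nonmonotone actual reduction
        return actual / pred_reduction        # pred_reduction = m_k(0) - m_k(d_k) > 0

    def accept_and_update_radius(rho, radius, eta1=0.25, eta2=0.75):
        """Standard trust-region bookkeeping driven by the ratio rho."""
        accept = rho > eta1
        if rho < eta1:
            radius *= 0.5                     # shrink on poor model agreement
        elif rho > eta2:
            radius *= 2.0                     # expand on very good agreement
        return accept, radius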
In this paper, the non-quasi-Newton family with inexact line search applied to unconstrained optimization problems is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm with either Wolfe-type or Armijo-type line search converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
Structure learning of Bayesian networks is a well-researched but computationally hard task. For learning Bayesian networks, this paper proposes an improved algorithm based on unconstrained optimization and ant colony optimization (U-ACO-B) to overcome the drawbacks of ant colony optimization (ACO-B). In this algorithm, an unconstrained optimization problem is first solved to obtain an undirected skeleton, and then the ACO algorithm is used to orient the edges, thus returning the final structure. In the experimental part of the paper, we compare the performance of the proposed algorithm with the ACO-B algorithm. The experimental results show that our method is effective and converges considerably faster than the ACO-B algorithm.
In this paper a hybrid algorithm which combines the pattern search method and the genetic algorithm for unconstrained optimization is presented. The algorithm is a deterministic pattern search algorithm, but in the search step of the pattern search algorithm, the trial points are produced in a manner similar to the genetic algorithm. At each iteration, a finite set of points can be generated by replication, crossover, and mutation. In theory, the algorithm is globally convergent. Most notably, the numerical results show that it can find the global minimizer for some problems on which other pattern search algorithms fail.
A filled function with adjustable parameters is suggested in this paper for finding a global minimum point of a general class of nonlinear programming problems with a bounded and closed domain. This function has two adjustable parameters. We discuss the properties of the proposed filled function. Conditions on this function and on the values of the parameters are given so that the constructed function has the desired properties of a traditional filled function.
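As background only (not the specific function proposed in this paper), a classical filled function with two adjustable parameters $r$ and $\rho$, due to Ge, is

$$P(x; r, \rho) = \frac{1}{r + f(x)}\,\exp\!\left(-\frac{\|x - x_1^*\|^2}{\rho^2}\right),$$

where $x_1^*$ is the current local minimizer of $f$ and $r$ is chosen so that $r + f(x) > 0$ on the domain. The function introduced in the paper has the same two-parameter flavor but is not necessarily of this form.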
In this paper, an efficient conjugate gradient method is given to solve general unconstrained optimization problems, which can guarantee the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation.
Many methods have been put forward to solve unconstrained optimization problems, among which the conjugate gradient (CG) method is very important. With the increasing emergence of large-scale problems, subspace technology has become particularly important and widely used in the field of optimization. In this study, a new CG method was put forward, which combines subspace technology with a cubic regularization model. In addition, a special scaled norm in the cubic regularization model was analyzed. Under certain conditions, some significant characteristics of the search direction were given and the convergence of the algorithm was established. Numerical comparisons on the 145 test functions from the CUTEr library show that the proposed method performs better than two classical CG methods and two new subspace conjugate gradient methods.
It is well known that line search methods play a very important role in optimization problems. In this paper a new line search method is proposed for solving unconstrained optimization. Under weak conditions, this method possesses global convergence and R-linear convergence for nonconvex and convex functions, respectively. Moreover, the given search direction has the sufficient descent property and belongs to a trust region without carrying out any line search rule. Numerical results show that the new method is effective.
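For context, the standard inexact line search conditions that such methods build on are the Armijo (sufficient decrease) and Wolfe conditions: with $0 < c_1 < c_2 < 1$ and a descent direction $d_k$ (i.e., $g_k^T d_k < 0$),

$$f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k\, g_k^T d_k, \qquad g(x_k + \alpha_k d_k)^T d_k \ge c_2\, g_k^T d_k.$$

The specific new rule proposed in this paper is not stated in the abstract and is not reproduced here.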
A Class of Collinear Scaling Algorithms for Unconstrained Optimization. An appealing approach to the solution of nonlinear optimization problems based on conic models of the objective function has been introduced by Davidon (1980). It leads to a broad class of algorithms which can be considered to generalize the existing quasi-Newton methods. One particular member of this class has been discussed in depth by Sorensen (1980), who proved some interesting theoretical properties. In this paper, we generalize Sorensen's technique to the Spedicato three-parameter family of variable-metric updates. Furthermore, we point out that the collinear scaling three-parameter family is essentially equivalent to the Spedicato three-parameter family. In addition, numerical experiments have been carried out to compare some collinear scaling algorithms with a straightforward implementation of the BFGS quasi-Newton method.
Two new formulas for the main parameter βk of the conjugate gradient method are presented, which can be seen respectively as modifications of the HS and PRP methods. In comparison with classic conjugate gradient methods, the new methods use both available gradient and function value information. Furthermore, their modifications are proposed. These methods are shown to be globally convergent under some assumptions. Numerical results are also reported.
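For reference, the classical HS and PRP formulas that the two new formulas modify are shown below in a minimal nonlinear CG sketch (Python). The new function-value-based formulas themselves are not reproduced here, and the steplength is assumed to be supplied by an external line search.

    # Background sketch: classical beta_k choices in nonlinear CG.
    import numpy as np

    def beta_hs(g_new, g_old, d_old):
        # Hestenes-Stiefel: beta = g_{k+1}^T y_k / (d_k^T y_k), y_k = g_{k+1} - g_k
        y = g_new - g_old
        return float(g_new @ y) / float(d_old @ y)

    def beta_prp(g_new, g_old):
        # Polak-Ribiere-Polyak: beta = g_{k+1}^T y_k / ||g_k||^2
        y = g_new - g_old
        return float(g_new @ y) / float(g_old @ g_old)

    def cg_step_prp(x, d, g_old, grad, alpha):
        # One CG step with the PRP rule; alpha is assumed to come from an
        # external (e.g., Wolfe-type) line search.
        x_new = x + alpha * d
        g_new = grad(x_new)
        d_new = -g_new + beta_prp(g_new, g_old) * d
        return x_new, d_new, g_new

    # usage on f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b
    A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 1.0])
    grad = lambda x: A @ x - b
    x = np.zeros(2); g = grad(x); d = -g
    x, d, g = cg_step_prp(x, d, g, grad, alpha=0.2)   # fixed step just for illustration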
In this paper, a new conjugate gradient formula and its algorithm for solving unconstrained optimization problems are proposed. The given formula satisfies the descent condition. Under the Grippo–Lucidi line search, the global convergence property of the given method is discussed. The numerical results show that the new method is efficient for the given test problems.
In this paper, we propose a retrospective filter trust region algorithm for unconstrained optimization, which is based on the framework of the retrospective trust region method and combined with the technique of the multi-dimensional filter. The new algorithm gives a good estimate of the trust region radius and relaxes the condition for accepting a trial step used in the usual trust region methods. Under reasonable assumptions, we analyze the global convergence of the new method and report preliminary results of numerical tests. We compare the results with those of the basic trust region algorithm, the filter trust region algorithm, and the retrospective trust region algorithm, which shows the effectiveness of the new algorithm.
In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. We use a steplength technique which ensures that the Zoutendijk condition holds, and the method is proved to be globally convergent. Finally, we improve the method and carry out further analysis.
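For reference, the sufficient descent property and the Zoutendijk condition mentioned here are, in their standard textbook forms,

$$g_k^T d_k \le -c\,\|g_k\|^2 \ \text{ for some } c > 0, \qquad \sum_{k\ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty,$$

and global convergence ($\liminf_{k\to\infty} \|g_k\| = 0$) typically follows by combining the two. The particular steplength technique used in the paper to enforce the Zoutendijk condition is not reproduced here.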
In reference [1], a new ODE-based method was given for solving large-scale nonlinear equations. This paper is a continuation of that work. Here the system of equations has gradient structure, i.e., it is the gradient of a scalar function, so the eigenvalues of its Jacobian (the Hessian of the scalar function) are all real numbers; the new method is therefore very well suited to this structure. For quadratic functions the convergence is proved, and the spectral radius of the iteration matrix is given and compared with that of the traditional method. Examples show that for large-scale problems the new method is very efficient.
In this paper, we prove the global convergence of the Perry–Shanno memoryless quasi-Newton (PSMQN) method with a new inexact line search when applied to nonconvex unconstrained minimization problems. Preliminary numerical results show that the PSMQN method with the particular line search conditions is very promising.
We present an improved quasi-Newton method. Assuming that the objective function is twice continuously differentiable and uniformly convex, we discuss the global and superlinear convergence of the improved method.
In this paper, we propose and analyze a non-monotone trust region method with a non-monotone line search strategy for unconstrained optimization problems. Unlike the traditional non-monotone trust region method, our algorithm utilizes a non-monotone Wolfe line search to obtain the next point if a trial step is not accepted. Thus, it can reduce the number of sub-problems to be solved. Theoretical analysis shows that the newly proposed method has global convergence under some mild conditions.
In this paper, we propose new variants of Newton's method based on quadrature formulas and the power mean for solving nonlinear unconstrained optimization problems. It is proved that the order of convergence of the proposed family is three. Numerical comparisons are made to show the performance of the presented methods. Furthermore, numerical experiments demonstrate that the logarithmic mean Newton's method outperforms the classical Newton method and other variants of Newton's method. MSC: 65H05.
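As an illustration of this family's flavor, the sketch below applies the classical arithmetic-mean third-order Newton variant to the one-dimensional optimality condition f'(x) = 0. Treating it as a stand-in for the quadrature/power-mean variants of the paper is an assumption; the paper's exact formulas are not reproduced.

    # Illustrative sketch: a mean-based third-order Newton variant for a 1-D
    # problem min f(x), i.e., for solving f'(x) = 0. The logarithmic/power-mean
    # variants studied in the paper replace the arithmetic mean below (an
    # assumption about their general form, not the paper's exact formulas).
    def mean_newton_minimize(df, d2f, x, tol=1e-10, max_iter=50):
        for _ in range(max_iter):
            g = df(x)
            if abs(g) < tol:
                break
            z = x - g / d2f(x)                     # ordinary Newton predictor
            x = x - 2.0 * g / (d2f(x) + d2f(z))    # corrector: arithmetic mean of curvatures
        return x

    # usage on f(x) = x**4 + x**2 - 3*x, with df = 4x^3 + 2x - 3 and d2f = 12x^2 + 2
    x_star = mean_newton_minimize(lambda x: 4*x**3 + 2*x - 3,
                                  lambda x: 12*x**2 + 2, x=1.0)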
In this paper, we extend a descent algorithm without line search for solving unconstrained optimization problems. Under mild conditions, its global convergence is established. Further, we generalize the search direction to a more general form and also obtain the global convergence of the corresponding algorithm. The numerical results illustrate that the new algorithm is effective.
Funding (gender recognition / PeriGender paper): The authors are thankful to the Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia, for funding this work through Research Group No. RGP-1439-067.
Funding (nonmonotone adaptive trust region paper): Supported by the NNSF of China (10231060 and 10501024), the Specialized Research Fund for the Doctoral Program of Higher Education of China (20040319003), the Natural Science Grant of Jiangsu Province of China (BK2006214), and the Foundation of Nanjing Xiaozhuang College (2004NXY20).
Funding (Bayesian network structure learning paper): Supported by the National Natural Science Foundation of China (60974082, 11171094), the Fundamental Research Funds for the Central Universities (K50510700004), the Foundation and Advanced Technology Research Program of Henan Province (102300410264), and the Basic Research Program of the Education Department of Henan Province (2010A110010).
Funding (filled function paper): Supported by the National Science Foundation of China (10171118) and the Science Foundation of the University of Science and Technology of Henan (2003ZY06).
Funding (efficient conjugate gradient method paper): Supported by the Fund of the Chongqing Education Committee (KJ091104).
Funding (subspace cubic-regularization conjugate gradient paper): Sponsored by the National Natural Science Foundation of China (Grant No. 11901561).
Funding (collinear scaling algorithms paper): Supported by the NNSF of China and the NSF of Jiangsu Province.
Funding (modified HS/PRP conjugate gradient paper): Supported by the Teaching and Research Award Program for Outstanding Young Teachers in Higher Education Institutions of the Ministry of Education, the Natural Science Foundation of the Inner Mongolia Autonomous Region (2010BS0108), and SPH-IMU (Z20090135).