Abstract: In this paper, we explore bound-preserving and high-order accurate local discontinuous Galerkin (LDG) schemes for a class of chemotaxis models, including the classical Keller-Segel (KS) model and two other density-dependent problems. We use the convex splitting method, the variant energy quadratization method, and the scalar auxiliary variable method, coupled with the LDG method, to construct schemes that are first-order accurate in time, based on the gradient flow structure of the models. These semi-implicit schemes are decoupled, energy stable, and can be extended to high-accuracy schemes using the semi-implicit spectral deferred correction method. Many bound-preserving DG discretizations work only with explicit time integration methods and have difficulty attaining high-order accuracy. To overcome these difficulties, we use Lagrange multipliers to enforce the bound constraints in the implicit or semi-implicit LDG schemes at each time step. This bound-preserving limiter leads to a Karush-Kuhn-Tucker (KKT) system, which can be solved by an efficient active-set semi-smooth Newton method. Various numerical experiments illustrate the high-order accuracy and the effect of bound preservation.
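As a rough illustration of the limiter idea only (not the authors' scheme), the sketch below projects a vector of hypothetical DG degrees of freedom back into prescribed bounds while conserving the cell mean; the Lagrange multiplier of the mean constraint is the one unknown in the resulting KKT condition, and here it is found by simple bisection rather than a semi-smooth Newton method. All names and values are made up.

import numpy as np

def bound_preserving_projection(u_tilde, lo=0.0, hi=1.0, tol=1e-12):
    """Closest point to u_tilde in {lo <= u <= hi, mean(u) = mean(u_tilde)}."""
    target = u_tilde.mean()
    assert lo <= target <= hi, "the mean must itself satisfy the bounds"

    def mean_after_shift(lam):          # KKT solution: u = clip(u_tilde + lam, lo, hi)
        return np.clip(u_tilde + lam, lo, hi).mean()

    a, b = lo - u_tilde.max(), hi - u_tilde.min()   # bracket for the multiplier
    while b - a > tol:
        lam = 0.5 * (a + b)
        if mean_after_shift(lam) < target:
            a = lam
        else:
            b = lam
    return np.clip(u_tilde + 0.5 * (a + b), lo, hi)

u_tilde = np.array([-0.1, 0.4, 1.3, 0.2])        # hypothetical unlimited DG cell values
u = bound_preserving_projection(u_tilde)
print(u, u.mean(), u_tilde.mean())               # bounds enforced, mean preserved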
Funding: Supported by the National Natural Science Foundation of China (70771080), the Special Fund for Basic Scientific Research of Central Colleges, China University of Geosciences (Wuhan) (CUG090113), and the Research Foundation for Outstanding Young Teachers, China University of Geosciences (Wuhan) (CUGQNW0801).
Abstract: A globally convergent algorithm is proposed to solve bilevel linear fractional-linear programming, a special class of bilevel programming. In our algorithm, by replacing the lower-level problem with the condition that its duality gap equal zero, the bilevel linear fractional-linear program is transformed into a traditional single-level programming problem, which can in turn be transformed into a series of linear fractional programming problems. The modified convex simplex method is then used to solve these linear fractional programming problems and obtain the globally convergent solution of the original bilevel linear fractional-linear program. Finally, an example demonstrates the feasibility of the proposed algorithm.
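The bilevel scheme itself (dual-gap reformulation plus modified convex simplex) is not reproduced here; as a hedged sketch of the kind of subproblem it generates, the code below solves a single linear fractional program, max (c^T x + alpha)/(d^T x + beta) subject to A x <= b, x >= 0, via the classical Charnes-Cooper transformation and scipy.optimize.linprog. The problem data are invented for illustration.

import numpy as np
from scipy.optimize import linprog

def linear_fractional(c, alpha, d, beta, A, b):
    m, n = A.shape
    # Variables z = (y, t) with y = t*x and t = 1/(d^T x + beta) > 0.
    c_lin = -np.concatenate([c, [alpha]])               # linprog minimizes
    A_ub = np.hstack([A, -b.reshape(-1, 1)])            # A y - b t <= 0
    b_ub = np.zeros(m)
    A_eq = np.concatenate([d, [beta]]).reshape(1, -1)   # d^T y + beta t = 1
    b_eq = np.array([1.0])
    res = linprog(c_lin, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    y, t = res.x[:n], res.x[n]
    return y / t, -res.fun                              # optimal x and objective value

A = np.array([[1.0, 1.0], [2.0, 1.0]]); b = np.array([4.0, 6.0])
c = np.array([3.0, 1.0]); d = np.array([1.0, 2.0])
x_opt, val = linear_fractional(c, 0.0, d, 1.0, A, b)
print(x_opt, val)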
Funding: The project was supported by the National Outstanding Youth Science Foundation of China (10425208), the National Natural Science Foundation of China, and the Institute of Engineering Physics of China (10376002). The English text was polished by Keren Wang.
Abstract: Two non-probabilistic, set-theoretical methods for determining the maximum and minimum impulsive responses of structures to uncertain-but-bounded impulses are presented. They are based, respectively, on the theories of interval mathematics and convex models. The uncertain-but-bounded impulses are assumed to lie in a convex set, a hyper-rectangle or an ellipsoid. For the two non-probabilistic methods, less prior information is required about the uncertain nature of the impulses than for a probabilistic model. Comparisons between the interval analysis method and the convex model, both developed as anti-optimization problems of finding the least favorable and the most favorable impulsive responses, are made through mathematical analyses and numerical calculations. The results of this study indicate that when the interval vector is determined from an ellipsoid containing the uncertain impulses, the width of the impulsive responses predicted by the interval analysis method is larger than that of the convex model; when the ellipsoid is determined from an interval vector containing the uncertain impulses, the width of the interval impulsive responses obtained by the interval analysis method is smaller than that of the convex model.
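The width ordering stated above can be checked directly in the special case where the peak response depends linearly on the uncertain impulse vector, r = r0 + c^T (p - p0); this linearity assumption and the numbers below are illustrative, not the paper's structural model. The interval box with half-widths e_i gives the half-width sum_i |c_i| e_i, the ellipsoid with semi-axes e_i (contained in that box) gives sqrt(sum_i c_i^2 e_i^2), and the ellipsoid circumscribing the box (semi-axes sqrt(n) e_i) gives sqrt(n * sum_i c_i^2 e_i^2).

import numpy as np

c = np.array([0.8, -0.5, 0.3])    # hypothetical linear sensitivity: r = r0 + c^T (p - p0)
e = np.array([0.2, 0.1, 0.05])    # half-widths of the uncertain impulse components
n = len(e)

interval_hw     = np.abs(c) @ e                      # box |p_i - p0_i| <= e_i
ellipsoid_in_hw = np.sqrt(np.sum(c**2 * e**2))       # ellipsoid with semi-axes e (inside the box)
ellipsoid_out_hw = np.sqrt(n * np.sum(c**2 * e**2))  # ellipsoid with semi-axes sqrt(n)*e (contains the box)

print(ellipsoid_in_hw, interval_hw, ellipsoid_out_hw)   # always ordered: in <= interval <= out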
Abstract: In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey conjugate gradient method and the Fletcher-Reeves conjugate gradient method. We also prove that the search direction of any hybrid conjugate gradient method that is a convex combination of two conjugate gradient methods satisfies the Dai-Liao (D-L) conjugacy condition and, at the same time, coincides with the Newton direction under a suitable condition. Furthermore, this property does not depend on any line search. We then prove that, modulo the value of the parameter t, the Newton direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used, and the global convergence of the new method is proved. Numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.
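A minimal sketch of such a hybrid iteration follows: beta is taken as a convex combination of the Liu-Storey and Fletcher-Reeves formulas, with a strong Wolfe line search supplied by scipy.optimize.line_search. The fixed mixing parameter t, the Wolfe constants, and the quadratic test problem are generic choices for illustration, not necessarily those analysed in the paper.

import numpy as np
from scipy.optimize import line_search

def hybrid_cg(f, grad, x0, t=0.5, tol=1e-6, max_iter=200):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # strong Wolfe line search (c1=1e-4, c2=0.1 are common CG settings)
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:               # fall back to a small fixed step
            alpha = 1e-3
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)
        beta_ls = (g_new @ (g_new - g)) / (-(d @ g))
        beta = (1.0 - t) * beta_ls + t * beta_fr     # convex combination of LS and FR
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# hypothetical test problem: a convex quadratic with known minimizer A^{-1} b
A = np.array([[3.0, 0.5], [0.5, 1.0]]); b = np.array([1.0, -2.0])
f    = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(hybrid_cg(f, grad, np.zeros(2)), np.linalg.solve(A, b))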
Abstract: An algorithm for solving a class of smooth convex programming problems is given. Using a smooth exact multiplier penalty function, the smooth convex program is reduced to the minimization of a strongly convex function on a compact set. The strongly convex function is then minimized on the given compact set by a Newton method.
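In the same spirit, but not using the paper's smooth exact multiplier penalty, the sketch below applies a generic multiplier (augmented Lagrangian) penalty to an equality-constrained smooth convex problem and minimizes each strongly convex subproblem by Newton's method; all problem data, step counts, and the penalty parameter are illustrative assumptions.

import numpy as np

def newton_minimize(grad, hess, x, iters=20):
    for _ in range(iters):
        x = x - np.linalg.solve(hess(x), grad(x))
    return x

def multiplier_penalty(f_grad, f_hess, A, b, x0, rho=10.0, outer=20):
    x, lam = x0.copy(), np.zeros(A.shape[0])
    for _ in range(outer):
        grad = lambda x: f_grad(x) + A.T @ (lam + rho * (A @ x - b))
        hess = lambda x: f_hess(x) + rho * A.T @ A      # strongly convex subproblem
        x = newton_minimize(grad, hess, x)
        lam = lam + rho * (A @ x - b)                   # multiplier update
    return x

# hypothetical problem:  min 0.5*||x||^2 + exp(x_0)   s.t.  x_0 + x_1 = 1
f_grad = lambda x: x + np.array([np.exp(x[0]), 0.0])
f_hess = lambda x: np.eye(2) + np.diag([np.exp(x[0]), 0.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
print(multiplier_penalty(f_grad, f_hess, A, b, np.zeros(2)))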
Abstract: In this paper, on the basis of the logarithmic barrier function and the KKT conditions, we propose a combined homotopy infeasible interior-point method (CHIIP) for convex nonlinear programming problems. For any convex nonlinear program, without requiring strict convexity of the logarithmic barrier function, the CHIIP method yields different solutions of the convex program in different cases.
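The combined homotopy itself is not reproduced here; as a hedged sketch of the plain logarithmic-barrier path-following idea it builds on, the code below applies damped Newton steps to t*f(x) - sum_i log(-g_i(x)) for an increasing sequence of t. The problem data, barrier schedule, and iteration counts are illustrative assumptions.

import numpy as np

def barrier_method(f_grad, f_hess, g, g_grad, g_hess, x0, t=1.0, mu=10.0, outer=8):
    x = x0.copy()
    for _ in range(outer):
        for _ in range(30):                              # Newton on the barrier problem
            gx = g(x)
            grad = t * f_grad(x) - sum(gg(x) / gi for gg, gi in zip(g_grad, gx))
            hess = t * f_hess(x)
            for gg, gh, gi in zip(g_grad, g_hess, gx):
                hess = hess + np.outer(gg(x), gg(x)) / gi**2 - gh(x) / gi
            step = np.linalg.solve(hess, grad)
            alpha = 1.0
            while np.any(g(x - alpha * step) >= 0):      # stay strictly feasible
                alpha *= 0.5
            x = x - alpha * step
        t *= mu                                          # tighten the barrier
    return x

# hypothetical problem: min (x0-2)^2 + (x1-2)^2  s.t.  x0 + x1 <= 1, x0 >= 0, x1 >= 0
f_grad = lambda x: 2 * (x - 2)
f_hess = lambda x: 2 * np.eye(2)
g      = lambda x: np.array([x[0] + x[1] - 1, -x[0], -x[1]])
g_grad = [lambda x: np.array([1.0, 1.0]), lambda x: np.array([-1.0, 0.0]), lambda x: np.array([0.0, -1.0])]
g_hess = [lambda x: np.zeros((2, 2))] * 3
print(barrier_method(f_grad, f_hess, g, g_grad, g_hess, np.array([0.25, 0.25])))   # approx (0.5, 0.5)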
Funding: Sponsored by the National Natural Science Foundation of China (Grant No. 11461021) and the Natural Science Basic Research Plan in Shaanxi Province of China (Grant No. 2017JM1014).
Abstract: Two new versions of accelerated first-order methods for minimizing convex composite functions are proposed. In this paper, we first present an accelerated first-order method that resets the step size 1/L_k to 1/L_0 at the beginning of each iteration and preserves the computational simplicity of the fast iterative shrinkage-thresholding algorithm. This first algorithm is non-monotone. To avoid such behavior, we present a second, monotone accelerated first-order method. The two proposed accelerated first-order methods are proved to have an improved convergence rate for minimizing convex composite functions. Numerical results demonstrate the efficiency of both methods.
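A hedged sketch of a FISTA-style method for the composite problem min f(x) + lam*||x||_1 is given below; the backtracking search for the step size 1/L is restarted from 1/L_0 at every iteration, in the spirit of the first algorithm described above (a monotone variant would additionally accept x_{k+1} only if the objective decreases). This is an illustration, not the paper's exact algorithm; the LASSO data are synthetic.

import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista_reset(A, b, lam, L0=1.0, eta=2.0, iters=200):
    n = A.shape[1]
    f      = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad_f = lambda x: A.T @ (A @ x - b)
    x = y = np.zeros(n)
    t = 1.0
    for _ in range(iters):
        L = L0                                    # restart the step-size search from L0
        g = grad_f(y)
        while True:                               # backtracking on the quadratic model
            z = soft_threshold(y - g / L, lam / L)
            if f(z) <= f(y) + g @ (z - y) + 0.5 * L * np.sum((z - y) ** 2):
                break
            L *= eta
        x_new = z
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)); x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
print(np.round(fista_reset(A, b, lam=0.1)[:8], 3))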
Abstract: In this paper, we present a regularized Newton method with correction (M-RNM) for minimizing a convex function whose Hessian matrices may be singular. At every iteration, not only an RNM step but also two correction steps are computed. We show that if the objective function is LC^2, then the method is globally convergent. Numerical results show that the new algorithm performs very well.
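A hedged sketch of the basic regularized Newton step is shown below: solve (H(x) + mu*I) d = -grad(x) with mu proportional to ||grad(x)||, so the regularization vanishes at a solution. The two correction steps of the M-RNM are not reproduced, and the test function (convex with a singular Hessian at the minimizer) is illustrative only.

import numpy as np

def regularized_newton(grad, hess, x0, c=1.0, tol=1e-10, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        mu = c * np.linalg.norm(g)                  # regularization tied to the gradient norm
        d = np.linalg.solve(hess(x) + mu * np.eye(len(x)), -g)
        x = x + d
    return x

# f(x) = (x0 + x1)^4 + x0^2 is convex with a singular Hessian at its minimizer (0, 0)
grad = lambda x: np.array([4 * (x[0] + x[1]) ** 3 + 2 * x[0], 4 * (x[0] + x[1]) ** 3])
hess = lambda x: 12 * (x[0] + x[1]) ** 2 * np.ones((2, 2)) + np.diag([2.0, 0.0])
print(regularized_newton(grad, hess, np.array([1.0, 1.0])))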
Abstract: As the requirements for high-resolution Earth observation continue to grow, synthetic aperture radar (SAR) will be applied ever more widely. High-resolution SAR imaging suffers from large data volumes, demanding storage requirements, and long computation times; a common remedy is to introduce compressed sensing (CS) into the SAR imaging model to lower the sampling rate and the data volume. A single regularization term is usually used as the constraint, which can suppress the sidelobes of point targets and enhance point-target features, but the observed scene may contain several types of targets, so a single regularization constraint can hardly meet the requirement of enhancing multiple kinds of features. This paper proposes a sparse high-resolution SAR imaging method based on composite regularization: compressed sensing reduces the data volume, and a linear combination of several regularizers serves as the constraint, enhancing the features of different target types in the observed scene and meeting the requirements of high-resolution Earth observation in complex scenes. The method introduces a nonconvex regularizer and a total variation (TV) regularizer into the sparse SAR imaging model as constraints, reducing the sparse reconstruction error, enhancing the features of distributed (region) targets, suppressing the influence of noise on the imaging result, and improving image quality. An improved alternating direction method of multipliers (ADMM) is adopted to solve the composite regularization problem, reducing the computation time and reconstructing the image quickly, and an azimuth-range decoupling operator replaces the observation matrix and its conjugate transpose, further lowering the computational complexity. Simulated and measured data experiments show that the proposed algorithm enhances the features of both point and distributed targets, reduces the computational complexity, improves convergence, and achieves fast, high-resolution image reconstruction.
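As a much-simplified, hedged illustration of the reconstruction machinery only, the code below runs ADMM on a 1-D compressed-sensing problem with a single l1 regularizer, min 0.5*||A x - b||^2 + lam*||x||_1. The paper's composite model (nonconvex plus TV terms), the improved ADMM, and the azimuth-range decoupling operator are not reproduced; all data and parameters are synthetic.

import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    chol = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse every iteration
    x = z = u = np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))            # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)    # z-update (soft threshold)
        u = u + x - z                                                      # dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200)) / np.sqrt(60)                   # CS measurement matrix
x_true = np.zeros(200); x_true[[10, 50, 120]] = [2.0, -1.5, 1.0]   # sparse "scene"
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_rec = admm_lasso(A, b, lam=0.05)
print(np.round(x_rec[[10, 50, 120]], 2))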