Journal Articles

The following journal articles were found:

共找到5,859篇文章
1. Learning to optimize: A tutorial for continuous and mixed-integer optimization (Cited by: 1)
Authors: Xiaohan Chen, Jialin Liu, Wotao Yin. Science China Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1191-1262 (72 pages)
Abstract: Learning to optimize (L2O) stands at the intersection of traditional optimization and machine learning, utilizing the capabilities of machine learning to enhance conventional optimization techniques. As real-world optimization problems frequently share common structures, L2O provides a tool to exploit these structures for better or faster solutions. This tutorial dives deep into L2O techniques, introducing how to accelerate optimization algorithms, promptly estimate the solutions, or even reshape the optimization problem itself, making it more adaptive to real-world applications. By considering the prerequisites for successful applications of L2O and the structure of the optimization problems at hand, this tutorial provides a comprehensive guide for practitioners and researchers alike.
Keywords: AI for mathematics (AI4Math); learning to optimize; algorithm unrolling; plug-and-play methods; differentiable programming; machine learning for combinatorial optimization (ML4CO)
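As a concrete illustration of the "algorithm unrolling" flavor of L2O named in the keywords, the sketch below unrolls K gradient-descent steps on random least-squares instances and trains per-step step sizes end to end. It is a toy example under assumptions made here (the objective, K, and the parameterization are all choices of this sketch), not code from the tutorial.

```python
import torch

# Algorithm unrolling: K gradient-descent steps for ||Ax - b||^2 whose per-step
# step sizes are learned so that the final iterate has small loss on a
# distribution of random problem instances.
K = 10
log_steps = torch.nn.Parameter(torch.full((K,), -3.0))   # learn step sizes in log-space
opt = torch.optim.Adam([log_steps], lr=0.02)

def unrolled_gd(A, b):
    x = torch.zeros(A.shape[1])
    for k in range(K):
        grad = A.T @ (A @ x - b)
        x = x - torch.exp(log_steps[k]) * grad            # differentiable through the unroll
    return x

for it in range(300):                                     # meta-training over random instances
    A = torch.randn(20, 5)
    b = torch.randn(20)
    x_final = unrolled_gd(A, b)
    meta_loss = ((A @ x_final - b) ** 2).mean()
    opt.zero_grad()
    meta_loss.backward()
    opt.step()

print(torch.exp(log_steps).detach())                      # learned per-step step sizes
```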
2. A dynamical neural network approach for distributionally robust chance-constrained Markov decision process (Cited by: 1)
Authors: Tian Xia, Jia Liu, Zhiping Chen. Science China Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1395-1418 (24 pages)
Abstract: In this paper, we study the distributionally robust joint chance-constrained Markov decision process. Utilizing the logarithmic transformation technique, we derive its deterministic reformulation with bi-convex terms under the moment-based uncertainty set. To cope with the non-convexity and improve the robustness of the solution, we propose a dynamical neural network approach to solve the reformulated optimization problem. Numerical results on a machine replacement problem demonstrate the efficiency of the proposed dynamical neural network approach when compared with the sequential convex approximation approach.
Keywords: Markov decision process; chance constraints; distributionally robust optimization; moment-based ambiguity set; dynamical neural network
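For readers unfamiliar with dynamical-system solvers for constrained optimization, the following is a minimal sketch of the classical projected gradient flow integrated by forward Euler. The specific network the authors design for their bi-convex reformulation is not reproduced; the box-constrained toy quadratic is an assumption of this sketch.

```python
import numpy as np

def dynamical_system_minimize(grad, project, x0, dt=1e-2, steps=20_000):
    """Forward-Euler integration of the projected gradient flow
        dx/dt = project(x - grad(x)) - x,
    a common template for 'dynamical (neural) network' optimization solvers."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (project(x - grad(x)) - x)
    return x

# Toy QP: minimize ||x - c||^2 over the box [0, 1]^3.
c = np.array([1.5, -0.3, 0.4])
sol = dynamical_system_minimize(
    grad=lambda x: 2 * (x - c),
    project=lambda z: np.clip(z, 0.0, 1.0),
    x0=np.zeros(3),
)
print(sol)  # expected approximately [1.0, 0.0, 0.4]
```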
3. Time-inconsistent stochastic linear-quadratic control problem with indefinite control weight costs (Cited by: 1)
Authors: Qi Lü, Bowen Ma. Science China Mathematics (SCIE, CSCD), 2024, No. 1, pp. 211-236 (26 pages)
Abstract: A time-inconsistent linear-quadratic optimal control problem for stochastic differential equations is studied. We introduce conditions where the control cost weighting matrix is possibly singular. Under such conditions, we obtain a family of closed-loop equilibrium strategies via multi-person differential games. This result extends Yong's work (2017) in the case of stochastic differential equations, where a unique closed-loop equilibrium strategy can be derived under standard conditions (namely, the control cost weighting matrix is uniformly positive definite and the other weighting coefficients are positive semidefinite).
Keywords: linear-quadratic optimal control problem; time-inconsistent cost functional; generalized Riccati equation; indefinite control weight cost; closed-loop equilibrium strategy
4. The fast method and convergence analysis of the fractional magnetohydrodynamic coupled flow and heat transfer model for the generalized second-grade fluid (Cited by: 1)
Authors: Xiaoqing Chi, Hui Zhang, Xiaoyun Jiang. Science China Mathematics (SCIE, CSCD), 2024, No. 4, pp. 919-950 (32 pages)
Abstract: In this paper, we first establish a new fractional magnetohydrodynamic (MHD) coupled flow and heat transfer model for a generalized second-grade fluid. This coupled model consists of a fractional momentum equation and a heat conduction equation with a generalized form of Fourier law. The second-order fractional backward difference formula is applied to the temporal discretization and the Legendre spectral method is used for the spatial discretization. The fully discrete scheme is proved to be stable and convergent with an accuracy of O(τ^(2) + N^(-r)), where τ is the time step size and N is the polynomial degree. To reduce the memory requirements and computational cost, a fast method is developed, which is based on a globally uniform approximation of the trapezoidal rule for integrals on the real line. The strict convergence of the numerical scheme with this fast method is proved. We present the results of several numerical experiments to verify the effectiveness of the proposed method. Finally, we simulate the unsteady fractional MHD flow and heat transfer of the generalized second-grade fluid through a porous medium. The effects of the relevant parameters on the velocity and temperature are presented and analyzed in detail.
Keywords: fractional MHD coupled flow and heat transfer model; generalized second-grade fluid; fast method; convergence analysis; numerical simulation
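The paper's second-order fractional backward difference formula is not reproduced here. As background on what a discrete fractional-derivative operator looks like, the sketch below implements the simpler first-order Grünwald-Letnikov fractional backward difference on a uniform grid; the test function and order are arbitrary choices for illustration.

```python
import numpy as np

def gl_weights(alpha: float, n: int) -> np.ndarray:
    """Grünwald-Letnikov weights g_k = (-1)^k * C(alpha, k), via the standard recursion."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g

def gl_fractional_derivative(u: np.ndarray, alpha: float, tau: float) -> np.ndarray:
    """First-order approximation of the order-alpha Riemann-Liouville derivative
    of samples u on a uniform grid with step tau."""
    n = len(u)
    g = gl_weights(alpha, n)
    d = np.zeros(n)
    for i in range(n):
        d[i] = np.dot(g[: i + 1], u[i::-1]) / tau**alpha
    return d

# Sanity check on u(t) = t: the order-0.5 derivative of t at t = 1 is 2/sqrt(pi).
tau = 1e-3
t = np.arange(0, 1 + tau, tau)
approx = gl_fractional_derivative(t, 0.5, tau)
print(approx[-1], 2 / np.sqrt(np.pi))  # both approximately 1.128
```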
5. NeuroPrim: An attention-based model for solving NP-hard spanning tree problems (Cited by: 1)
Authors: Yuchen Shi, Congying Han, Tiande Guo. Science China Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1359-1376 (18 pages)
Abstract: Spanning tree problems with specialized constraints can be difficult to solve in real-world scenarios, often requiring intricate algorithmic design and exponential time. Recently, there has been growing interest in end-to-end deep neural networks for solving routing problems. However, such methods typically produce sequences of vertices, which makes it difficult to apply them to general combinatorial optimization problems where the solution set consists of edges, as in various spanning tree problems. In this paper, we propose NeuroPrim, a novel framework for solving various spanning tree problems by defining a Markov decision process for general combinatorial optimization problems on graphs. Our approach reduces the action and state space using Prim's algorithm and trains the resulting model using REINFORCE. We apply our framework to three difficult problems on the Euclidean space: the degree-constrained minimum spanning tree problem, the minimum routing cost spanning tree problem, and the Steiner tree problem in graphs. Experimental results on literature instances demonstrate that our model outperforms strong heuristics and achieves small optimality gaps on instances with up to 250 vertices. Additionally, we find that our model has strong generalization ability, with no significant degradation observed on problem instances as large as 1,000 vertices. Our results suggest that our framework can be effective for solving a wide range of combinatorial optimization problems beyond spanning tree problems.
Keywords: degree-constrained minimum spanning tree problem; minimum routing cost spanning tree problem; Steiner tree problem in graphs; Prim's algorithm; reinforcement learning
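Since the framework restricts its action space through Prim's growth rule, a plain implementation of Prim's algorithm is shown below for reference; in NeuroPrim the greedy cheapest-edge choice would be replaced by an attention-based policy trained with REINFORCE, which is not reproduced here. The random Euclidean demo instance is an assumption of this sketch.

```python
import heapq
import math
import random

def prim_mst(n, weight):
    """Classic Prim growth on a complete graph with n vertices: repeatedly add
    the cheapest edge leaving the current partial tree."""
    in_tree = [False] * n
    in_tree[0] = True
    heap = [(weight(0, v), 0, v) for v in range(1, n)]
    heapq.heapify(heap)
    tree_edges = []
    while len(tree_edges) < n - 1:
        w, u, v = heapq.heappop(heap)
        if in_tree[v]:
            continue                          # endpoint already absorbed by the tree
        in_tree[v] = True
        tree_edges.append((u, v))
        for x in range(n):
            if not in_tree[x]:
                heapq.heappush(heap, (weight(v, x), v, x))
    return tree_edges

# Example on random points in the unit square with Euclidean costs.
pts = [(random.random(), random.random()) for _ in range(8)]
dist = lambda u, v: math.dist(pts[u], pts[v])
print(prim_mst(8, dist))
```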
6. Learning to select the recombination operator for derivative-free optimization (Cited by: 1)
Authors: Haotian Zhang, Jianyong Sun, Thomas Back, Zongben Xu. Science China Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1457-1480 (24 pages)
Abstract: Extensive studies on selecting recombination operators adaptively, namely, adaptive operator selection (AOS), during the search process of an evolutionary algorithm (EA) have shown that AOS is promising for improving an EA's performance. A variety of heuristic mechanisms for AOS have been proposed in recent decades, which usually contain two main components: the feature extraction and the policy setting. The feature extraction refers to extracting relevant features from the information collected during the search process. The policy setting means setting a strategy (or policy) on how to select an operator from a pool of operators based on the extracted features. Both components are designed by hand in existing studies, which may not be efficient for adapting to optimization problems. In this paper, a generalized framework is proposed for learning the components of AOS for one of the main streams of EAs, namely, differential evolution (DE). In the framework, the feature extraction is parameterized as a deep neural network (DNN), while a Dirichlet distribution is considered to be the policy. A reinforcement learning method, named policy gradient, is used to train the DNN. As case studies, the proposed framework is applied to two DEs, including the classic DE and a recently proposed DE, which results in two new algorithms named PG-DE and PG-MPEDE, respectively. Experiments on the Congress on Evolutionary Computation (CEC) 2018 test suite show that the proposed new algorithms perform significantly better than their counterparts. Finally, we prove theoretically that the considered classic methods are special cases of the proposed framework.
Keywords: evolutionary algorithm; differential evolution; adaptive operator selection; reinforcement learning; deep learning
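To make the operator-selection idea concrete, the sketch below runs a classic DE loop with a two-operator pool whose selection probabilities are drawn from a Dirichlet distribution, echoing the policy parameterization described in the abstract. The Dirichlet parameters here are fixed rather than produced by a trained DNN, and the objective, pool, and hyperparameters are assumptions of this sketch.

```python
import numpy as np

def sphere(x):
    """Toy black-box objective; any function of a real vector could be used."""
    return float(np.sum(x ** 2))

def de_with_operator_selection(f, dim=10, pop_size=30, gens=200, F=0.5, CR=0.9, seed=0):
    """Classic DE with a two-operator pool (rand/1 and best/1); the operator is
    drawn from Dirichlet-sampled probabilities, a fixed stand-in for the
    policy that PG-DE/PG-MPEDE would train with policy gradient."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        probs = rng.dirichlet(np.ones(2))      # placeholder policy
        best = pop[np.argmin(fit)]
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            op = rng.choice(2, p=probs)
            mutant = a + F * (b - c) if op == 0 else best + F * (a - b)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True    # ensure at least one coordinate crosses
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:                    # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

print(de_with_operator_selection(sphere))
```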
7. Optimal decorrelated score subsampling for generalized linear models with massive data (Cited by: 1)
Authors: Junzhuo Gao, Lei Wang, Heng Lian. Science China Mathematics (SCIE, CSCD), 2024, No. 2, pp. 405-430 (26 pages)
Abstract: In this paper, we consider unified optimal subsampling estimation and inference on the low-dimensional parameter of main interest in the presence of the nuisance parameter for low/high-dimensional generalized linear models (GLMs) with massive data. We first present a general subsampling decorrelated score function to reduce the influence of the less accurate nuisance parameter estimation with the slow convergence rate. The consistency and asymptotic normality of the resultant subsample estimator from a general decorrelated score subsampling algorithm are established, and two optimal subsampling probabilities are derived under the A- and L-optimality criteria to downsize the data volume and reduce the computational burden. The proposed optimal subsampling probabilities provably improve the asymptotic efficiency of the subsampling schemes in the low-dimensional GLMs and perform better than the uniform subsampling scheme in the high-dimensional GLMs. A two-step algorithm is further proposed for implementation, and the asymptotic properties of the corresponding estimators are also given. Simulations show satisfactory performance of the proposed estimators, and two applications to census income and Fashion-MNIST datasets also demonstrate its practical applicability.
Keywords: A-optimality; decorrelated score subsampling; high-dimensional inference; L-optimality; massive data
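The decorrelated score construction itself is not reproduced here. The following sketch only illustrates the generic two-step subsampling workflow for logistic regression (uniform pilot fit, nonuniform sampling probabilities, inverse-probability-weighted refit); the |y − p|·‖x‖ sampling rule is a common A-optimality-style heuristic assumed for illustration, not the paper's probabilities.

```python
import numpy as np

def weighted_logistic_fit(X, y, w, iters=30):
    """Newton/IRLS for logistic regression with observation weights w."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        grad = X.T @ (w * (y - p))
        H = (X * (w * p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(H + 1e-8 * np.eye(X.shape[1]), grad)
    return beta

def subsample_estimate(X, y, r, pilot_size=1000, seed=0):
    """Two-step subsampling sketch: pilot fit, score-based probabilities,
    then an inverse-probability-weighted refit on r sampled rows."""
    rng = np.random.default_rng(seed)
    n = len(y)
    pilot_idx = rng.choice(n, pilot_size, replace=False)
    beta0 = weighted_logistic_fit(X[pilot_idx], y[pilot_idx], np.ones(pilot_size))
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta0, -30, 30)))
    score = np.abs(y - p) * np.linalg.norm(X, axis=1)
    probs = score / score.sum()
    idx = rng.choice(n, r, replace=True, p=probs)
    return weighted_logistic_fit(X[idx], y[idx], 1.0 / (n * probs[idx]))

# Synthetic demo: n = 100,000 rows, subsample r = 2,000.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100_000), rng.normal(size=(100_000, 4))])
beta_true = np.array([0.5, 1.0, -1.0, 0.5, 0.0])
y = (rng.random(100_000) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)
print(subsample_estimate(X, y, 2000))
```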
8. Gradient-based algorithms for multi-objective bi-level optimization (Cited by: 1)
Authors: Xinmin Yang, Wei Yao, Haian Yin, Shangzhi Zeng, Jin Zhang. Science China Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1419-1438 (20 pages)
Abstract: Multi-objective bi-level optimization (MOBLO) addresses nested multi-objective optimization problems common in a range of applications. However, its multi-objective and hierarchical bi-level nature makes it notably complex. Gradient-based MOBLO algorithms have recently grown in popularity, as they effectively solve crucial machine learning problems like meta-learning, neural architecture search, and reinforcement learning. Unfortunately, these algorithms depend on solving a sequence of approximation subproblems with high accuracy, resulting in adverse time and memory complexity that lowers their numerical efficiency. To address this issue, we propose a gradient-based algorithm for MOBLO, called gMOBA, which has fewer hyperparameters to tune, making it both simple and efficient. Additionally, we demonstrate the theoretical validity by accomplishing the desirable Pareto stationarity. Numerical experiments confirm the practical efficiency of the proposed method and verify the theoretical results. To accelerate the convergence of gMOBA, we introduce a beneficial L2O (learning to optimize) neural network (called L2O-gMOBA) implemented as the initialization phase of our gMOBA algorithm. Comparative results of numerical experiments are presented to illustrate the performance of L2O-gMOBA.
Keywords: multi-objective; bi-level optimization; convergence analysis; Pareto stationary; learning to optimize
9. Alleviating limit cycling in training GANs with an optimization technique (Cited by: 1)
Authors: Keke Li, Liping Tang, Xinmin Yang. Science China Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1287-1316 (30 pages)
Abstract: In this paper, we undertake further investigation to alleviate the issue of limit cycling behavior in training generative adversarial networks (GANs) through the proposed predictive centripetal acceleration algorithm (PCAA). Specifically, we first derive the upper and lower complexity bounds of PCAA for a general bilinear game, with the last-iterate convergence rate notably improving upon previous results. Then, we combine PCAA with the adaptive moment estimation algorithm (Adam) to propose PCAA-Adam for practical training of GANs to enhance their generalization capability. Finally, we validate the effectiveness of the proposed algorithm through experiments conducted on bilinear games, multivariate Gaussian distributions, and the CelebA dataset, respectively.
Keywords: GANs; general bilinear game; predictive centripetal acceleration algorithm; lower and upper complexity bounds; PCAA-Adam
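The limit cycling mentioned in the abstract is easy to reproduce on the bilinear game f(x, y) = xy: simultaneous gradient descent-ascent spirals outward, while a lookahead method converges. The sketch below contrasts plain GDA with the textbook extragradient method as a stand-in; PCAA's predictive centripetal correction is not implemented here.

```python
import numpy as np

def gda(steps=2000, lr=0.1):
    """Simultaneous gradient descent-ascent on f(x, y) = x*y.
    The iterate norm grows without bound: the cycling/divergence behavior."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x     # df/dx = y, df/dy = x
    return float(np.hypot(x, y))

def extragradient(steps=2000, lr=0.1):
    """Extragradient: take a lookahead step, then update with gradients
    evaluated at the lookahead point. A standard remedy for bilinear cycling."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        xh, yh = x - lr * y, y + lr * x   # lookahead (prediction) step
        x, y = x - lr * yh, y + lr * xh   # corrected step
    return float(np.hypot(x, y))

print(gda(), extragradient())  # GDA norm blows up; extragradient norm decays toward 0
```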
10. Rigidity of non-renormalizable Newton maps (Cited by: 1)
Authors: Pascale Roesch, Yongcheng Yin, Jinsong Zeng. Science China Mathematics (SCIE, CSCD), 2024, No. 4, pp. 855-872 (18 pages)
Abstract: Non-renormalizable Newton maps are rigid. More precisely, we prove that their Julia sets carry no invariant line fields and that a topological conjugacy between them is equivalent to a quasiconformal conjugacy.
Keywords: Newton map; non-renormalizable; rigidity; invariant line field
11. Cohomology groups of a new class of Kadison-Singer algebras (Cited by: 1)
Authors: Guangyu An, Xing Cheng, Jun Sheng. Science China Mathematics (SCIE, CSCD), 2024, No. 3, pp. 593-606 (14 pages)
Abstract: Let N be a maximal discrete nest on an infinite-dimensional separable Hilbert space H, let ξ = ∑^(∞)_(n=1) e_(n)/2^(n) be a separating vector for the commutant N', let E_(ξ) be the projection from H onto the subspace [Cξ] spanned by the vector ξ, and let Q be the projection from K = H⊕H⊕H onto the closed subspace {(η, η, η)^(T) : η ∈ H}. Suppose that L is the projection lattice generated by the 3×3 operator matrices diag(E_(ξ), 0, 0) and diag(I, I, 0) on K, the family {diag(E, 0, 0) : E ∈ N}, and Q. We show that L is a Kadison-Singer lattice with the trivial commutant. Moreover, we prove that every n-th bounded cohomology group H^(n)(Alg L, B(K)) with coefficients in B(K) is trivial for n ≥ 1.
Keywords: Kadison-Singer algebra; Kadison-Singer lattice; derivation; cohomology group
12. Enhancing cut selection through reinforcement learning (Cited by: 1)
Authors: Shengchao Wang, Liang Chen, Lingfeng Niu, Yu-Hong Dai. Science China Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1377-1394 (18 pages)
Abstract: With the rapid development of artificial intelligence in recent years, applying various learning techniques to solve mixed-integer linear programming (MILP) problems has emerged as a burgeoning research domain. Apart from constructing end-to-end models directly, integrating learning approaches with some modules in the traditional methods for solving MILPs is also a promising direction. The cutting plane method is one of the fundamental algorithms used in modern MILP solvers, and the selection of appropriate cuts from the candidate cut subset is crucial for enhancing efficiency. Due to the reliance on expert knowledge and problem-specific heuristics, classical cut selection methods are not always transferable and often limit the scalability and generalizability of the cutting plane method. To provide a more efficient and generalizable strategy, we propose a reinforcement learning (RL) framework to enhance cut selection in the solving process of MILPs. Firstly, we design feature vectors to incorporate the inherent properties of MILP and computational information from the solver, and represent MILP instances as bipartite graphs. Secondly, we choose weighted metrics to approximate the proximity of feasible solutions to the convex hull and utilize a learning method to determine the weights assigned to each metric. Thirdly, a graph convolutional neural network with a self-attention mechanism is adopted to predict the values of the weighting factors. Finally, we transform the cut selection process into a Markov decision process and utilize an RL method to train the model. Extensive experiments are conducted based on a leading open-source MILP solver, SCIP. Results on both general and specific datasets validate the effectiveness and efficiency of our proposed approach.
Keywords: reinforcement learning; mixed-integer linear programming; cutting plane method; cut selection
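A minimal sketch of the bipartite-graph encoding of an MILP mentioned in the abstract is given below: one node per variable, one node per constraint, and an edge for every nonzero coefficient. The node features used here (objective coefficient, right-hand side) are placeholders assumed for illustration; the paper augments them with solver statistics.

```python
import numpy as np

def milp_to_bipartite(A, b, c):
    """Represent an MILP  min c^T x  s.t.  A x <= b  as a bipartite graph:
    variable nodes, constraint nodes, and an edge (i, j) whenever A[i, j] != 0."""
    m, n = A.shape
    var_feats = [{"obj": float(c[j])} for j in range(n)]
    con_feats = [{"rhs": float(b[i])} for i in range(m)]
    edges = [(i, j, float(A[i, j])) for i in range(m) for j in range(n) if A[i, j] != 0]
    return var_feats, con_feats, edges

# Tiny knapsack-like instance: max 3x0 + 4x1  <=>  min -3x0 - 4x1, s.t. 2x0 + 3x1 <= 5.
A = np.array([[2.0, 3.0]])
b = np.array([5.0])
c = np.array([-3.0, -4.0])
print(milp_to_bipartite(A, b, c))
```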
13. A unified pre-training and adaptation framework for combinatorial optimization on graphs (Cited by: 1)
Authors: Ruibin Zeng, Minglong Lei, Lingfeng Niu, Lan Cheng. Science China Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1439-1456 (18 pages)
Abstract: Combinatorial optimization (CO) on graphs is a classic topic that has been extensively studied across many scientific and industrial fields. Recently, solving CO problems on graphs through learning methods has attracted great attention. Advanced deep learning methods, e.g., graph neural networks (GNNs), have been used to effectively assist the process of solving COs. However, current frameworks based on GNNs are mainly designed for certain CO problems, thereby failing to consider their transferable and generalizable abilities among different COs on graphs. Moreover, simply using original graphs to model COs only captures the direct correlations among objects, which does not consider the mathematical logicality and properties of COs. In this paper, we propose a unified pre-training and adaptation framework for COs on graphs with the help of the maximum satisfiability (Max-SAT) problem. We first use Max-SAT to bridge different COs on graphs, since they can be converted to Max-SAT problems represented by standard formulas and clauses with logical information. Then we further design a pre-training and domain adaptation framework to extract the transferable and generalizable features so that different COs can benefit from them. In the pre-training stage, Max-SAT instances are generated to initialize the parameters of the model. In the fine-tuning stage, instances from CO and Max-SAT problems are used for adaptation so that the transferable ability can be further improved. Numerical experiments on several datasets show that the features extracted by our framework exhibit superior transferability and that Max-SAT can boost the ability to solve COs on graphs.
Keywords: combinatorial optimization; graph neural networks; domain adaptation; maximum satisfiability problem
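To illustrate how a graph CO problem can be converted to a Max-SAT instance, the sketch below encodes maximum independent set as partial Max-SAT with hard edge clauses and unit soft vertex clauses. The DIMACS-style literal convention and the particular problem choice are assumptions of this sketch, not the paper's pipeline.

```python
def mis_to_maxsat(num_vertices, edges):
    """Encode maximum independent set as partial Max-SAT:
    a hard clause (-u or -v) for every edge and a weight-1 soft unit clause (v)
    for every vertex, so maximizing satisfied soft clauses maximizes the set.
    Literal convention: vertex v -> variable v + 1, negation as a negative integer."""
    hard = [[-(u + 1), -(v + 1)] for u, v in edges]
    soft = [([v + 1], 1) for v in range(num_vertices)]   # (clause, weight)
    return hard, soft

# 4-cycle 0-1-2-3: the optimum independent set has size 2, e.g., {0, 2}.
print(mis_to_maxsat(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```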
14. Optimal pivot path of the simplex method for linear programming based on reinforcement learning (Cited by: 1)
Authors: Anqi Li, Tiande Guo, Congying Han, Bonan Li, Haoran Li. Science China Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1263-1286 (24 pages)
Abstract: Under the existing pivot rules, the simplex method for linear programming is not polynomial in the worst case. Therefore, the optimal pivot of the simplex method is crucial. In this paper, we propose the optimal rule to find all the shortest pivot paths of the simplex method for linear programming problems based on Monte Carlo tree search. Specifically, we first propose the SimplexPseudoTree to transform the simplex method into a tree-search mode while avoiding repeated basis variables. Secondly, we propose four reinforcement learning models with two actions and two rewards to make Monte Carlo tree search suitable for the simplex method. Thirdly, we set a new action selection criterion to ameliorate the inaccurate evaluation in the initial exploration. It is proved that when the number of vertices in the feasible region is C_(n)^(m), our method can generate all the shortest pivot paths, which is polynomial in the number of variables. In addition, we experimentally validate that the proposed schedule can avoid unnecessary search and provide the optimal pivot path. Furthermore, this method can provide the best pivot labels for all kinds of supervised learning methods for solving linear programming problems.
Keywords: simplex method; linear programming; pivot rules; reinforcement learning
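As background on the tree-search component, the snippet below is the textbook UCT selection rule used in Monte Carlo tree search; the paper modifies the action selection criterion to cope with inaccurate early evaluations, and that modification is not shown. The child-statistics layout is an assumption of this sketch.

```python
import math

def uct_select(children, c=1.4142):
    """Textbook UCT rule: balance a child's average reward against an
    exploration bonus. `children` is a list of dicts with visit count `n`
    and accumulated reward `w`."""
    parent_n = sum(ch["n"] for ch in children)

    def score(ch):
        if ch["n"] == 0:
            return float("inf")     # expand unvisited children first
        return ch["w"] / ch["n"] + c * math.sqrt(math.log(parent_n) / ch["n"])

    return max(children, key=score)

# Two candidate pivots: one well-explored and mediocre, one barely explored.
print(uct_select([{"n": 30, "w": 12.0}, {"n": 2, "w": 1.2}]))
```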
15. A neural branch-and-price for truck scheduling in cross-docks (Cited by: 1)
Authors: Rahimeh Neamatian Monemi, Shahin Gelareh, Nelson Maculan, Wei-Kun Chen. Science China Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1341-1358 (18 pages)
Abstract: In this paper, we address the complex problem of dock-door assignment and truck scheduling within cross-docking operations. This is a problem that requires frequent resolution throughout the operational day, as disruptions often invalidate the optimal plan. Given the problem's highly combinatorial nature, finding an optimal solution demands significant computational time and resources. However, the distribution of data across problem instances over a lengthy planning horizon remains consistently stable, with minimal concern regarding distribution shift. These factors collectively establish the problem as an ideal candidate for a learn-to-optimize solution strategy. We propose a Dantzig-Wolfe reformulation, solving it via both a conventional branch-and-price approach and a neural branch-and-price approach, the latter of which employs imitation learning. Additionally, we introduce some classes of valid inequalities to enhance and refine the pricing problem through a branch-and-cut scheme. Our computational experiments demonstrate that this methodology is not only feasible but also presents a viable alternative to the traditional branch-and-price algorithms typically utilized for such challenges.
Keywords: cross-docking; MILP modeling; Dantzig-Wolfe decomposition; graph convolutional network
16. Learning to sample initial solution for solving 0-1 discrete optimization problem by local search (Cited by: 1)
Authors: Xin Liu, Jianyong Sun, Zongben Xu. Science China Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1317-1340 (24 pages)
Abstract: Local search methods are convenient alternatives for solving discrete optimization problems (DOPs). These easy-to-implement methods are able to find approximate optimal solutions within a tolerable time limit. It is known that the quality of the initial solution greatly affects the quality of the approximated solution found by a local search method. In this paper, we propose to take the initial solution as a random variable and learn its preferable probability distribution. The aim is to sample a good initial solution from the learned distribution so that the local search can find a high-quality solution. We develop two different deep network models to deal with DOPs established on sets (the knapsack problem) and graphs (the maximum clique problem), respectively. The deep neural network learns the representation of an optimization problem instance and transforms the representation into its probability vector. Experimental results show that, given initial solutions sampled from the learned probability distribution, a local search method can acquire much better approximate solutions than with randomly sampled initial solutions on the synthesized knapsack instances and the Erdős-Rényi random graph instances. Furthermore, with sampled initial solutions, a classical genetic algorithm can achieve better solutions than a randomly initialized population in solving the maximum clique problems on DIMACS instances. Particularly, we emphasize that the developed models can generalize in dimensions and across graphs with various densities, which is an important advantage in generalizing deep-learning-based optimization algorithms.
Keywords: discrete optimization; deep learning; graph convolutional network; local search
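A minimal sketch of the "sample an initial solution, then local-search" pipeline for the 0-1 knapsack problem is given below. The sampling distribution is supplied as an arbitrary probability vector (a uniform 0.5 vector in the demo) standing in for the learned distribution, and the 1-flip neighborhood and greedy repair are simplifications assumed here.

```python
import numpy as np

def knapsack_local_search(values, weights, capacity, probs, rng, max_iter=10_000):
    """1-flip local search for the 0-1 knapsack problem, started from a solution
    sampled coordinate-wise from `probs`."""
    n = len(values)
    x = (rng.random(n) < probs).astype(int)
    # Repair an infeasible start by dropping the lowest value/weight items.
    while weights @ x > capacity:
        chosen = np.flatnonzero(x)
        x[chosen[np.argmin(values[chosen] / weights[chosen])]] = 0
    for _ in range(max_iter):
        best_gain, best_j = 0.0, -1
        for j in range(n):
            delta = values[j] * (1 - 2 * x[j])            # value gain of flipping bit j
            new_w = weights @ x + weights[j] * (1 - 2 * x[j])
            if new_w <= capacity and delta > best_gain:
                best_gain, best_j = delta, j
        if best_j < 0:
            break                                          # local optimum: no improving flip
        x[best_j] ^= 1
    return x, float(values @ x)

rng = np.random.default_rng(0)
v = rng.integers(10, 100, 50).astype(float)
w = rng.integers(5, 50, 50).astype(float)
uniform = np.full(50, 0.5)          # stand-in for the learned sampling distribution
print(knapsack_local_search(v, w, 600.0, uniform, rng)[1])
```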
17. An accurate and efficient space-time Galerkin spectral method for the subdiffusion equation
Authors: Wei Zeng, Chuanju Xu. Science China Mathematics (SCIE, CSCD), 2024, No. 10, pp. 2387-2408 (22 pages)
Abstract: In this paper, we design and analyze a space-time spectral method for the subdiffusion equation. Here, we face two difficulties. The first is that the solutions of this equation are usually singular near the initial time; consequently, traditional high-order numerical methods in time are inefficient. The second obstacle is that the resulting system of the space-time spectral approach is usually large and time-consuming to solve. We aim to overcome the first difficulty by proposing a novel approach in time, which is based on variable transformation techniques. Suitable ψ-fractional Sobolev spaces and a new variational framework are introduced to establish the well-posedness of the associated variational problem. This allows us to construct our space-time spectral method using a combination of temporal generalized Jacobi polynomials (GJPs) and spatial Legendre polynomials. For the second difficulty, we propose a fast algorithm to effectively solve the resulting linear system. The fast algorithm makes use of a matrix diagonalization in space and a QZ decomposition in time. Our analysis and numerical experiments show that the proposed method is exponentially convergent with respect to the polynomial degrees in both space and time directions, even though the exact solution has very limited regularity.
Keywords: subdiffusion equations; variable transformation; ψ-Sobolev spaces; well-posedness; space-time Galerkin spectral method; error estimate; fast algorithm
18. A new class of generalized quasi-variational inequalities with applications to Oseen problems under nonsmooth boundary conditions
Authors: Shengda Zeng, Akhtar A. Khan, Stanislaw Migórski. Science China Mathematics (SCIE, CSCD), 2024, No. 2, pp. 315-338 (24 pages)
Abstract: In this paper, we study a generalized quasi-variational inequality (GQVI for short) with two multivalued operators and two bifunctions in a Banach space setting. A coupling of the Tychonov fixed point principle and the Kakutani-Ky Fan theorem for multivalued maps is employed to prove a new existence theorem for the GQVI. We also study a nonlinear optimal control problem driven by the GQVI and give sufficient conditions ensuring the existence of an optimal control. Finally, we illustrate the applicability of the theoretical results in the study of a complicated Oseen problem for non-Newtonian fluids with a nonmonotone and multivalued slip boundary condition (i.e., a generalized friction constitutive law), a generalized leak boundary condition, a unilateral contact condition of Signorini's type, and an implicit obstacle effect, in which the multivalued slip boundary condition is described by the generalized Clarke subgradient and the leak boundary condition is formulated by the convex subdifferential operator for a convex superpotential.
Keywords: generalized quasi-variational inequality; existence theorem; optimal control; Kakutani-Ky Fan theorem; Oseen problem; non-Newtonian fluid; nonmonotone slip boundary condition
19. Sharp bilinear decomposition for products of both anisotropic Hardy spaces and their dual spaces with its applications to endpoint boundedness of commutators
Authors: Jun Liu, Dachun Yang, Mingdong Zhang. Science China Mathematics (SCIE, CSCD), 2024, No. 9, pp. 2091-2152 (62 pages)
Abstract: Let a := (a_(1), ..., a_(n)) ∈ [1,∞)^(n), p ∈ (0,1), and α := 1/p − 1. For any x ∈ R^(n) and t ∈ [0,∞), let Φ_(p)(x,t) := t/(1+(t[x]_(a)^(ν))^(1−p)) if να ∉ N, and Φ_(p)(x,t) := t/(1+(t[x]_(a)^(ν))^(1−p)[log(e+|x|_(a))]^(p)) if να ∈ N, where [·]_(a) := 1+|·|_(a), |·|_(a) denotes the anisotropic quasi-homogeneous norm with respect to a, and ν := a_(1)+…+a_(n). Let H_(a)^(p)(R^(n)), L_(a)^(α)(R^(n)), and H_(a)^(Φ_(p))(R^(n)) be, respectively, the anisotropic Hardy space, the anisotropic Campanato space, and the anisotropic Musielak-Orlicz Hardy space associated with Φ_(p) on R^(n). In this article, via first establishing the wavelet characterization of anisotropic Campanato spaces, we prove that for any f ∈ H_(a)^(p)(R^(n)) and g ∈ L_(a)^(α)(R^(n)), the product of f and g can be decomposed into S(f,g) + T(f,g) in the sense of tempered distributions, where S is a bilinear operator bounded from H_(a)^(p)(R^(n)) × L_(a)^(α)(R^(n)) to L^(1)(R^(n)) and T is a bilinear operator bounded from H_(a)^(p)(R^(n)) × L_(a)^(α)(R^(n)) to H_(a)^(Φ_(p))(R^(n)). Moreover, this bilinear decomposition is sharp in the dual sense that any Y ⊂ H_(a)^(Φ_(p))(R^(n)) that fits into the above bilinear decomposition should satisfy (L^(1)(R^(n)) + Y)^* = (L^(1)(R^(n)) + H_(a)^(Φ_(p))(R^(n)))^*. As applications, for any non-constant b ∈ L_(a)^(α)(R^(n)) and any sublinear operator T satisfying some mild boundedness assumptions, we find the largest subspace of H_(a)^(p)(R^(n)), denoted by H_(a,b)^(p)(R^(n)), such that the commutator [b,T] is bounded from H_(a,b)^(p)(R^(n)) to L^(1)(R^(n)). In addition, when T is an anisotropic Calderón-Zygmund operator, the boundedness of [b,T] from H_(a,b)^(p)(R^(n)) to L^(1)(R^(n)) (or to H_(a)^(1)(R^(n))) is also presented. The key to their proofs is the wavelet characterization of the function spaces under consideration.
Keywords: anisotropic Euclidean space; bilinear decomposition; Hardy space; Campanato space; Musielak-Orlicz Hardy space; commutator; wavelet
20. On the inviscid limit of the compressible Navier-Stokes equations near Onsager's regularity in bounded domains
Authors: Robin Ming Chen, Zhilei Liang, Dehua Wang, Runzhang Xu. Science China Mathematics (SCIE, CSCD), 2024, No. 1, pp. 1-22 (22 pages)
Abstract: The viscous dissipation limit of weak solutions is considered for the Navier-Stokes equations of compressible isentropic flows confined in a bounded domain. We establish a Kato-type criterion for the validity of the inviscid limit for the weak solutions of the Navier-Stokes equations in a function space with the regularity index close to Onsager's critical threshold. In particular, we prove that under such a regularity assumption, if the viscous energy dissipation rate vanishes in a boundary layer of thickness on the order of the viscosity, then the weak solutions of the Navier-Stokes equations converge to a weak admissible solution of the Euler equations. Our approach is based on commutator estimates and a subtle foliation technique near the boundary of the domain.
Keywords: inviscid limit; Navier-Stokes equations; Euler equations; weak solutions; bounded domain; Kato-type criterion; Onsager's regularity