Journal Articles
3 articles found
1. Characterizations and Extensions of Lipschitz-α Operators (Cited by 3)
Authors: Huai Xin CAO, Jian Hua ZHANG, Zong Ben XU. Acta Mathematica Sinica, English Series (SCIE, CSCD), 2006, No. 3, pp. 671–678 (8 pages).
In this work, we prove that a map F from a compact metric space K into a Banach space X over F is a Lipschitz-α operator if and only if for each σ in X^* the map σ∘F is a Lipschitz-α function on K. In the case that K = [a, b], we show that a map f from [a, b] into X is a Lipschitz-1 operator if and only if it is absolutely continuous and the map σ → (σ∘f)' is a bounded linear operator from X^* into L^∞([a, b]). When K is a compact subset of a finite interval (a, b) and 0 < α ≤ 1, we show that every Lipschitz-α operator f from K into X can be extended to a Lipschitz-α operator F from [a, b] into X with L_α(f) ≤ L_α(F) ≤ 3^(1−α) L_α(f). A similar extension theorem for little Lipschitz-α operators is also obtained.
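For context, the Lipschitz-α seminorm appearing in the abstract has the standard definition below; this is stated as background (a common convention), not quoted from the paper itself:

```latex
% Lipschitz-α seminorm of f : K → X on a compact metric space (K, d):
L_\alpha(f) \;=\; \sup_{\substack{s, t \in K \\ s \neq t}}
  \frac{\lVert f(s) - f(t) \rVert}{d(s,t)^{\alpha}},
\qquad 0 < \alpha \le 1.

% The extension theorem then produces an extension F of f from [a,b] with
L_\alpha(f) \;\le\; L_\alpha(F) \;\le\; 3^{\,1-\alpha}\, L_\alpha(f).
```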
Keywords: characterization, extension, Lipschitz-α operator
2. Towards a Unified Recurrent Neural Network Theory: The Uniformly Pseudo-Projection-Anti-Monotone Net (Cited by 1)
Authors: Zong Ben XU, Chen QIAO. Acta Mathematica Sinica, English Series (SCIE, CSCD), 2011, No. 2, pp. 377–396 (20 pages).
In the past decades, various neural network models have been developed for modeling the behavior of the human brain or for problem-solving by simulating that behavior. Recurrent neural networks are the type of neural network used to model or simulate the associative memory behavior of human beings. A recurrent neural network (RNN) can generally be formalized as a dynamic system associated with two fundamental operators: one is the nonlinear activation operator deduced from the input-output properties of the involved neurons, and the other is the synaptic connection (a matrix) among the neurons. Through carefully examining the properties of the various activation functions used, we introduce a novel type of monotone operator, the uniformly pseudo-projection-anti-monotone (UPPAM) operator, to unify the various RNN models that have appeared in the literature. We develop a unified encoding and stability theory for the UPPAM network model when time is discrete. The established model and theory not only unify but also jointly generalize most of the known results on RNNs. The approach marks a visible step towards the establishment of a unified mathematical theory of recurrent neural networks.
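The generic discrete-time dynamic system the abstract formalizes can be sketched as iterating x_{k+1} = G(W x_k + b), where G is the activation operator and W the connection matrix. The sketch below is an illustrative toy (tanh activation, random contractive W, made-up function names), not the UPPAM construction from the paper:

```python
import numpy as np

def rnn_step(x, W, b, activation=np.tanh):
    """One iteration of the recurrent dynamics x -> G(W x + b)."""
    return activation(W @ x + b)

def iterate_to_fixed_point(W, b, x0, tol=1e-10, max_iter=10_000):
    """Run the dynamics until the state stops changing (a stored pattern)."""
    x = x0
    for _ in range(max_iter):
        x_next = rnn_step(x, W, b)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

rng = np.random.default_rng(0)
n = 5
# Small weights keep the map contractive (tanh is 1-Lipschitz), so the
# iteration converges to a unique fixed point regardless of the start state.
W = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x_star = iterate_to_fixed_point(W, b, np.zeros(n))

# The limit satisfies the fixed-point equation x* = tanh(W x* + b).
residual = np.linalg.norm(x_star - np.tanh(W @ x_star + b))
print(residual)
```

Stability questions of exactly this kind (when do the dynamics settle into fixed points encoding memories?) are what the unified theory addresses for the whole UPPAM class at once.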
Keywords: feedback neural networks, essential characteristics, uniformly pseudo-projection-anti-monotone net, unified theory, dynamics
3. A Sharp Nonasymptotic Bound and Phase Diagram of L1/2 Regularization (Cited by 1)
Authors: Hai ZHANG, Zong Ben XU, Yao WANG, Xiang Yu CHANG, Yong LIANG. Acta Mathematica Sinica, English Series (SCIE, CSCD), 2014, No. 7, pp. 1242–1258 (17 pages).
We derive a sharp nonasymptotic bound on the parameter estimation error of L1/2 regularization. The bound shows that the solutions of L1/2 regularization can achieve a loss within a logarithmic factor of the ideal mean squared error, which underlies the feasibility and effectiveness of L1/2 regularization. Interestingly, when applied to compressive sensing, the L1/2 regularization scheme has exhibited a very promising capability of complete recovery from much less sampling information. Compared with the Lp (0 < p < 1) penalties, it appears that the L1/2 penalty always yields a sparser solution than any Lp penalty with 1/2 < p < 1, while for 0 < p < 1/2 the Lp penalty exhibits properties similar to those of the L1/2 penalty. This suggests that the L1/2 regularization scheme can be taken as the representative of all the Lp (0 < p < 1) regularization schemes.
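The sparsity-inducing behavior of the L1/2 penalty can be seen in the scalar case by minimizing 0.5(t − y)² + λ|t|^p numerically: small inputs are thresholded exactly to zero, large ones survive shrunken but nonzero. This is a hedged toy illustration by grid search, not the estimation procedure analyzed in the paper:

```python
import numpy as np

def lp_prox(y, lam, p, grid_size=200_001):
    """Approximate arg min_t 0.5*(t - y)**2 + lam*|t|**p on a dense grid.

    The symmetric grid contains t = 0 exactly, so hard thresholding to
    zero is observable without numerical fuzz.
    """
    ts = np.linspace(-abs(y) - 1.0, abs(y) + 1.0, grid_size)
    obj = 0.5 * (ts - y) ** 2 + lam * np.abs(ts) ** p
    return ts[np.argmin(obj)]

# With the L1/2 penalty (p = 0.5), a small input is set exactly to zero...
small = lp_prox(0.1, lam=0.5, p=0.5)
# ...while a large input stays nonzero (only shrunk toward the origin).
large = lp_prox(2.0, lam=0.5, p=0.5)
print(small, large)
```

The jump between the two regimes (exact zero versus a nonzero minimizer) is the thresholding behavior behind the phase-diagram analysis mentioned in the title.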
Keywords: L1/2 regularization, phase diagram, compressive sensing