Funding: Partly supported by the NNSF of China (Grant Nos. 19771056, 69975016 and 10561113).
Abstract: In this work, we prove that a map F from a compact metric space K into a Banach space X over the scalar field F is a Lipschitz-α operator if and only if, for each σ in X^*, the map σ∘F is a Lipschitz-α function on K. In the case that K = [a, b], we show that a map f from [a, b] into X is a Lipschitz-1 operator if and only if it is absolutely continuous and the map σ → (σ∘f)' is a bounded linear operator from X^* into L^∞([a, b]). When K is a compact subset of a finite interval (a, b) and 0 < α ≤ 1, we show that every Lipschitz-α operator f from K into X can be extended to a Lipschitz-α operator F from [a, b] into X with L_α(f) ≤ L_α(F) ≤ 3^{1-α} L_α(f). A similar extension theorem for little Lipschitz-α operators is also obtained.
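For reference, the standard definitions presumably underlying this abstract can be stated as follows; the notation (a metric d on K and the norm of X) is assumed here, and the paper's precise formulation may differ in detail.

\[
  L_\alpha(f) \;=\; \sup_{\substack{s,t \in K \\ s \neq t}} \frac{\| f(s) - f(t) \|_X}{d(s,t)^{\alpha}},
\]
so that f is a Lipschitz-\(\alpha\) operator when \(L_\alpha(f) < \infty\), and a little Lipschitz-\(\alpha\) operator when, in addition,
\[
  \frac{\| f(s) - f(t) \|_X}{d(s,t)^{\alpha}} \;\longrightarrow\; 0 \qquad \text{as } d(s,t) \to 0 .
\]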
Funding: Supported by the National Basic Research Program of China (973 Program) (Grant No. 2007CB311002), the National Natural Science Foundation of China (Grant No. 61075054), and the Fundamental Research Funds for the Central Universities (Grant No. xjj20100087).
Abstract: In the past decades, various neural network models have been developed for modeling the behavior of the human brain or for solving problems by simulating that behavior. Recurrent neural networks are the type of neural network used to model or simulate the associative memory behavior of human beings. A recurrent neural network (RNN) can generally be formalized as a dynamic system associated with two fundamental operators: one is the nonlinear activation operator deduced from the input-output properties of the involved neurons, and the other is the matrix of synaptic connections among the neurons. Through carefully examining the properties of the various activation functions in use, we introduce a novel type of monotone operator, the uniformly pseudo-projection-anti-monotone (UPPAM) operator, to unify the various RNN models that have appeared in the literature. We develop a unified encoding and stability theory for the UPPAM network model when time is discrete. The established model and theory not only unify but also jointly generalize most of the known results on RNNs. The approach takes a visible step towards the establishment of a unified mathematical theory of recurrent neural networks.
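A minimal sketch of the discrete-time formalization described above, assuming the state is updated by applying the connection matrix first and the activation operator second; the symbols W, b, G and this update order are illustrative assumptions, not taken from the paper.

\[
  u^{(k+1)} \;=\; G\!\bigl( W u^{(k)} + b \bigr), \qquad k = 0, 1, 2, \ldots,
\]
where W is the synaptic connection matrix, b an external input, and G the (componentwise) nonlinear activation operator. A stored memory then corresponds to an equilibrium \(u^*\) with \(u^* = G(Wu^* + b)\), and an encoding and stability theory asks which equilibria exist and whether the iteration converges to them.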
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11171212 and 60975036) and by the National Natural Science Foundation of China (Grant No. 6175054).
Abstract: We derive a sharp nonasymptotic bound on the parameter estimation error of the L1/2 regularization. The bound shows that the solutions of the L1/2 regularization can achieve a loss within a logarithmic factor of an ideal mean squared error, and it therefore underlies the feasibility and effectiveness of the L1/2 regularization. Interestingly, when applied to compressive sensing, the L1/2 regularization scheme has exhibited a very promising capability of complete recovery from much less sampling information. Compared with the Lp (0 < p < 1) penalties, it appears that the L1/2 penalty always yields the sparsest solution among all Lp penalties with 1/2 < p < 1, while for 0 < p < 1/2 the Lp penalty exhibits properties similar to those of the L1/2 penalty. This suggests that the L1/2 regularization scheme can be accepted as the best, and therefore the representative, of all the Lp (0 < p < 1) regularization schemes.
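As a point of reference, the L1/2 regularization scheme is commonly written in the following form; the symbols A, y, β and λ are illustrative assumptions and may differ from the paper's notation.

\[
  \min_{\beta \in \mathbb{R}^{n}} \; \| y - A\beta \|_2^2 \;+\; \lambda \sum_{i=1}^{n} |\beta_i|^{1/2},
\]
where A is the sampling (design) matrix, y the observed data, and \(\lambda > 0\) the regularization parameter. The penalty \(\sum_i |\beta_i|^{1/2}\) is the \(L_{1/2}\) quasi-norm raised to the power 1/2, and replacing the exponent 1/2 by p gives the general Lp (0 < p < 1) penalty discussed above.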