Deep learning algorithms based on neural networks have made remarkable achievements in machine fault diagnosis, but the noise mixed into measured signals harms the prediction accuracy of the networks. Existing denoising methods in neural networks, such as using complex network architectures and introducing sparse techniques, suffer from the difficulty of estimating hyperparameters and the lack of physical interpretability. To address this issue, this paper proposes a novel interpretable denoising layer based on reproducing kernel Hilbert space (RKHS) as the first layer of standard neural networks, with the aim of combining the advantages of traditional signal processing technology (physical interpretation) and the network modeling strategy (parameter adaptation). By investigating the influencing mechanism of the parameters on the regularization procedure in RKHS, the key parameter that dynamically controls the signal smoothness at low computational cost is selected as the only trainable parameter of the proposed layer. Besides, the forward and backward propagation algorithms of the designed layer are formulated to ensure that the selected parameter can be updated automatically together with the other parameters in the neural network. Moreover, exponential and piecewise functions are introduced into the weight updating process to keep the trainable weight within a reasonable range and avoid ill-conditioning. Experimental studies verify the effectiveness and compatibility of the proposed layer design method in intelligent fault diagnosis of machinery in noisy environments.
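The abstract does not give the layer's exact formulation, so the following is only a minimal sketch of the general idea under stated assumptions: a kernel ridge (RKHS-regularized) smoother applied to each input signal, with a single trainable parameter that controls smoothness and is kept positive through an exponential map. The Gaussian kernel over sample indices, the clamping range, and the name RKHSDenoiseLayer are illustrative choices, not the paper's design.

```python
import torch
import torch.nn as nn

class RKHSDenoiseLayer(nn.Module):
    """Hypothetical RKHS smoothing layer: kernel ridge regression of each input
    signal on its time axis, with one trainable log-regularization weight."""

    def __init__(self, length, bandwidth=5.0):
        super().__init__()
        t = torch.arange(length, dtype=torch.float32).unsqueeze(1)
        # Fixed Gaussian kernel over sample indices (an assumed choice).
        self.register_buffer("K", torch.exp(-torch.cdist(t, t) ** 2 / (2 * bandwidth ** 2)))
        # Single trainable parameter; exp() keeps the effective weight positive,
        # echoing the exponential reparameterization mentioned in the abstract.
        self.log_lam = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):                                   # x: (batch, length)
        lam = torch.exp(self.log_lam).clamp(1e-4, 1e4)      # crude guard against ill-conditioning
        n = self.K.shape[0]
        A = self.K + lam * n * torch.eye(n, device=x.device)
        alpha = torch.linalg.solve(A, x.T)                  # (length, batch)
        return (self.K @ alpha).T                           # smoothed signals, same shape as x
```

Automatic differentiation through torch.linalg.solve supplies the backward pass for log_lam here, standing in for the hand-derived forward and backward propagation rules described in the abstract.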
Consider the design problem for estimation and extrapolation in approximately linear regression models with possible misspecification. The design space is a discrete set consisting of finitely many points, and the model bias comes from a reproducing kernel Hilbert space. Two different design criteria are proposed by applying the minimax approach for estimating the parameters of the regression response and extrapolating the regression response to points outside of the design space. A simulated annealing algorithm is applied to construct the minimax designs. These minimax designs are compared with the classical D-optimal designs and all-bias extrapolation designs. Numerical results indicate that the simulated annealing algorithm is feasible and the minimax designs are robust against bias caused by model misspecification.
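As an illustration of the computational side only (the actual minimax criteria are defined in the paper and not reproduced here), the sketch below shows how a simulated annealing search over an n-point design drawn from a finite candidate set might look; design_loss is a hypothetical placeholder for whichever minimax criterion is being minimized.

```python
import math
import random

def anneal_design(candidates, n_points, design_loss, n_iter=5000, t0=1.0):
    """Simulated annealing over n-point designs drawn from a finite candidate set.
    `design_loss` is a user-supplied criterion (placeholder for a minimax criterion)."""
    current = random.sample(candidates, n_points)
    best, best_val = list(current), design_loss(current)
    cur_val = best_val
    for k in range(n_iter):
        temp = t0 / (1 + k)                                   # simple cooling schedule (an assumed choice)
        proposal = list(current)
        proposal[random.randrange(n_points)] = random.choice(candidates)  # move one support point
        val = design_loss(proposal)
        if val < cur_val or random.random() < math.exp(-(val - cur_val) / max(temp, 1e-12)):
            current, cur_val = proposal, val
            if cur_val < best_val:
                best, best_val = list(current), cur_val
    return best, best_val
```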
The spherical approximation between two nested reproducing kernel Hilbert spaces generated from different smooth kernels is investigated. It is shown that the functions of a space can be approximated by those of the subspace with better smoothness. Furthermore, an upper bound on the approximation error is given.
The estimation of high-dimensional covariance matrices is an interesting and important research topic for many empirical time series problems such as asset allocation. To solve this dimension dilemma, a factor structure has often been taken into account. This paper proposes a dynamic factor structure whose factor loadings are generated in a reproducing kernel Hilbert space (RKHS) to capture the dynamic feature of the covariance matrix. A simulation study is carried out to demonstrate its performance. Four different conditional variance models are considered for checking the robustness of our method and handling the conditional heteroscedasticity in the empirical study. By exploring the performance among eight introduced model candidates and the market baseline, the empirical study from 2001 to 2017 shows that portfolio allocation based on this dynamic factor structure can significantly reduce the variance, i.e., the risk, of the portfolio and thus outperform the market baseline and allocations based on the traditional factor model.
We consider a gradient iteration algorithm for prediction in functional linear regression under the framework of reproducing kernel Hilbert spaces. In the algorithm, we use an early stopping technique, instead of the classical Tikhonov regularization, to prevent the iteration from converging to an overfitting function. Under mild conditions, we obtain upper bounds for the excess prediction risk that essentially match the known minimax lower bounds. Almost sure convergence is also established for the proposed algorithm.
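A minimal sketch of the flavor of such a gradient iteration, assuming a plain Gaussian kernel, scalar responses, and a held-out set for the stopping rule; the paper's functional-data setting, step sizes, and stopping criterion are not reproduced.

```python
import numpy as np

def gaussian_gram(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_gd_early_stop(X, y, X_val, y_val, sigma=1.0, step=0.1, max_iter=500):
    """Gradient iteration on the kernel expansion f = K @ alpha for squared loss,
    stopped when the validation risk starts to increase (early stopping in place
    of Tikhonov regularization)."""
    n = len(y)
    K, K_val = gaussian_gram(X, X, sigma), gaussian_gram(X_val, X, sigma)
    alpha = np.zeros(n)
    best_alpha, best_risk = alpha.copy(), np.inf
    for _ in range(max_iter):
        grad = K @ (K @ alpha - y) / n          # gradient of (1/2n)||K alpha - y||^2
        alpha -= step * grad
        risk = np.mean((K_val @ alpha - y_val) ** 2)
        if risk < best_risk:
            best_alpha, best_risk = alpha.copy(), risk
        else:
            break                                # stop at the first increase (a simple rule)
    return best_alpha, best_risk
```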
Complementary-label learning (CLL) aims at finding a classifier from samples with complementary labels. Such data are considered to contain less information than ordinary-label samples. The transition matrix between the true label and the complementary label, together with several loss functions, has been developed to handle this problem. In this paper, we show that CLL can be transformed into ordinary classification under some mild conditions, which indicates that complementary labels can supply enough information in most cases. As an example, an extensive misclassification error analysis is performed for the kernel ridge regression (KRR) method applied to multiple complementary-label learning (MCLL), which demonstrates its superior performance compared to existing approaches.
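The transformation used in the paper is not spelled out in the abstract; the sketch below only illustrates one natural reduction under an assumed uniform-complementary-label model: each complementary label is converted into a soft target that spreads mass over the remaining classes, and one-vs-all kernel ridge regression is fitted to those targets.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_from_complementary(X, comp_labels, n_classes, lam=1e-2, gamma=1.0):
    """Fit one-vs-all kernel ridge regression to soft targets built from
    complementary labels (assumed uniform over the non-complementary classes)."""
    n = len(comp_labels)
    Y = np.full((n, n_classes), 1.0 / (n_classes - 1))
    Y[np.arange(n), comp_labels] = 0.0                    # the complementary class is ruled out
    K = rbf(X, X, gamma)
    A = np.linalg.solve(K + lam * n * np.eye(n), Y)       # ridge coefficients, one column per class
    return lambda X_new: rbf(X_new, X, gamma) @ A         # class scores; predict with argmax

# usage sketch: scores = krr_from_complementary(X_tr, cbar, 5)(X_te); y_hat = scores.argmax(1)
```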
We provide a kernel-regularized method to give theoretical solutions for the Neumann boundary value problem on the unit ball. We define the reproducing kernel Hilbert space with the spherical harmonics associated with an inner product defined on both the unit ball and the unit sphere, construct the kernel-regularized learning algorithm from the viewpoint of semi-supervised learning, and derive upper bounds for the learning rates. The theoretical analysis shows that the learning algorithm has better uniform convergence as the number of samples increases. The research can be regarded as an application of kernel-regularized semi-supervised learning.
In this paper, an efficient multi-step scheme is presented based on reproducing kernel Hilbert space (RKHS) theory for solving ordinary stiff differential systems. The solution methodology depends on reproducing kernel functions to obtain analytic solutions in a uniform form as a rapidly convergent series in the posed Sobolev space. Using the Gram-Schmidt orthogonalization process, complete orthogonal basis functions are obtained on a compact domain, yielding a Fourier-type series expansion with the help of the reproducing property of the kernel. Consequently, by applying the standard RKHS method to each subinterval, approximate solutions that converge uniformly to the exact solutions are obtained. For this purpose, several numerical examples are tested to show the proposed algorithm's superiority, simplicity, and efficiency. The obtained results indicate that the multi-step RKHS method is suitable for solving linear and nonlinear stiff systems over an extensive duration and gives highly accurate outcomes.
By combining wavelet decomposition with the kernel method, a practical approach to constructing universal multiscale wavelet kernels in reproducing kernel Hilbert space (RKHS) is discussed, and an identification scheme using a wavelet support vector machine (WSVM) estimator is proposed for nonlinear dynamic systems. The good approximating properties of the wavelet kernel function enhance the generalization ability of the proposed method, and the comparison of numerical experimental results between the novel approach and some existing methods is encouraging.
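A sketch of one widely used translation-invariant wavelet kernel (a product of Morlet-type mother wavelets), plugged into scikit-learn's SVR as a callable kernel; the paper's specific multiscale wavelet kernel and identification setup may differ, and the toy system below is only an assumed example.

```python
import numpy as np
from sklearn.svm import SVR

def wavelet_kernel(X, Z, a=1.0):
    """K(x, z) = prod_i h((x_i - z_i)/a) with the Morlet-type mother wavelet
    h(t) = cos(1.75 t) * exp(-t^2 / 2)."""
    diff = (X[:, None, :] - Z[None, :, :]) / a
    return np.prod(np.cos(1.75 * diff) * np.exp(-diff ** 2 / 2), axis=-1)

# Wavelet support vector regression for one-step-ahead prediction of a toy
# nonlinear system, using lagged outputs as regressors (a generic setup).
rng = np.random.default_rng(0)
y = np.zeros(300)
for k in range(2, 300):
    y[k] = 0.6 * np.sin(y[k - 1]) + 0.3 * y[k - 2] + 0.05 * rng.standard_normal()
X = np.column_stack([y[1:-1], y[:-2]])          # regressors: y[k-1], y[k-2]
t = y[2:]                                        # target: y[k]
model = SVR(kernel=wavelet_kernel, C=10.0, epsilon=0.01).fit(X[:200], t[:200])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[200:]) - t[200:]) ** 2)))
```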
In this paper, we apply a new algorithm based on the reproducing kernel method to give approximate solutions to some functional-differential equations. The numerical results demonstrate the accuracy of the proposed algorithm.
In this paper, the weak pre-orthogonal adaptive Fourier decomposition (W-POAFD) method is applied to solve fractional boundary value problems (FBVPs) in the reproducing kernel Hilbert spaces (RKHSs) W_0^4[0,1] and W^1[0,1]. The process of the W-POAFD is as follows: (i) choose a dictionary and implement the pre-orthogonalization of all the dictionary elements; (ii) select points in [0,1] by the weak maximal selection principle to determine the corresponding orthonormalized dictionary elements iteratively; (iii) express the analytical solution as a linear combination of these determined dictionary elements. Convergence properties of the numerical solutions are also discussed. Numerical experiments are carried out to illustrate the accuracy and efficiency of W-POAFD for solving FBVPs.
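To make steps (i)-(iii) concrete in a purely finite-dimensional toy setting (discretized functions as vectors, an assumed dictionary, and the ordinary Euclidean inner product in place of the RKHS one), a greedy pre-orthogonalized pursuit can be sketched as follows; it is not the paper's algorithm, only an analogue of the selection mechanism.

```python
import numpy as np

def poafd_sketch(f, dictionary, n_terms=10, tol=1e-10):
    """Greedy pre-orthogonalized pursuit: at each step, orthonormalize every
    remaining dictionary element against the chosen ones and pick the one whose
    inner product with the target is maximal (maximal selection principle)."""
    chosen, coeffs = [], []                       # orthonormal system B_1, B_2, ... and coefficients
    for _ in range(n_terms):
        best_val, best_b = 0.0, None
        for e in dictionary:
            b = e - sum((e @ q) * q for q in chosen)   # pre-orthogonalization
            norm = np.linalg.norm(b)
            if norm < tol:
                continue
            b = b / norm
            val = abs(f @ b)
            if val > best_val:
                best_val, best_b = val, b
        if best_b is None:
            break
        chosen.append(best_b)
        coeffs.append(f @ best_b)
    approx = sum(c * b for c, b in zip(coeffs, chosen))
    return approx, coeffs
```

The "weak" variant would accept any dictionary element whose score is within a fixed fraction of the current maximum, rather than insisting on the exact maximizer.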
This study introduces a pre-orthogonal adaptive Fourier decomposition (POAFD) to obtain approximations and numerical solutions to the fractional Laplacian initial value problem and the extension problem of Caffarelli and Silvestre (generalized Poisson equation). As a first step, the method expands the initial data function into a sparse series of the fundamental solutions with fast convergence, and, as a second step, makes use of the semigroup or the reproducing kernel property of each of the expanding entries. Experiments show the effectiveness and efficiency of the proposed series solutions.
In the realm of large-scale machine learning, it is crucial to explore methods for reducing computational complexity and memory demands while maintaining generalization performance. Additionally, since the collected data may contain sensitive information, it is also of great significance to study privacy-preserving machine learning algorithms. This paper focuses on the performance of the differentially private stochastic gradient descent (SGD) algorithm based on random features. First, the algorithm maps the original data into a low-dimensional space, thereby avoiding the large storage requirement of the traditional kernel method on large-scale data. Subsequently, the algorithm iteratively optimizes the parameters using the stochastic gradient descent approach. Lastly, the output perturbation mechanism is employed to introduce random noise, ensuring algorithmic privacy. We prove that the proposed algorithm satisfies differential privacy while achieving fast convergence rates under some mild conditions.
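A minimal sketch of the three ingredients named in the abstract, namely random Fourier features for a Gaussian kernel, plain SGD on the squared loss, and Gaussian output perturbation, under an assumed noise scale; the paper's privacy accounting and step-size schedule are not reproduced.

```python
import numpy as np

def dp_sgd_random_features(X, y, D=200, sigma_k=1.0, step=0.05, epochs=5,
                           noise_scale=0.1, seed=0):
    """Random-feature regression trained by SGD, with Gaussian noise added to the
    output weights (output perturbation). `noise_scale` stands in for the scale
    dictated by a proper (epsilon, delta) differential-privacy calibration."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma_k, size=(d, D))      # random Fourier frequencies
    b = rng.uniform(0, 2 * np.pi, size=D)
    phi = lambda A: np.sqrt(2.0 / D) * np.cos(A @ W + b)  # feature map approximating a Gaussian kernel
    theta = np.zeros(D)
    for _ in range(epochs):
        for i in rng.permutation(n):
            z = phi(X[i:i + 1])[0]
            theta -= step * (z @ theta - y[i]) * z        # SGD step on the squared loss
    theta += rng.normal(scale=noise_scale, size=D)        # output perturbation for privacy
    return lambda X_new: phi(X_new) @ theta
```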
The conditional kernel correlation is proposed to measure the relationship between two random variables under covariates for multivariate data. Relying on the framework of reproducing kernel Hilbert spaces, we give the definitions of the conditional kernel covariance and the conditional kernel correlation. We also provide their respective sample estimators and give the asymptotic properties, which help us construct a conditional independence test. According to the numerical results, the proposed test is more effective than the existing one under the considered scenarios. A real data set is further analyzed to illustrate the efficacy of the proposed method.
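The conditional estimator itself is defined in the paper; as a simpler point of reference only, the sketch below computes the unconditional kernel (HSIC-type) covariance and its normalized correlation from Gaussian Gram matrices. The conditional version additionally weights these quantities by a kernel on the covariates.

```python
import numpy as np

def gaussian_gram(X, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_correlation(X, Y, sigma=1.0):
    """Unconditional kernel covariance (empirical HSIC) and its normalized
    correlation; NOT the paper's conditional estimator."""
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n                   # centering matrix
    Kx = H @ gaussian_gram(X, sigma) @ H
    Ky = H @ gaussian_gram(Y, sigma) @ H
    cov = np.trace(Kx @ Ky) / (n - 1) ** 2
    corr = np.trace(Kx @ Ky) / np.sqrt(np.trace(Kx @ Kx) * np.trace(Ky @ Ky))
    return cov, corr
```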
We give a survey on the Berezin transform and its applications in operator theory. The focus is on the Bergman space of the unit disk and the Fock space of the complex plane. The Berezin transform is most effective and most successful in the study of Hankel and Toeplitz operators.
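For readers unfamiliar with the object being surveyed, the basic definitions are the following standard facts (not specific to this survey): for a bounded operator T on a reproducing kernel Hilbert space with normalized reproducing kernels k_z, the Berezin transform is the function z mapped to the inner product of T k_z with k_z; on the Bergman space of the unit disk it takes the explicit integral form below.

```latex
% Berezin transform of a bounded operator T on a reproducing kernel Hilbert
% space with kernel K and normalized reproducing kernels k_z:
\widetilde{T}(z) \;=\; \langle T k_z,\, k_z\rangle,
\qquad k_z \;=\; \frac{K(\cdot,z)}{\sqrt{K(z,z)}}.
% On the Bergman space of the unit disk (K(z,w) = (1 - z\bar{w})^{-2}, dA the
% normalized area measure), the Berezin transform of a function f reads
(Bf)(z) \;=\; \int_{\mathbb{D}} f(w)\,
      \frac{(1-|z|^2)^2}{|1-z\bar{w}|^4}\, dA(w), \qquad z\in\mathbb{D}.
```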
This paper studies the model-robust design problem for general models with an unknown bias or contamination and correlated errors. The true response function is assumed to be from a reproducing kernel Hilbert space, and the errors are fitted by the qth-order moving average process MA(q), especially the MA(1) and MA(2) errors. In both situations, design criteria are derived in terms of the average expected quadratic loss of the least squares estimation by using a minimax method. A case is studied and the orthogonality of the criteria is proved for this special response. The robustness of the design criteria is discussed through several numerical examples.
This paper considers online classification learning algorithms for regularized classification schemes with a generalized gradient. A novel capacity-independent approach is presented. It verifies strong convergence and yields satisfactory convergence rates for polynomially decaying step sizes. Compared with gradient schemes, this algorithm needs fewer additional assumptions on the loss function and derives a stronger result with respect to the choice of step sizes and regularization parameters.
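To fix ideas, a generic online regularized kernel classification update of the type analyzed here can be sketched as below, assuming a hinge-loss subgradient and polynomially decaying step sizes; the exact loss, step sizes, and regularization schedule in the paper may differ.

```python
import numpy as np

def online_kernel_classifier(stream, kernel, lam=0.01, theta=0.5):
    """Online regularized kernel classification: f_{t+1} = (1 - eta_t*lam) f_t
    - eta_t * g_t * K(x_t, .), where g_t is a (sub)gradient of the hinge loss
    and eta_t = t^(-theta) decays polynomially."""
    xs, coeffs = [], []
    for t, (x, y) in enumerate(stream, start=1):
        fx = sum(c * kernel(xi, x) for c, xi in zip(coeffs, xs))
        eta = t ** (-theta)
        g = -y if y * fx < 1 else 0.0                     # hinge-loss subgradient at y*f(x)
        coeffs = [(1 - eta * lam) * c for c in coeffs]    # shrinkage from the regularization term
        if g != 0.0:
            xs.append(x)
            coeffs.append(-eta * g)
    return lambda x_new: sum(c * kernel(xi, x_new) for c, xi in zip(coeffs, xs))
```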
This paper is concerned with the error analysis of Multicategory Support Vector Machine (MSVM) classifiers based on reproducing kernel Hilbert spaces. We choose the polynomial kernel as the Mercer kernel and give the error estimate with de la Vallée Poussin means. We also introduce the standard estimation of the sample error and derive the explicit learning rate.
This paper presents learning rates for the least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given, which yields sharp learning rates. The rates depend on the dimension of the polynomial space and on the polynomial reproducing kernel Hilbert space measured by covering numbers. Meanwhile, we also establish a direct approximation theorem by Bernstein-Durrmeyer operators in $L_{\rho_X}^2$ with a Borel probability measure.
The kernel function method in support vector machines (SVM) is an excellent tool for nonlinear classification. How to design a kernel function is difficult for an SVM nonlinear classification problem, even for the polynomial kernel function. In this paper, we propose a new kind of polynomial kernel function, called the semi-tensor product kernel (STP-kernel), for SVM nonlinear classification problems using the semi-tensor product of matrices (STP) theory. We show the existence of the STP-kernel function and verify that it is indeed a polynomial kernel. In addition, we show the existence of the reproducing kernel Hilbert space (RKHS) associated with the STP-kernel function. Compared to existing methods, it is much easier to construct the nonlinear feature mapping for an SVM nonlinear classification problem via an STP operator.
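The semi-tensor product itself has a standard definition, sketched below; how it is turned into the paper's STP-kernel is not specified in the abstract, so only the building block is shown here and the kernel construction is left as described in the paper.

```python
import numpy as np
from math import gcd

def stp(A, B):
    """Semi-tensor product of matrices A (m x n) and B (p x q):
    with t = lcm(n, p), A stp B = (A kron I_{t/n}) @ (B kron I_{t/p})."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    n, p = A.shape[1], B.shape[0]
    t = n * p // gcd(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# When inner dimensions match, the STP reduces to the ordinary matrix product:
x = np.array([[1.0, 2.0, 3.0]])               # 1 x 3 row vector
z = np.array([[4.0], [5.0], [6.0]])           # 3 x 1 column vector
print(stp(x, z))                              # [[32.]] -- the ordinary inner product
# Mismatched dimensions still compose, which is what the STP construction exploits:
print(stp(x, np.array([[1.0], [2.0]])).shape) # (2, 3)
```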