Abstract
In the recently proposed partial oblique projection (POP) learning, a function space is decomposed into two complementary subspaces so that functions belonging to one of them can be optimally estimated. This paper shows that when the decomposition is performed so that this subspace becomes as large as possible, a special form of learning, called SPOP learning, is obtained; correspondingly, an incremental learning scheme is realized whose result is exactly equal to that of batch learning on all data, including the novel data. The effectiveness of the method is illustrated by experimental results.
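The following is a minimal sketch, not the authors' SPOP algorithm: it only illustrates the kind of equivalence the abstract claims, namely that an incremental update can yield exactly the same estimate as batch learning over all data seen so far. The example uses ordinary least squares with accumulated sufficient statistics as the "memory" carried between updates; all names and shapes here are assumptions for illustration.

import numpy as np

def batch_solution(X, y):
    """Least-squares weights computed from the whole data set at once."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

class IncrementalLS:
    """Incremental least squares via accumulated sufficient statistics."""
    def __init__(self, dim):
        self.A = np.zeros((dim, dim))   # running X^T X
        self.b = np.zeros(dim)          # running X^T y

    def update(self, x, t):
        """Absorb one novel sample (x, t) without storing past data."""
        self.A += np.outer(x, x)
        self.b += t * x

    def weights(self):
        return np.linalg.solve(self.A, self.b)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + 0.1 * rng.normal(size=100)

model = IncrementalLS(dim=4)
for x, t in zip(X, y):
    model.update(x, t)

# The incremental result coincides, to numerical precision, with batch learning.
print(np.allclose(model.weights(), batch_solution(X, y)))

In this toy setting the equivalence holds because the sufficient statistics summarize the data without loss; the paper's contribution is obtaining an analogous exact equivalence within the POP framework by choosing the optimally estimable subspace to be as large as possible.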