Abstract
In this paper, we present a simple yet powerful ensemble for robust texture classification. The proposed method uses a single type of feature descriptor, i.e., the scale-invariant feature transform (SIFT), and inherits the spirit of the spatial pyramid matching (SPM) model. By partitioning the original texture images in a flexible way, our approach produces sufficient informative local features and thereby forms a reliable feature pond or trains a new class-specific dictionary. To take full advantage of this feature pond, we develop a group-collaborative representation-based strategy (GCRS) for the final classification, which is solved by the well-known group lasso. We go beyond this and propose a locality-constrained method, named locality-constrained GCRS (LC-GCRS), to speed up the computation. Experimental results on three public texture datasets demonstrate that the proposed approach achieves competitive results and even outperforms state-of-the-art methods. In particular, most methods do not work well when only a few samples of each category are available for training, but our approach still achieves very high classification accuracy, e.g., an average accuracy of 92.1% on the Brodatz dataset when only one image is used for training, significantly higher than any other method.
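To make the classification step concrete, the sketch below illustrates the general idea of group-lasso-based representation classification that the abstract refers to: a query feature is coded over a dictionary whose atoms are grouped by class, and the label is given by the class sub-dictionary with the smallest reconstruction residual. This is a minimal, illustrative implementation (proximal gradient with block soft-thresholding), not the authors' code; the dictionary, group layout, and regularization weight are assumptions chosen for the toy example.

```python
# Minimal sketch of group-lasso-based representation classification.
# Solve  min_x 0.5*||y - D x||_2^2 + lam * sum_g ||x_g||_2
# by proximal gradient (ISTA with block soft-thresholding), then assign
# the label of the class whose sub-dictionary yields the smallest residual.
import numpy as np

def group_soft_threshold(v, t):
    """Block soft-thresholding: shrink the whole group toward zero."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1.0 - t / norm) * v

def group_lasso_code(D, y, groups, lam=0.1, n_iter=200):
    """Code y over D with one coefficient group per class (proximal gradient)."""
    x = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = x - step * (D.T @ (D @ x - y))      # gradient step
        for g in groups:                        # proximal step, one group at a time
            x[g] = group_soft_threshold(z[g], step * lam)
    return x

def classify(D, y, groups):
    """Label = class whose sub-dictionary reconstructs y with the least error."""
    x = group_lasso_code(D, y, groups)
    residuals = [np.linalg.norm(y - D[:, g] @ x[g]) for g in groups]
    return int(np.argmin(residuals))

# Toy usage: 3 classes, 5 atoms per class, 64-dimensional features.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 15))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
groups = [np.arange(i * 5, (i + 1) * 5) for i in range(3)]
y = D[:, groups[1]] @ rng.standard_normal(5)    # query generated by class 1
print(classify(D, y, groups))                   # typically prints 1
```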