Funding: National Natural Science Foundation of China (No. 41971279); Fundamental Research Funds for the Central Universities (No. B200202012).
Abstract: The Low-Rank and Sparse Representation (LRSR) method has gained popularity in Hyperspectral Image (HSI) processing. However, existing LRSR models rarely exploit spectral-spatial information for HSI classification. In this paper, we propose a novel Low-Rank and Sparse Representation with Adaptive Neighborhood Regularization (LRSR-ANR) method for HSI classification. In the proposed method, we first represent the hyperspectral data via LRSR, since it combines sparsity and low-rankness to preserve global and local data structures simultaneously. The LRSR is optimized with a mixed Gauss-Seidel and Jacobian Alternating Direction Method of Multipliers (M-ADMM), which converges faster than standard ADMM. Then, to incorporate spatial information, an ANR scheme is designed that combines Euclidean and cosine distance metrics to reduce the influence of mixed pixels within a neighborhood. Lastly, the predicted labels are determined by jointly considering the homogeneous pixels under the minimum-reconstruction-error classification rule. Experimental results on three popular hyperspectral images demonstrate that the proposed method outperforms related methods in terms of classification accuracy and generalization performance.
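To make the low-rank-plus-sparse idea concrete, the following is a minimal ADMM-style sketch of decomposing a data matrix into a low-rank part and a sparse part via singular value thresholding and soft thresholding. This is only an illustrative standard splitting, not the paper's exact LRSR-ANR model or its M-ADMM solver; all function names and parameter choices here are the sketch's own assumptions.

```python
import numpy as np

def soft_threshold(A, tau):
    # Elementwise soft-thresholding: proximal operator of the l1 norm.
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def svt(A, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def low_rank_sparse_decompose(X, lam=None, mu=1.0, n_iter=300):
    """ADMM-style splitting of X into a low-rank part L and a sparse part S,
    i.e. X ~ L + S. Illustrative sketch only; lam and mu are heuristic."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(X.shape))  # common heuristic weight
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    Y = np.zeros_like(X)  # scaled dual variable
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)              # low-rank update
        S = soft_threshold(X - L + Y / mu, lam / mu)   # sparse update
        Y = Y + mu * (X - L - S)                       # dual ascent step
    return L, S
```

On synthetic data built as a rank-3 matrix plus a few large sparse corruptions, the residual `X - L - S` shrinks toward zero as the dual variable accumulates, which is the behavior the alternating scheme relies on.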
Funding: This work was partially supported by the National Natural Science Foundation of China (No. 61876205), the Ministry of Education Humanities and Social Science Project (No. 19YJAZH128), the Science and Technology Plan Project of Guangzhou (No. 201804010433), and the Bidding Project of the Laboratory of Language Engineering and Computing (No. LEC2017ZBKT001).
Abstract: Negative emotion classification refers to the automatic classification of the negative emotions expressed in social-network texts. Most existing methods are based on deep learning models and face challenges such as complex structures and too many hyperparameters. To meet these challenges, in this paper we propose a method for negative emotion classification that combines a Robustly Optimized BERT Pretraining Approach (RoBERTa) with p-norm Broad Learning (p-BL). This paper makes three main contributions. First, we fine-tune RoBERTa to adapt it to the task of negative emotion classification, and then employ the fine-tuned RoBERTa to extract features from the original texts and generate sentence vectors. Second, we adopt p-BL to construct a classifier and use it to predict the negative emotions of texts. Compared with deep learning models, p-BL has advantages such as a simple three-layer structure and fewer trainable parameters. Moreover, it can suppress the adverse effects of outliers and noise in the data by flexibly changing the value of p. Third, we conduct extensive experiments on public datasets, and the experimental results show that the proposed method outperforms the baseline methods on the tested datasets.
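A minimal sketch of the broad-learning classifier stage may help. The code below implements a plain Broad Learning System: random feature nodes, nonlinear enhancement nodes, and output weights fit by ridge regression (the p = 2 special case; the paper's p-norm variant generalizes this penalty to gain robustness to outliers). The sentence vectors that RoBERTa would produce are stood in for by plain numeric feature vectors, and all names here are the sketch's own.

```python
import numpy as np

def broad_learning_fit(X, y, n_feature=40, n_enhance=60, reg=1e-2, seed=0):
    """Fit a minimal Broad Learning System on features X and integer labels y.
    Output weights are solved in closed form by ridge regression (p = 2 case)."""
    rng = np.random.default_rng(seed)
    Wf = rng.standard_normal((X.shape[1], n_feature))
    Z = X @ Wf                        # mapped feature nodes (linear)
    We = rng.standard_normal((n_feature, n_enhance))
    H = np.tanh(Z @ We)               # enhancement nodes (nonlinear)
    A = np.hstack([Z, H])             # broad expansion layer
    Y = np.eye(int(y.max()) + 1)[y]   # one-hot targets
    # Regularized least squares: W = (A^T A + reg*I)^{-1} A^T Y
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W

def broad_learning_predict(model, X):
    Wf, We, W = model
    Z = X @ Wf
    A = np.hstack([Z, np.tanh(Z @ We)])
    return (A @ W).argmax(axis=1)
```

Because only the output weights are trained, and in closed form, the classifier stays a three-layer structure with far fewer trainable parameters than a deep network, which matches the simplicity argument made for p-BL above.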