Journal articles: 4 results found
1. Fractional-order Sparse Representation for Image Denoising (Cited by: 1)
Authors: Leilei Geng, Zexuan Ji, (+1 more author), Yunhao Yuan, Yilong Yin. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2018, Issue 2, pp. 555-563 (9 pages).
Sparse representation models have shown promising results for image denoising. However, conventional sparse representation-based models cannot obtain satisfactory estimates of the sparse coefficients and the dictionary. To address this weakness, we propose a novel fractional-order sparse representation (FSR) model. Specifically, we cluster the image patches into K groups and calculate the singular values for each clean/noisy patch pair in the wavelet domain. Uniform fractional-order parameters are then learned for each cluster, and a novel fractional-order sample space is constructed using the adaptive fractional-order parameters in the wavelet domain to obtain more accurate sparse coefficients and a more accurate dictionary for image denoising. Extensive experimental results show that the proposed model outperforms state-of-the-art sparse representation-based models and the block-matching and 3D filtering (BM3D) algorithm in terms of both denoising performance and computational efficiency.
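The abstract does not give the parameter-learning details, so the following sketch is only one plausible reading of the fractional-order idea: for each cluster, a single exponent is fitted so that the noisy patches' singular values, raised to that power, approximate the clean ones (the wavelet-transform step is omitted, and all function and variable names are illustrative, not the authors' implementation).

```python
# Minimal sketch: fit one fractional-order exponent per cluster of
# clean/noisy patch pairs, then rebuild a noisy patch with its singular
# values raised to that power.
import numpy as np

def fit_fractional_order(clean_patches, noisy_patches, eps=1e-8):
    """Least-squares fit of alpha so that s_noisy**alpha ~= s_clean (in log space)."""
    logs_noisy, logs_clean = [], []
    for c, n in zip(clean_patches, noisy_patches):
        s_c = np.linalg.svd(c, compute_uv=False)
        s_n = np.linalg.svd(n, compute_uv=False)
        logs_noisy.append(np.log(s_n + eps))
        logs_clean.append(np.log(s_c + eps))
    x = np.concatenate(logs_noisy)
    y = np.concatenate(logs_clean)
    return float(x @ y / (x @ x + eps))   # one uniform alpha for the cluster

def fractional_patch(noisy_patch, alpha):
    """Re-synthesise a patch with its singular values raised to the power alpha."""
    u, s, vt = np.linalg.svd(noisy_patch, full_matrices=False)
    return u @ np.diag(s ** alpha) @ vt

# Toy usage on random 8x8 patches.
rng = np.random.default_rng(0)
clean = [rng.standard_normal((8, 8)) for _ in range(16)]
noisy = [p + 0.1 * rng.standard_normal((8, 8)) for p in clean]
alpha = fit_fractional_order(clean, noisy)
restored = fractional_patch(noisy[0], alpha)
```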
Keywords: fractional-order; image denoising; multi-scale; sparse representation
2. Representation learning via an integrated autoencoder for unsupervised domain adaptation (Cited by: 1)
Authors: Yi ZHU, Xindong WU, (+2 more authors), Jipeng QIANG, Yunhao YUAN, Yun LI. Frontiers of Computer Science (SCIE, EI, CSCD), 2023, Issue 5, pp. 75-87 (13 pages).
The purpose of unsupervised domain adaptation is to use knowledge from a source domain, whose data distribution differs from that of the target domain, to promote the learning task in the target domain. The key bottleneck in unsupervised domain adaptation is how to obtain higher-level, more abstract feature representations between the source and target domains that can bridge the chasm of domain discrepancy. Recently, deep learning methods based on autoencoders have achieved sound performance in representation learning, and many dual or serial autoencoder-based methods take different characteristics of the data into consideration to improve the effectiveness of unsupervised domain adaptation. However, most existing autoencoder-based methods simply connect the features generated by different autoencoders in series, which poses challenges for discriminative representation learning and fails to find the real cross-domain features. To address this problem, we propose a novel representation learning method based on integrated autoencoders for unsupervised domain adaptation, called IAUDA. To capture the inter- and inner-domain features of the raw data, two different autoencoders, a marginalized autoencoder with maximum mean discrepancy (mAE) and a convolutional autoencoder (CAE), are proposed to learn different feature representations. After higher-level features are obtained by these two autoencoders, a sparse autoencoder is introduced to compact these inter- and inner-domain representations. In addition, a whitening layer is embedded before the mAE to reduce redundant features within a local area. Experimental results demonstrate the effectiveness of the proposed method compared with several state-of-the-art baseline methods.
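As a concrete illustration of the domain-alignment term mentioned in the abstract, the sketch below computes a squared maximum mean discrepancy (MMD) between source and target feature batches; the Gaussian-kernel formulation and the bandwidth value are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the MMD term an mAE-style loss could add to pull
# source and target feature distributions together.
import numpy as np

def gaussian_kernel(a, b, sigma):
    """Pairwise Gaussian kernel matrix between rows of a and rows of b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(source_feats, target_feats, sigma=1.0):
    """Squared MMD between two sets of feature vectors (rows = samples)."""
    k_ss = gaussian_kernel(source_feats, source_feats, sigma).mean()
    k_tt = gaussian_kernel(target_feats, target_feats, sigma).mean()
    k_st = gaussian_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st

# Toy usage: two feature clouds with shifted means give a positive MMD.
rng = np.random.default_rng(0)
src = rng.standard_normal((100, 16))
tgt = rng.standard_normal((100, 16)) + 0.5
print(mmd2(src, tgt))
```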
Keywords: unsupervised domain adaptation; representation learning; marginalized autoencoder; convolutional autoencoder; sparse autoencoder
3. Unsupervised statistical text simplification using pre-trained language modeling for initialization
Authors: Jipeng QIANG, Feng ZHANG, (+3 more authors), Yun LI, Yunhao YUAN, Yi ZHU, Xindong WU. Frontiers of Computer Science (SCIE, EI, CSCD), 2023, Issue 1, pp. 81-90 (10 pages).
Unsupervised text simplification has attracted much attention due to the scarcity of high-quality parallel text simplification corpora. Recently, an unsupervised statistical text simplification method based on a phrase-based machine translation system (UnsupPBMT) achieved good performance; it initializes the phrase tables using similar words obtained by word embedding modeling. Since word embedding modeling only considers the relevance between words, the phrase tables in UnsupPBMT contain many dissimilar words. In this paper, we propose an unsupervised statistical text simplification method that uses the pre-trained language model BERT for initialization. Specifically, we use BERT as a general linguistic knowledge base for predicting similar words. Experimental results show that our method outperforms state-of-the-art unsupervised text simplification methods on three benchmarks and even outperforms some supervised baselines.
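A minimal sketch of how BERT can be queried for context-aware similar words of the kind the abstract says are used to initialize the phrase tables; the model name, the top_k value, and the helper function are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: use a masked language model to propose similar words
# for a target word in context.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def bert_substitutes(sentence, target, top_k=10):
    """Mask `target` in `sentence` and return BERT's top predictions with scores."""
    masked = sentence.replace(target, fill_mask.tokenizer.mask_token, 1)
    preds = fill_mask(masked, top_k=top_k)
    return [(p["token_str"], p["score"]) for p in preds]

# Toy usage: candidate replacements for "purchased".
print(bert_substitutes("She purchased a new car last week.", "purchased"))
```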
Keywords: text simplification; pre-trained language modeling; BERT; word embeddings
4. Lexical simplification via single-word generation
Authors: Jipeng QIANG, Yang LI, (+2 more authors), Yun LI, Yunhao YUAN, Yi ZHU. Frontiers of Computer Science (SCIE, EI, CSCD), 2023, Issue 6, pp. 163-165 (3 pages).
1 Introduction: Lexical simplification (LS) aims to simplify a sentence by replacing complex words with simpler words without changing the meaning of the sentence, which can facilitate text comprehension for non-native speakers and children. Traditional LS methods use linguistic databases (e.g., WordNet) [1] or word embedding models [2] to extract synonyms or highly similar words for the complex word, and then rank them by their appropriateness in context. Recently, BERT-based LS methods [3,4] entirely or partially mask the complex word in the original sentence and then feed the sentence into the pre-trained language model BERT [5] to obtain the top-probability tokens corresponding to the masked word as substitute candidates. These methods have made remarkable progress in generating substitutes by making full use of the contextual information around the complex word, which effectively alleviates the shortcomings of traditional methods.
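The candidate-generation step described above can be sketched as follows: the original sentence and a copy with the complex word masked are fed to BERT as a sentence pair, and the highest-probability tokens at the mask position are taken as substitute candidates. The model name, the pairing strategy, and the top_k value are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch of BERT-based substitute-candidate generation for
# lexical simplification.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def substitute_candidates(sentence, complex_word, top_k=10):
    masked = sentence.replace(complex_word, tokenizer.mask_token, 1)
    # Feed the original and the masked sentence as a pair so the model sees
    # the complex word's context while predicting its replacement.
    inputs = tokenizer(sentence, masked, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        logits = model(**inputs).logits
    top_ids = logits[0, mask_pos].topk(top_k).indices
    return tokenizer.convert_ids_to_tokens(top_ids.tolist())

# Toy usage: simpler candidates for "scrutinize".
print(substitute_candidates("The committee will scrutinize the proposal.", "scrutinize"))
```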
Keywords: token; utilize; speakers