Journal Articles
2 articles found
1. Characterization of Type p Banach Spaces by the Weak Law of Large Numbers
Author: Gan Shi-xin, School of Mathematics and Statistics, Wuhan University, Wuhan 430072, Hubei, China. Wuhan University Journal of Natural Sciences (EI, CAS), 2002, No. 1, pp. 14-19 (6 pages).
For weighted sums of the form $\sum_{j=1}^{k_n} a_{nj} X_{nj}$, where $\{a_{nj}, 1 \le j \le k_n \uparrow \infty, n \ge 1\}$ is an array of real constants and $\{X_{nj}, 1 \le j \le k_n, n \ge 1\}$ is an array of rowwise independent, zero-mean random elements in a real separable Banach space of type $p$, we establish an $L_r$ convergence theorem and a general weak law of large numbers, respectively. Conversely, we characterize Banach spaces of type $p$ in terms of convergence in $r$-th mean and convergence in probability for such weighted sums.
Keywords: Banach space of type p; array of random elements; weighted sums; weak law of large numbers; {a_nj}-uniform integrability; L_r convergence; convergence in probability
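As background (not taken from the paper itself), one standard, equivalent formulation of the type p property and the form of the weighted sums under study can be written out as follows; the paper's precise moment and {a_nj}-uniform-integrability hypotheses are not reproduced here, only the general shape of the results described in the abstract.

% Standard definition of a type p Banach space and the weighted row sums studied above.
% The exact hypotheses of the paper's theorems are NOT restated here.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
A separable Banach space $B$ has \emph{type} $p$, $1 \le p \le 2$, if there is a
constant $C$ such that for every finite sequence $X_1,\dots,X_n$ of independent,
zero-mean random elements of $B$ with $E\|X_j\|^p < \infty$,
\[
  E\Bigl\|\sum_{j=1}^{n} X_j\Bigr\|^{p} \;\le\; C \sum_{j=1}^{n} E\|X_j\|^{p}.
\]
The paper studies the weighted row sums
\[
  S_n \;=\; \sum_{j=1}^{k_n} a_{nj} X_{nj},
\]
giving conditions under which $E\|S_n\|^{r} \to 0$ ($L_r$ convergence) and
$S_n \to 0$ in probability, together with converses that characterize type $p$.
\end{document}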
2. Unsupervised WSD by Finding the Predominant Sense Using Context as a Dynamic Thesaurus (Cited by: 1)
Authors: Javier Tejada-Carcamo, Hiram Calvo, Alexander Gelbukh, Kazuo Hara. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2010, No. 5, pp. 1030-1039 (10 pages).
We present and analyze an unsupervised method for Word Sense Disambiguation (WSD). Our work is based on the method presented by McCarthy et al. in 2004 for finding the predominant sense of each word in the entire corpus. Their maximization algorithm lets weighted terms (similar words) from a distributional thesaurus accumulate a score for each sense of an ambiguous word, i.e., the sense with the highest score is chosen based on votes from a weighted list of terms related to the ambiguous word. This list is obtained with the distributional similarity method proposed by Lin Dekang for building a thesaurus. In the method of McCarthy et al., every occurrence of the ambiguous word uses the same thesaurus, regardless of the context in which the word occurs. Our method takes the context into account when determining the sense of an ambiguous word by building the list of distributionally similar words from the syntactic context of that word. We obtain a top precision of 77.54% versus 67.10% for the original method when tested on SemCor. We also analyze the effect of the number of weighted terms on the tasks of finding the Most Frequent Sense (MFS) and WSD, and experiment with several corpora for building the Word Space Model.
Keywords: word sense disambiguation; word space model; semantic similarity; text corpus; thesaurus
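As a rough illustration of the voting scheme the abstract describes (not the authors' actual implementation), the following minimal Python sketch accumulates sense scores from a weighted list of distributionally similar neighbors and picks the highest-scoring sense. The neighbor list, the sense labels, and the semantic_similarity table are hypothetical stand-ins for the distributional thesaurus (built from the whole corpus in McCarthy et al., or from the syntactic context of the occurrence in the proposed variant) and for the WordNet-based similarity measure used in the paper.

# Minimal sketch of the predominant-sense voting scheme described in the abstract.
# All data below (neighbors, senses, similarity scores) are made-up stand-ins.

from typing import Dict, List, Tuple


def predominant_sense(
    senses: List[str],
    neighbors: List[Tuple[str, float]],                  # (similar word, distributional weight)
    semantic_similarity: Dict[Tuple[str, str], float],   # (sense, word) -> similarity score
) -> Tuple[str, Dict[str, float]]:
    """Return the sense with the highest accumulated, weighted vote, plus all scores."""
    scores = {sense: 0.0 for sense in senses}
    for word, weight in neighbors:
        for sense in senses:
            # Each neighbor votes for every sense, scaled by how strongly it is
            # related to the target word (weight) and to that sense (similarity).
            scores[sense] += weight * semantic_similarity.get((sense, word), 0.0)
    best = max(scores, key=scores.get)
    return best, scores


if __name__ == "__main__":
    # Toy example for the ambiguous word "bank" (data entirely hypothetical).
    senses = ["bank.n.01_financial", "bank.n.02_river"]
    neighbors = [("loan", 0.9), ("money", 0.8), ("shore", 0.3)]
    sim = {
        ("bank.n.01_financial", "loan"): 0.7,
        ("bank.n.01_financial", "money"): 0.6,
        ("bank.n.02_river", "shore"): 0.8,
    }
    best, scores = predominant_sense(senses, neighbors, sim)
    print(best, scores)

In the contextual variant the abstract proposes, the neighbors list would be rebuilt per occurrence from the syntactic context of the target word, rather than taken from a single corpus-wide thesaurus.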