Funding: Supported by the National Natural Science Foundation of China (No. 10071058).
Abstract: For weighted sums of the form $\sum_{j=1}^{k_n} a_{nj} X_{nj}$, where $\{a_{nj},\ 1 \le j \le k_n,\ n \ge 1\}$ is an array of real constants with $k_n \uparrow \infty$ and $\{X_{nj},\ 1 \le j \le k_n,\ n \ge 1\}$ is an array of rowwise independent, zero-mean random elements in a real separable Banach space of type $p$, we establish an $L_r$ convergence theorem and a general weak law of large numbers, respectively. Conversely, we characterize Banach spaces of type $p$ in terms of convergence in $r$-th mean and convergence in probability for such weighted sums.
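For context (this is the standard definition, not restated in the abstract): a real separable Banach space $B$ is of (Rademacher) type $p$, $1 \le p \le 2$, if there exists a constant $C$ such that for every finite sequence $x_1, \dots, x_n \in B$ and independent Rademacher variables $\varepsilon_1, \dots, \varepsilon_n$,
\[
E\Bigl\| \sum_{i=1}^{n} \varepsilon_i x_i \Bigr\|^{p} \;\le\; C \sum_{i=1}^{n} \| x_i \|^{p}.
\]
The $L_r$ convergence and weak-law results, and their converse characterizations, are stated relative to this geometric property; the precise moment conditions on $\{X_{nj}\}$ and growth conditions on $\{a_{nj}\}$ are given in the full text rather than in the abstract.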
Funding: Supported by the Mexican Government (SNI, SIP-IPN, COFAA-IPN, and PIFI-IPN), CONACYT, and the Japanese Government.
Abstract: We present and analyze an unsupervised method for Word Sense Disambiguation (WSD). Our work is based on the method presented by McCarthy et al. in 2004 for finding the predominant sense of each word in the entire corpus. Their maximization algorithm lets weighted terms (similar words) from a distributional thesaurus accumulate a score for each sense of an ambiguous word; the sense with the highest score is chosen based on votes from a weighted list of terms related to the ambiguous word. This list is obtained with the distributional similarity method proposed by Lin Dekang to build a thesaurus. In the method of McCarthy et al., every occurrence of the ambiguous word uses the same thesaurus, regardless of the context in which the ambiguous word occurs. Our method accounts for the context of a word when determining the sense of an ambiguous word by building the list of distributionally similar words from the syntactic context of the ambiguous word. We obtain a top accuracy of 77.54% versus 67.10% for the original method, tested on SemCor. We also analyze the effect of the number of weighted terms on the tasks of finding the Most Frequent Sense (MFS) and WSD, and experiment with several corpora for building the Word Space Model.
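As a rough illustration of the voting scheme described above (not the authors' code), the following Python sketch accumulates a score for each sense from a weighted list of thesaurus neighbours; the function name `rank_senses`, the `sense_sim` callback, and the toy similarity values are hypothetical stand-ins for the distributional and WordNet-style measures used in the paper. In the context-aware variant proposed here, the `neighbours` list would be built from the syntactic context of the specific occurrence rather than from a single corpus-wide thesaurus.

```python
from collections import defaultdict

def rank_senses(senses, neighbours, sense_sim):
    """Rank the senses of an ambiguous word by votes from its thesaurus neighbours.

    senses     : iterable of sense identifiers for the ambiguous word
    neighbours : list of (neighbour_word, thesaurus_weight) pairs
    sense_sim  : function (sense_id, neighbour_word) -> semantic similarity,
                 e.g. a WordNet-style score (hypothetical here)
    Returns a dict sense_id -> score; the argmax is the predicted sense.
    """
    scores = defaultdict(float)
    for neighbour, weight in neighbours:
        # Semantic affinity of every sense to this neighbour.
        affinities = {s: sense_sim(s, neighbour) for s in senses}
        total = sum(affinities.values())
        if total == 0.0:
            continue  # this neighbour offers no evidence for any sense
        for s in senses:
            # The neighbour's thesaurus weight is split among the senses in
            # proportion to how semantically close each sense is to it.
            scores[s] += weight * affinities[s] / total
    return dict(scores)


if __name__ == "__main__":
    # Toy example for the ambiguous word "bank": two senses, three neighbours.
    senses = ["bank#finance", "bank#river"]
    neighbours = [("money", 0.8), ("loan", 0.6), ("shore", 0.3)]

    # Hand-crafted similarities standing in for a WordNet-based measure.
    toy_sim = {
        ("bank#finance", "money"): 0.9, ("bank#river", "money"): 0.1,
        ("bank#finance", "loan"): 0.8,  ("bank#river", "loan"): 0.1,
        ("bank#finance", "shore"): 0.1, ("bank#river", "shore"): 0.9,
    }
    scores = rank_senses(senses, neighbours, lambda s, n: toy_sim[(s, n)])
    print(max(scores, key=scores.get), scores)  # -> bank#finance
```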