Abstract: One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) over the last two decades has been the development of text representation techniques that address the so-called curse of dimensionality, a problem which plagues NLP in general because the feature set for learning starts as a function of the size of the language in question, typically upwards of hundreds of thousands of terms. As such, much of the research and development in NLP over this period has gone into finding and optimizing solutions to this problem, effectively into feature selection for NLP. This paper traces the development of these techniques, which leverage a variety of statistical methods resting on linguistic theories advanced in the middle of the last century, chiefly the distributional hypothesis: words that occur in similar contexts tend to have similar meanings. In this survey we examine some of the most popular of these techniques from both a mathematical and a data-structure perspective, from Latent Semantic Analysis and Vector Space Models to their more modern variants, typically referred to as word embeddings. In reviewing algorithms such as Word2Vec, GloVe, ELMo and BERT, we explore the idea of semantic spaces more generally, beyond their applicability to NLP.
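As a concrete illustration of the LSA-style construction this survey describes, the sketch below (a toy example, not taken from the paper; the terms and counts are invented) builds a small term-document matrix, applies a truncated SVD, and compares terms by cosine similarity in the resulting low-dimensional semantic space:

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
terms = ["cat", "dog", "pet", "stock", "market"]
X = np.array([
    [2, 1, 0, 0],   # cat
    [1, 2, 0, 0],   # dog
    [2, 2, 1, 0],   # pet
    [0, 0, 2, 3],   # stock
    [0, 0, 3, 2],   # market
], dtype=float)

# Truncated SVD: keep the k largest singular values to obtain a
# low-dimensional "semantic space" for the terms.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
term_vectors = U[:, :k] * s[:k]   # each row is a k-dim term embedding

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Per the distributional hypothesis, terms appearing in similar
# contexts end up close together in the reduced space.
print(cosine(term_vectors[0], term_vectors[1]))  # cat vs dog: high
print(cosine(term_vectors[0], term_vectors[3]))  # cat vs stock: low
```

The dimensionality reduction is the point: five terms over four documents collapse into a 2-dimensional space in which context-sharing terms cluster, which is the mechanism the survey traces from LSA through to modern embeddings.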
Abstract: The goal of zero-shot recognition is to classify instances of classes never seen during training, which requires building a bridge between seen and unseen classes through a semantic embedding space. Semantic embedding space learning therefore plays an important role in zero-shot recognition. In existing work, the semantic embedding space is mainly spanned by user-defined attribute vectors. However, the discriminative information contained in user-defined attribute vectors is limited. In this paper, we propose to automatically learn an extra latent attribute space to produce a more generalized and discriminative semantic embedding space. To prevent the bias problem, both the user-defined attribute vectors and the latent attribute space are optimized by adversarial learning with auto-encoders. We also propose to reconstruct semantic patterns produced by explanatory graphs, which makes the semantic embedding space more sensitive to useful semantic information and less sensitive to useless information. The proposed method is evaluated on the AwA2 and CUB datasets, and the results show that it achieves superior performance.
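To make the role of the semantic embedding space concrete, here is a minimal zero-shot sketch using only user-defined attribute vectors and a simple ridge-regression embedding. This is a deliberately simplified baseline on synthetic data, not the adversarial latent-attribute method the paper proposes; the class names, attribute dimensions, and feature generator are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical user-defined attribute vectors for each class.
attrs = {
    "horse": np.array([1.0, 0.0, 1.0]),
    "tiger": np.array([0.0, 1.0, 0.0]),
    "zebra": np.array([1.0, 1.0, 1.0]),   # unseen at training time
}
seen = ["horse", "tiger"]

# Synthetic "image features": a noisy linear function of the attributes.
A_true = rng.normal(size=(3, 8))          # hidden attribute->feature map
def sample(cls, n=50):
    return attrs[cls] @ A_true + 0.05 * rng.normal(size=(n, 8))

X = np.vstack([sample(c) for c in seen])
Y = np.vstack([np.tile(attrs[c], (50, 1)) for c in seen])

# Ridge regression from features into the attribute (semantic) space,
# trained on seen classes only.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ Y)

# Zero-shot prediction: embed a feature vector into attribute space and
# pick the nearest class attribute vector, unseen classes included.
def predict(x):
    a = x @ W
    return min(attrs, key=lambda c: np.linalg.norm(a - attrs[c]))

print(predict(sample("zebra", n=1)[0]))  # classified without zebra training data
```

The attribute vectors are the bridge: "zebra" is never observed during training, yet its position in the shared semantic space lets the nearest-neighbor rule recover it. The paper's contribution addresses exactly the weakness visible here, namely that hand-defined attributes carry limited discriminative information.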
Funding: supported by the National Natural Science Foundation of China (No. 60773061), the Natural Science Foundation of Jiangsu Province of China (No. BK2008381), the National High-Tech Research and Development (863) Program of China (No. 2009AA01Z138), and the National Natural Science Foundation of China (No. 70771043).
Abstract: With the rapid development of Web 2.0, more and more people share their opinions about products online, producing a large volume of product review data. However, it is difficult to compare products directly by their ratings, because many ratings are based on different scales or are missing altogether. This paper addresses the following question: given textual reviews, how can we automatically determine the semantic orientations of reviewers and then rank different items? Because ratings are absent from many reviews, it is difficult to collect sufficient rating data for certain categories of products (e.g., movies), but easier to find rating data in a different yet related category (e.g., books). We refer to this problem as transfer rating, and aim to train a better ranking model for items in the category of interest with the help of rating data from a related category. Specifically, we developed a ranking-oriented method called TRate for determining semantic orientations and ranking items, and formulated it as a regularized algorithm that transfers rating knowledge by bridging the two related categories via a shared latent semantic space. Tests on the Epinion dataset verified its effectiveness.
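The transfer idea can be sketched crudely: learn word-level orientation weights from rated reviews in a source category, then reuse them, as a stand-in for the shared latent semantic space, to rank items in a target category that has no ratings. In this toy example the vocabulary, reviews, and item names are all invented, and TRate's actual regularized formulation is not reproduced; only the source-to-target flow is shown:

```python
import numpy as np

# Source category (e.g. books): reviews WITH ratings.
# Target category (e.g. movies): reviews WITHOUT ratings, to be ranked.
vocab = ["great", "boring", "classic", "awful", "fine"]
V = {w: i for i, w in enumerate(vocab)}

def bow(text):
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in V:
            v[V[w]] += 1
    return v

src_reviews = [("great classic", 5), ("boring awful", 1),
               ("fine", 3), ("great fine", 4), ("awful", 1)]
X = np.vstack([bow(t) for t, _ in src_reviews])
y = np.array([r for _, r in src_reviews], dtype=float)

# Ridge regression: the learned word weights act as a (very crude)
# stand-in for the shared semantic space bridging the two categories.
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(len(vocab)), X.T @ y)

# Rank target items by the mean predicted orientation of their reviews.
tgt = {"movieA": ["great classic", "great"], "movieB": ["boring", "awful fine"]}
scores = {m: np.mean([bow(t) @ w for t in texts]) for m, texts in tgt.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # movieA ranks above movieB
```

Because the words carry orientation learned from book ratings, movie reviews can be scored and ranked with no movie ratings at all, which is the essence of the transfer-rating setting the abstract describes.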