Funding: The authors would like to thank all anonymous reviewers for their suggestions and feedback. This work was supported by the National Natural Science Foundation of China (Nos. 61379052 and 61379103), the National Key Research and Development Program (2016YFB1000101), the Natural Science Foundation for Distinguished Young Scholars of Hunan Province (Grant No. 14JJ1026), and the Specialized Research Fund for the Doctoral Program of Higher Education (Grant No. 20124307110015).
Abstract: Natural language document processing includes retrieval, sentiment analysis, theme extraction, and other tasks. Classical methods for these tasks are based on probability models, semantic models, and machine learning networks. The probability model essentially loses semantic information, which reduces processing accuracy. Machine learning approaches include supervised, unsupervised, and semi-supervised learning; semantic models and supervised learning both require labeled corpora. A reliably labeled corpus must be produced manually, which is costly and time-consuming because annotators have to read and label each document. The continuous bag-of-words (CBOW) model efficiently learns high-quality distributed vector representations and captures a large number of precise syntactic and semantic word relationships. It can easily be extended to learn paragraph vectors, but the resulting vectors are not precise. To address these problems, this paper develops a new model for learning paragraph vectors by combining the CBOW model with convolutional neural networks (CNNs) in a new deep learning model. Experimental results show that paragraph vectors generated by the new model outperform those generated by the CBOW model in semantic relatedness and accuracy.
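The sketch below illustrates one way such a combination could be wired together; it is not the authors' implementation, and the toy corpus, dimensions, and pooling choice are all assumptions. CBOW word vectors from gensim are stacked per paragraph and fed to a small 1-D CNN whose max-over-time pooling yields a fixed-size paragraph vector.

```python
# A minimal sketch (not the authors' implementation): CBOW word vectors
# from gensim feed a small 1-D CNN whose max-over-time pooling produces
# a fixed-size paragraph vector. Corpus and dimensions are toy values.
import numpy as np
import torch
import torch.nn as nn
from gensim.models import Word2Vec

corpus = [["the", "cat", "sat", "down"], ["dogs", "chase", "cats"]]
w2v = Word2Vec(corpus, vector_size=100, sg=0, min_count=1, window=2)  # sg=0 -> CBOW

class ParagraphCNN(nn.Module):
    def __init__(self, dim=100, out_dim=64):
        super().__init__()
        self.conv = nn.Conv1d(dim, out_dim, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)   # max-over-time pooling

    def forward(self, x):                     # x: (batch, dim, seq_len)
        return self.pool(torch.relu(self.conv(x))).squeeze(-1)

words = corpus[0]
mat = torch.from_numpy(np.stack([w2v.wv[w] for w in words]).T).unsqueeze(0)
paragraph_vec = ParagraphCNN()(mat)           # (1, 64) paragraph vector
print(paragraph_vec.shape)
```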
Funding: This work is supported by the National Natural Science Foundation of China (61872231, 61701297).
Abstract: Neural Machine Translation (NMT) is an important technology for translation applications, yet there is still plenty of room for improvement. In NMT, traditional word vectors cannot distinguish the same word under different parts of speech (POS). To alleviate this problem, this paper proposes a new word vector training method based on POS features. Adding POS features to the word vector training process efficiently improves translation quality. We conducted extensive experiments to evaluate the method, and the results show that it improves the quality of English-to-Chinese translation.
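As a minimal sketch of the idea, not the paper's training code: one common way to inject POS features is to rewrite each token as word_POS before word2vec training, so that, for example, the noun and verb uses of "record" receive separate vectors. The tags below are hand-assigned for clarity; in practice a POS tagger would supply them.

```python
# A minimal sketch of the idea, not the paper's training code: tokens are
# rewritten as "word_POS" before word2vec training, so the noun and verb
# uses of "record" get separate vectors. Tags here are hand-assigned.
from gensim.models import Word2Vec

tagged_sentences = [
    ["they_PRP", "record_VB", "a_DT", "record_NN"],
    ["the_DT", "record_NN", "plays_VBZ", "well_RB"],
]
model = Word2Vec(tagged_sentences, vector_size=50, min_count=1, window=2)
# "record_VB" and "record_NN" are now distinct vocabulary entries:
print(sorted(model.wv.key_to_index))
```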
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 61672195 and 61872372, the Open Foundation of the State Key Laboratory of Cryptology under No. MMKFKT201617, and the National University of Defense Technology under Grant No. ZK19-38.
Abstract: The Internet of Things (IoT) is now widely deployed and brings great opportunities to change people's daily lives. To enable more effective human-computer interaction in IoT applications, the Question Answering (QA) systems embedded in IoT services must better understand natural language. Distributed word representations, which capture semantic and syntactic information, therefore play an increasingly important role in QA systems. However, learning high-quality distributed word vectors requires substantial storage and computing resources, so it cannot be done on resource-constrained IoT devices. Outsourcing the data and computation to cloud servers is a natural choice, but directly uploading private data to an untrusted cloud creates privacy risks. Realizing word vector learning over untrusted cloud servers without privacy leakage is therefore an urgent and challenging task. In this paper, we present a novel, efficient word vector learning scheme over encrypted data. We first design a series of arithmetic computation protocols and then use two non-colluding cloud servers to learn high-quality word vectors over encrypted data. The proposed scheme allows word vectors to be trained on remote cloud servers while protecting privacy. Security analysis and experiments on real data sets demonstrate that our scheme is more secure and efficient than existing privacy-preserving word vector learning schemes.
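As a toy illustration of the two-server setting (the paper's actual arithmetic protocols are not reproduced here): additive secret sharing lets each cloud server hold a random-looking share of a vector and still compute sums locally; the values and modulus below are invented for the example.

```python
# Toy illustration of the two-server setting (the paper's actual arithmetic
# protocols are not reproduced here): with additive secret sharing, each
# server holds a random-looking share, yet sums can be computed locally.
import numpy as np

P = 2**31 - 1  # public modulus; all arithmetic is mod P

def share(x):
    """Client splits x into two random shares, one per cloud server."""
    r = np.random.randint(0, P, size=x.shape, dtype=np.int64)
    return r, (x - r) % P

def reconstruct(a, b):
    """Only the client, holding both shares, can recover the value."""
    return (a + b) % P

v = np.array([3, 14, 15], dtype=np.int64)
w = np.array([9, 2, 6], dtype=np.int64)
va, vb = share(v)
wa, wb = share(w)
# Each server adds its own shares; neither ever sees v or w in the clear.
sum_a, sum_b = (va + wa) % P, (vb + wb) % P
assert np.array_equal(reconstruct(sum_a, sum_b), (v + w) % P)
print("secure addition verified")
```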
Funding: The Major Technologies R&D Special Program of Anhui, China (Grant No. 16030901060), the National Natural Science Foundation of China (Grant No. 61502010), the Natural Science Foundation of Anhui Province (Grant No. 1608085QF146), and the Natural Science Foundation of China (Grant No. 61806004).
Abstract: To acquire non-ferrous metals related news from the internet in different countries, we propose a cross-lingual recognition method for non-ferrous metals related news based on a CNN and a limited bilingual dictionary. First, given the scarcity of language resources related to non-ferrous metals, we use a limited bilingual dictionary and canonical correlation analysis (CCA) to learn cross-lingual word vectors that represent news in different languages uniformly. Then, to improve recognition, we use a variant of the CNN to learn recognition features and construct the recognition model. Experimental results show that the proposed method achieves better results.
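A rough sketch of dictionary-based CCA alignment follows, with random matrices standing in for the monolingual word vectors of the bilingual-dictionary translation pairs; the component count and dimensions are arbitrary assumptions.

```python
# Rough sketch of dictionary-based CCA alignment; random matrices stand in
# for the monolingual vectors of bilingual-dictionary translation pairs.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_pairs, dim = 200, 50
X_src = rng.normal(size=(n_pairs, dim))   # stand-in source-language vectors
X_tgt = rng.normal(size=(n_pairs, dim))   # stand-in target-language vectors

cca = CCA(n_components=30)
cca.fit(X_src, X_tgt)                     # fit on the dictionary pairs
src_shared, tgt_shared = cca.transform(X_src, X_tgt)
print(src_shared.shape, tgt_shared.shape) # both projected to (200, 30)
```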
Funding: Supported by the National Social Science Fund of China (Youth Program): "A Study of Acceptability of Chinese Government Public Signs in the New Era and the Countermeasures of the English Translation" (No. 13CYY010); the Subject Construction and Management Project of Zhejiang Gongshang University: "Research on the Organic Integration Path of Constructing Ideological and Political Training and Design of Mixed Teaching Platform during Epidemic Period" (No. XKJS2020007); and the Ministry of Education Industry-University Cooperative Education Program: "Research on the Construction of Cross-border Logistics Marketing Bilingual Course Integration" (No. 202102494002).
Abstract: Retelling extraction is an important branch of Natural Language Processing (NLP), and high-quality retelling resources are very helpful for improving the performance of machine translation. However, traditional methods based on bilingual parallel corpora often ignore the document background during retelling acquisition and application. To solve this problem, we introduce topic model information into the translation model and propose a topic-based statistical machine translation method to improve translation performance. In this method, Probabilistic Latent Semantic Analysis (PLSA) obtains the co-occurrence relationship between words and documents through hybrid matrix decomposition. We then design a decoder that simplifies the decoding process. Experiments show that the proposed method effectively improves translation accuracy.
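To make the decomposition step concrete: PLSA's EM objective is closely related to NMF with a Kullback-Leibler loss, so the sketch below uses scikit-learn's NMF as a stand-in for the word-document co-occurrence factorization. The paper's hybrid decomposition may differ, and the count matrix is a toy.

```python
# Sketch of the word-document decomposition step. PLSA's EM objective is
# closely related to NMF with a Kullback-Leibler loss, so sklearn's NMF is
# used as a stand-in; the paper's hybrid decomposition may differ.
import numpy as np
from sklearn.decomposition import NMF

# rows = documents, columns = words (co-occurrence counts)
counts = np.array([[4, 2, 0, 1],
                   [3, 0, 1, 0],
                   [0, 1, 5, 4],
                   [0, 0, 4, 3]], dtype=float)

nmf = NMF(n_components=2, beta_loss="kullback-leibler",
          solver="mu", init="nndsvda", max_iter=500)
doc_topic = nmf.fit_transform(counts)   # document-topic weights
topic_word = nmf.components_            # topic-word weights
# row-normalizing gives the probabilistic, PLSA-style reading
print(doc_topic / doc_topic.sum(axis=1, keepdims=True))
```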
Funding: Supported by the Sichuan Science and Technology Program (2021YFQ0003).
Abstract: With the development of Internet technology, the explosive growth of online information has made filtering effective information difficult. Finding a high-accuracy text classification model has become a critical problem for text filtering, especially for Chinese texts. This paper uses manually calibrated comment data from the Douban movie website for research. First, a text filtering model based on the BP neural network is built. Second, based on the Term Frequency-Inverse Document Frequency (TF-IDF) vector space model and the doc2vec method, a text word frequency vector and a text semantic vector are obtained, and the word frequency vector is linearly reduced by Principal Component Analysis (PCA). Third, the reduced word frequency vector, the semantic vector, and the text value degree are combined to construct a text synthesis vector. Experiments show that the model combining these three components reaches the highest accuracy of 84.67%.
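A minimal sketch of the vector construction described above, with toy texts, assumed dimensions, and invented value-degree scores: PCA-reduced TF-IDF features are concatenated with doc2vec vectors and a scalar value degree per text.

```python
# Minimal sketch of the feature construction (toy texts, assumed dimensions,
# invented value-degree scores): PCA-reduced TF-IDF features concatenated
# with doc2vec vectors and a scalar value degree per text.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

texts = ["great movie and cast", "boring plot weak acting",
         "wonderful direction and score", "terrible pacing and script"]

tfidf = TfidfVectorizer().fit_transform(texts).toarray()
freq_vec = PCA(n_components=3).fit_transform(tfidf)      # word frequency part

tagged = [TaggedDocument(t.split(), [i]) for i, t in enumerate(texts)]
d2v = Doc2Vec(tagged, vector_size=8, min_count=1, epochs=50)
semantic_vec = np.vstack([d2v.dv[i] for i in range(len(texts))])

value_degree = np.array([[0.9], [0.2], [0.8], [0.1]])    # assumed scores
synthesis_vec = np.hstack([freq_vec, semantic_vec, value_degree])
print(synthesis_vec.shape)  # (4, 12): input for the BP-network classifier
```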
Funding: Supported by the National Key R&D Program of China (No. 2018YFE0205502) and the National Natural Science Foundation of China (No. 61672108).
Abstract: The internet is an abundant source of news every day, so efficient algorithms to extract keywords from text are important for obtaining information quickly. However, the precision and recall of mature keyword extraction algorithms need improvement. TextRank, derived from the PageRank algorithm, uses word graphs to spread the weight of words, but its keyword weight propagation focuses only on word frequency. To improve the algorithm's performance, we propose Semantic Clustering TextRank (SCTR), a semantic clustering news keyword extraction algorithm based on TextRank. First, word vectors generated by the Bidirectional Encoder Representations from Transformers (BERT) model are clustered with k-means to capture semantic clusters. Then, the clustering results are used to construct a TextRank weight transfer probability matrix. Finally, the word graph is computed iteratively and keywords are extracted. Experiments on a Chinese news corpus show that the SCTR algorithm achieves greater precision, recall, and F1 value than the traditional TextRank and Term Frequency-Inverse Document Frequency (TF-IDF) algorithms.
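The sketch below condenses the SCTR idea, with random vectors standing in for BERT embeddings and an assumed same-cluster boost standing in for the paper's exact transfer-probability construction: words in the same semantic cluster pass each other more weight during the TextRank iteration.

```python
# Condensed sketch of the SCTR idea: random vectors stand in for BERT
# embeddings, and the same-cluster boost is an assumed stand-in for the
# paper's transfer-probability construction.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
words = ["economy", "market", "trade", "football", "match", "goal"]
vecs = rng.normal(size=(len(words), 16))      # stand-in for BERT vectors

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs)

n = len(words)
M = np.ones((n, n))                           # base word-graph weight
M[labels[:, None] == labels[None, :]] = 2.0   # boost same-cluster edges
np.fill_diagonal(M, 0.0)
M /= M.sum(axis=0, keepdims=True)             # column-stochastic matrix

score, d = np.full(n, 1.0 / n), 0.85          # standard TextRank iteration
for _ in range(50):
    score = (1 - d) / n + d * M @ score
print(sorted(zip(score, words), reverse=True)[:3])  # top keywords
```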
Funding: Supported by the National Natural Science Foundation of China (No. 61972251).
Abstract: Background: Type III secreted effectors (T3SEs) are indispensable proteins in the growth and reproduction of Gram-negative bacteria. In particular, the pathogenesis of Gram-negative bacteria depends on T3SEs: by injecting T3SEs into a host cell, the host cell's immunity can be destroyed. The high diversity of T3SE sequences and the lack of defined secretion signals make them difficult to identify and predict, and the study of the pathological systems associated with T3SEs remains a hot topic in bioinformatics. Some computational tools have been developed to meet the growing demand for recognizing T3SEs and studying type III secretion systems (T3SS). Although these tools can assist biological experiments in certain procedures, there is still room for improvement, even for the current best model, because existing methods adopt hand-designed features and traditional machine learning methods. Methods: In this study, we propose a powerful deep learning predictor called WEDeepT3. Our work consists of three key steps. First, we train word embedding vectors for protein sequences on a large-scale amino acid sequence database. Second, we combine the word vectors with traditional features extracted from protein sequences, such as PSSM, to construct a more comprehensive feature representation. Finally, we construct a deep neural network model to predict type III secreted effectors. Results: The feature representation of WEDeepT3 consists of both word embedding and position-specific features. Working together with convolutional neural networks, the new model achieves performance superior to state-of-the-art methods, demonstrating the effectiveness of the new feature representation and the powerful learning ability of deep models. Conclusion: WEDeepT3 exploits both the semantic information of k-mer fragments and the evolutionary information of protein sequences to accurately differentiate between T3SEs and non-T3SEs. WEDeepT3 is available at bcmi.sjtu.edu.cn/~yangyang/WEDeepT3.html.
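A schematic sketch of the architecture the abstract outlines follows; the vocabulary size, embedding width, filter count, and pooling choice are all assumptions rather than the published hyperparameters. Per-residue k-mer embeddings are concatenated with the 20-column PSSM, convolved, and pooled into a binary T3SE / non-T3SE prediction.

```python
# Schematic sketch of the described architecture; vocabulary size, widths,
# and the pooling choice are assumptions, not the published hyperparameters.
import torch
import torch.nn as nn

class WEDeepT3Sketch(nn.Module):
    def __init__(self, vocab=8000, emb=100, pssm_dim=20):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb + pssm_dim, 64, kernel_size=5, padding=2)
        self.head = nn.Linear(64, 2)               # T3SE vs. non-T3SE

    def forward(self, kmer_ids, pssm):             # (B, L), (B, L, 20)
        x = torch.cat([self.emb(kmer_ids), pssm], dim=-1)
        h = torch.relu(self.conv(x.transpose(1, 2)))   # (B, 64, L)
        return self.head(h.max(dim=-1).values)        # (B, 2) logits

model = WEDeepT3Sketch()
logits = model(torch.randint(0, 8000, (2, 300)), torch.randn(2, 300, 20))
print(logits.shape)  # torch.Size([2, 2])
```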
Funding: Supported by the Chinese National Science Foundation (#61763007); the higher education research project of the National Ethnic Affairs Commission, "Research and Practice on the Training Mode of Applied Innovative Software Talents Based on Collaborative Education and Innovation" (17056); and the Innovation Team Project of Xiangsihu Youth Scholars of Guangxi University for Nationalities.
Abstract: Sentiment analysis is one of the most popular fields in NLP, and with the development of computer software and hardware, its applications are increasingly extensive. Supervised corpora have a positive effect on model training, but they are prohibitively expensive to produce manually. This paper proposes a deep learning sentiment analysis model based on transfer learning. It represents the sentiment and semantics of words and improves a Vietnamese sentiment analysis model by using an English corpus. Semantic vectors are generated with the open-source tool Word2Vec, and sentiment vectors are built with an attention-based LSTM to obtain sentiment word vectors. Through parameter sharing, the model is pre-trained on the English corpus. Finally, the sentiment of the text is classified by a stacked Bi-LSTM with attention, taking the sentiment word vectors as input. Experiments show that the model effectively improves the performance of Vietnamese sentiment analysis when language resources are scarce.
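A bare-bones sketch of the final classification stage follows (hidden sizes and the additive attention form are assumptions): a stacked Bi-LSTM over sentiment word vectors with attention pooling over timesteps, as the abstract describes.

```python
# Bare-bones sketch of the final classifier (hidden sizes and the additive
# attention form are assumptions): a stacked Bi-LSTM over sentiment word
# vectors with attention pooling over timesteps.
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, in_dim=100, hid=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hid, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hid, 1)
        self.out = nn.Linear(2 * hid, n_classes)

    def forward(self, x):                        # x: (B, L, in_dim)
        h, _ = self.lstm(x)                      # (B, L, 2*hid)
        w = torch.softmax(self.attn(h), dim=1)   # weight per timestep
        ctx = (w * h).sum(dim=1)                 # attention-pooled summary
        return self.out(ctx)

logits = BiLSTMAttention()(torch.randn(4, 30, 100))
print(logits.shape)  # torch.Size([4, 2])
```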
Funding: Supported by the CCF-Huawei Database System Innovation Research Plan under Grant No. CCF-HuaweiDBIR2020004A; the National Natural Science Foundation of China under Grant Nos. 61772091, 61802035, 61962006, and 61962038; the Sichuan Science and Technology Program under Grant Nos. 2021JDJQ0021 and 2020YJ0481; and the Digital Media Art Key Laboratory of Sichuan Province, Sichuan Conservatory of Music, Chengdu, China, under Grant No. 21DMAKL02.
Abstract: Although popular database systems perform well on query optimization, they still produce poor query execution plans when join operations across multiple tables are complex. Bad execution planning usually results from bad cardinality estimation. The cardinality estimation models in traditional databases cannot provide high-quality estimates because they cannot effectively capture correlations across multiple tables. Recently, state-of-the-art learning-based cardinality estimators have been reported to work better than traditional empirical methods; they typically use deep neural networks to model the relationships and correlations among tables. In this paper, we propose a vertical scanning convolutional neural network (VSCNN) that captures the relationships between words in the word vector to generate a feature map. The proposed learning-based cardinality estimator converts Structured Query Language (SQL) queries from sentences to word vectors; table names are encoded with one-hot encoding and samples as bitmaps, which are then merged to obtain sufficient semantic information from data samples. In particular, the feature map obtained by VSCNN contains semantic information about the tables, joins, and predicates of SQL queries. To further improve the accuracy of cardinality estimation, we propose a negative sampling method for training the word vectors by gradient descent from the base table and compressing them into a bitmap. Extensive experiments show that the q-error of the proposed VSCNN-based model is reduced by at least 14.6% compared with the estimators in traditional databases.
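To illustrate the featurization step (the table list, sample rows, and bitmap width below are invented): table names become a one-hot vector and each predicate becomes a bitmap over sampled rows; the merged vector is the kind of input the vertical-scanning CNN would consume.

```python
# Illustrative sketch of the query featurization (table list, sample rows,
# and bitmap width are invented): table names become a one-hot vector and a
# predicate becomes a bitmap over sampled rows.
import numpy as np

TABLES = ["users", "orders", "items"]            # known base tables

def one_hot_tables(query_tables):
    v = np.zeros(len(TABLES))
    for t in query_tables:
        v[TABLES.index(t)] = 1.0
    return v

def sample_bitmap(predicate, sample_rows):
    # one bit per sampled row: does the row satisfy the predicate?
    return np.array([1.0 if predicate(r) else 0.0 for r in sample_rows])

rows = [{"age": a} for a in (15, 22, 37, 41, 18, 65, 30, 12)]
features = np.concatenate([
    one_hot_tables(["users", "orders"]),          # tables in the query
    sample_bitmap(lambda r: r["age"] > 25, rows)  # predicate bitmap
])
print(features)  # 3 + 8 = 11-dimensional input vector
```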
Funding: Supported by the Opening Project of the Shanghai Key Laboratory of Integrated Administration Technologies for Information Security (AGK2019004), the Songjiang District Science and Technology Research Project (19SJKJGG83), and the National Natural Science Foundation of China (61802251).
Abstract: The Product Sensitive Online Latent Dirichlet Allocation (PSOLDA) model proposed in this paper mainly uses the sentiment polarity of topic words in review texts to improve the accuracy of topic evolution. First, we use Latent Dirichlet Allocation (LDA) to obtain the distribution of topic words in the current time window. Second, word2vec word vectors are used as auxiliary information to determine sentiment polarity and obtain the sentiment polarity distribution of the current topic. Finally, the changes in the sentiment polarity of topics between the previous and next time windows are mapped to sentiment factors, which control the distribution of topic words in the next time window. Experimental results show that the PSOLDA model decreases the probability distribution by 0.1601, while Online Twitter LDA increases it by only 0.0699. The proposed topic evolution method, which integrates the sentiment information of topic words, outperforms the traditional model.
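A toy sketch of the polarity step (seed words and vectors are invented): each topic word is scored by the gap between its average word2vec cosine similarity to positive seeds and to negative seeds, a score that could then feed the topic's sentiment factor.

```python
# Toy sketch of the polarity step (seed words and vectors are invented):
# a topic word's polarity is the gap between its average cosine similarity
# to positive seeds and to negative seeds, computed on word2vec vectors.
import numpy as np

rng = np.random.default_rng(2)
vecs = {w: rng.normal(size=50) for w in
        ["good", "excellent", "bad", "broken", "battery"]}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def polarity(word, pos=("good", "excellent"), neg=("bad", "broken")):
    p = np.mean([cos(vecs[word], vecs[s]) for s in pos])
    n = np.mean([cos(vecs[word], vecs[s]) for s in neg])
    return p - n      # > 0 leans positive, < 0 leans negative

print(polarity("battery"))  # feeds the topic's sentiment factor
```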