Abstract: To meet the need to extract key information from tender documents, this paper proposes a machine-reading-comprehension approach to tender-text information extraction based on BERT (bidirectional encoder representations from transformers). The method recasts information extraction as reading comprehension: questions are generated from the content of the tender text, and spans of the text are then extracted as answers. A pretrained BERT model provides a robust language model that captures deeper contextual associations. Compared with traditional named entity recognition methods, the reading-comprehension approach handles both nested and non-nested entities well, and it exploits the prior semantic information carried by the questions to distinguish information with similar attributes. Experiments on tender documents downloaded from the China Government Procurement website achieve an overall EM (exact match) of 92.41% and an F1 of 95.03%, demonstrating that the proposed method is effective for tender-text information extraction.
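As a rough illustration of this reading-comprehension framing (not the authors' released code), the sketch below turns each target field into a question and lets a BERT QA head pick the answer span; the checkpoint and the example question are assumptions, and the QA head would need fine-tuning on annotated tender text before the spans are meaningful.

```python
import torch
from transformers import BertTokenizerFast, BertForQuestionAnswering

# Illustrative checkpoint; the QA head starts untrained and must be
# fine-tuned on question-answer pairs built from tender documents.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertForQuestionAnswering.from_pretrained("bert-base-chinese")

def extract_field(question: str, document: str) -> str:
    # Encode question + document as one sequence, truncating only the document.
    inputs = tokenizer(question, document, return_tensors="pt",
                       truncation="only_second", max_length=512)
    with torch.no_grad():
        out = model(**inputs)
    start = int(out.start_logits.argmax())
    end = int(out.end_logits.argmax())
    return tokenizer.decode(inputs["input_ids"][0][start:end + 1])

# e.g. extract_field("项目预算金额是多少？", tender_text)  # one generated question per field
```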
Funding: This study was supported by the National Key R&D Program of China (2017YFC1700303).
Abstract: Background: The medical records of traditional Chinese medicine (TCM) contain numerous synonymous terms with different descriptions, which hinders computer-aided data mining of TCM. However, there is a lack of models available to normalize synonymous TCM terms; constructing a synonymous term conversion (STC) model for this purpose is therefore necessary. Methods: Based on bidirectional encoder representations from transformers (BERT), four types of TCM STC models were designed, combining BERT with text classification, text sequence generation, named entity recognition, and text matching. The best STC model was selected on the basis of its performance in converting synonymous terms. Moreover, three inconsistency-based misjudgment inspection methods were proposed to find incorrect term conversions in the STC model's output: random neuron deactivation, output comparison of multiple isomorphic models, and output comparison of multiple heterogeneous models (OCMH). Results: The classification-based STC model outperformed the other STC models, achieving F1 scores of 0.91, 0.91, and 0.83 on the symptom, pattern, and treatment STC tasks, respectively. The OCMH method performed best in misjudgment inspection, with wrong-detection rates of 0.80, 0.84, and 0.90 on the term conversion results for symptoms, patterns, and treatments, respectively. Conclusion: The classification-based TCM STC model achieved superior performance in converting synonymous terms for symptoms, patterns, and treatments, and the OCMH-based misjudgment inspection method showed superior performance in identifying incorrect outputs.
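As a loose illustration of the classification framing and the OCMH check (not the authors' code), the sketch below treats every normalized term as a class and flags a conversion as a possible misjudgment when heterogeneous fine-tuned models disagree; the checkpoint handling and the three-term label set are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

STANDARD_TERMS = ["发热", "头痛", "咳嗽"]  # hypothetical normalized vocabulary

def load_stc_model(name: str):
    # Any sequence-classification checkpoint fine-tuned for STC would fit here.
    tok = AutoTokenizer.from_pretrained(name)
    mdl = AutoModelForSequenceClassification.from_pretrained(
        name, num_labels=len(STANDARD_TERMS))
    return tok, mdl

def convert(tok, mdl, raw_term: str) -> str:
    # Classification framing: the raw synonym is the input text, the
    # normalized term is the predicted class.
    inputs = tok(raw_term, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = mdl(**inputs).logits
    return STANDARD_TERMS[int(logits.argmax(-1))]

def ocmh_flag(raw_term: str, models) -> bool:
    # OCMH-style inspection: run heterogeneous fine-tuned models and flag
    # the conversion as a possible misjudgment when their outputs differ.
    outputs = {convert(tok, mdl, raw_term) for tok, mdl in models}
    return len(outputs) > 1  # True = inconsistent, route to manual review
```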
Abstract: The strong contextual understanding of the bidirectional encoder representations from transformers (BERT) model is first used to extract deep semantic features from data-law texts. A fine-grained feature extraction layer is then introduced, which uses an attention mechanism to focus on the parts of the text most relevant to data-law question answering. Finally, the model is trained and evaluated on a collected legal question-answering dataset. The results show that, compared with several traditional single models, the proposed model improves on key performance metrics including accuracy, precision, recall, and F1 score, indicating that the system can understand and answer complex data-law questions more effectively and provide higher-quality question-answering services for both data-law professionals and general users.
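A minimal sketch of one way to realize the described BERT-plus-attention design; the pooling form, hidden sizes, and classification head are my assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BertAttnQA(nn.Module):
    def __init__(self, num_answers: int, name: str = "bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        hidden = self.bert.config.hidden_size
        self.attn = nn.Linear(hidden, 1)          # token-level relevance scores
        self.classifier = nn.Linear(hidden, num_answers)

    def forward(self, input_ids, attention_mask):
        states = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        scores = self.attn(states).squeeze(-1)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)   # focus on question-relevant spans
        pooled = torch.einsum("bt,bth->bh", weights, states)
        return self.classifier(pooled)
```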
Abstract: The rapid expansion of online content and big data has created an urgent need for efficient summarization techniques that allow vast textual documents to be comprehended swiftly without compromising their original integrity. Current approaches to Extractive Text Summarization (ETS) model inter-sentence relationships, a task of paramount importance in producing coherent summaries. This study introduces a model that integrates Graph Attention Networks (GATs) with Bidirectional Encoder Representations from Transformers (BERT) and Latent Dirichlet Allocation (LDA), further enhanced by Term Frequency-Inverse Document Frequency (TF-IDF) values, to improve sentence selection by capturing comprehensive topical information. The approach constructs a graph whose nodes represent sentences, words, and topics, raising interconnectivity and enabling a more refined understanding of text structure. The model is extended from Single-Document Summarization to Multi-Document Summarization (MDS) and offers significant improvements over existing models such as THGS-GMM and Topic-GraphSum, as demonstrated by empirical evaluations on benchmark news datasets including Cable News Network (CNN)/Daily Mail (DM) and Multi-News. The results consistently demonstrate superior performance, showing the model's robustness in handling complex summarization tasks across single- and multi-document contexts. This research advances the integration of BERT and LDA within a GAT framework and highlights the model's capacity to manage global information and adapt to diverse summarization challenges.
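For illustration only, a sentence-scoring GAT over the heterogeneous node set might be sketched with PyTorch Geometric as follows; stacking BERT sentence vectors, word embeddings, and LDA topic vectors into one feature space, the TF-IDF edge construction, and all layer sizes are assumptions rather than the paper's exact design:

```python
import torch
from torch_geometric.nn import GATConv

class SentenceScorer(torch.nn.Module):
    def __init__(self, dim: int = 768):
        super().__init__()
        self.gat1 = GATConv(dim, 128, heads=4)        # multi-head attention over the graph
        self.gat2 = GATConv(128 * 4, 128, heads=1)
        self.score = torch.nn.Linear(128, 1)

    def forward(self, x, edge_index):
        # x: [num_nodes, dim] stacking BERT sentence embeddings, word
        # embeddings, and LDA topic vectors projected to a shared `dim`;
        # edge_index: [2, num_edges] arcs built from e.g. TF-IDF co-occurrence.
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        return self.score(h).squeeze(-1)   # per-node salience; pick top sentence nodes
```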
Funding: This research was funded by the National Natural Science Foundation of China under Grant No. 61806171; the Sichuan University of Science & Engineering Talent Project under Grant No. 2021RC15; the Open Fund Project of the Key Laboratory for Non-Destructive Testing and Engineering Computer of Sichuan Province Universities on Bridge Inspection and Engineering under Grant No. 2022QYJ06; the Sichuan University of Science & Engineering Graduate Student Innovation Fund under Grant No. Y2023115; and the Scientific Research and Innovation Team Program of Sichuan University of Science and Technology under Grant No. SUSE652A006.
Abstract: While encryption technology safeguards the security of network communications, malicious traffic also uses encryption protocols to obscure its malicious behavior. To address traditional machine learning methods' reliance on expert experience and the insufficient representation capability of existing deep learning methods for encrypted malicious traffic, we propose an encrypted malicious traffic classification method that integrates global semantic features with local spatiotemporal features, called the BERT-based Spatio-Temporal Features Network (BSTFNet). At the packet level, the model captures the global semantic features of packets through the attention mechanism of the Bidirectional Encoder Representations from Transformers (BERT) model. At the byte level, we first employ a Bidirectional Gated Recurrent Unit (BiGRU) model to extract temporal features from bytes, then use a Text Convolutional Neural Network (TextCNN) with multi-sized convolution kernels to extract local multi-receptive-field spatial features. The fusion of features from both granularities serves as the final multidimensional representation of malicious traffic. Our approach achieves an accuracy of 99.39% and an F1-score of 99.40% on the publicly available USTC-TFC2016 dataset, and effectively reduces sample confusion within the Neris and Virut categories. The experimental results demonstrate that our method has outstanding representation and classification capabilities for encrypted malicious traffic.
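A hedged sketch of the two-granularity fusion as the abstract describes it: BERT pooling for packet-level semantics, BiGRU then multi-kernel TextCNN over byte embeddings, and concatenation before the classifier. The checkpoint, byte-embedding scheme, and layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BSTFNetSketch(nn.Module):
    def __init__(self, num_classes: int, byte_dim: int = 64):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")  # placeholder encoder
        self.byte_embed = nn.Embedding(256, byte_dim)               # raw bytes 0..255
        self.bigru = nn.GRU(byte_dim, 64, bidirectional=True, batch_first=True)
        self.convs = nn.ModuleList(
            [nn.Conv1d(128, 64, k) for k in (3, 4, 5)]              # multi-sized kernels
        )
        self.fc = nn.Linear(self.bert.config.hidden_size + 64 * 3, num_classes)

    def forward(self, input_ids, attention_mask, byte_seq):
        # Packet-level global semantics from BERT's pooled output.
        sem = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        # Byte-level temporal features from the BiGRU, then spatial features
        # from max-pooled multi-receptive-field convolutions.
        t, _ = self.bigru(self.byte_embed(byte_seq))   # [B, L, 128]
        t = t.transpose(1, 2)                          # [B, 128, L] for Conv1d
        spatial = [torch.relu(c(t)).max(dim=-1).values for c in self.convs]
        return self.fc(torch.cat([sem, *spatial], dim=-1))
```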
Abstract: Offensive messages on social media have recently been used frequently to harass and criticize people. Many promising algorithms have been developed to identify offensive texts, but most analyze text in a unidirectional manner, whereas a bidirectional method can maximize performance and capture the semantic and contextual information in sentences. In addition, many separate models identify offensive texts in either monolingual or multilingual settings, but few can detect both. In this study, a detection system for both monolingual and multilingual offensive texts was developed by combining a deep convolutional neural network with bidirectional encoder representations from transformers (Deep-BERT) to identify offensive posts on social media that are used to harass others. This paper explores a variety of ways to deal with multilingualism, including collaborative multilingual and translation-based approaches. Deep-BERT was then tested on Bengali and English datasets with different BERT pre-trained word-embedding techniques, and its efficacy outperformed all existing offensive text classification algorithms, reaching an accuracy of 91.83%. The proposed model is a state-of-the-art model that can classify both monolingual and multilingual offensive texts.
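One plausible reading of the Deep-BERT combination (a sketch under assumptions, not the released model) pairs a multilingual BERT encoder, so Bengali and English share one vocabulary, with stacked convolutions over the token states; the translation-based variant would instead translate inputs to English before a monolingual encoder:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class DeepBertSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Collaborative-multilingual variant: one encoder for both languages.
        self.bert = AutoModel.from_pretrained("bert-base-multilingual-cased")
        h = self.bert.config.hidden_size
        self.cnn = nn.Sequential(
            nn.Conv1d(h, 256, 3, padding=1), nn.ReLU(),
            nn.Conv1d(256, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),           # pool over the token dimension
        )
        self.head = nn.Linear(128, 2)          # offensive / not offensive

    def forward(self, input_ids, attention_mask):
        states = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        feats = self.cnn(states.transpose(1, 2)).squeeze(-1)
        return self.head(feats)
```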
Funding: Supported by the National Key Research and Development Program of China (No. 2018YFB1702601).
Abstract: In existing aspect-category sentiment analysis research, aspects are mostly given in advance for sentiment extraction, a pipeline approach that is prone to error accumulation; moreover, graph convolutional networks used for aspect-category sentiment analysis do not fully exploit the dependency-type information between words, which limits feature extraction. This paper proposes an end-to-end aspect-category sentiment analysis (ETESA) model based on type graph convolutional networks. The model uses the bidirectional encoder representations from transformers (BERT) pretraining model to obtain aspect categories and word vectors containing contextual dynamic semantic information, solving the problem of polysemy. When a graph convolutional network (GCN) is used for feature extraction, fusing word vectors with an initialization tensor of dependency types yields importance values for the different dependency types and enhances the text feature representation. By transforming aspect-category and sentiment pair extraction into multiple single-label classification problems, aspect categories and sentiments can be extracted simultaneously in an end-to-end way, avoiding error accumulation. Experiments on three public datasets show that the ETESA model achieves higher Precision, Recall, and F1 values, proving the effectiveness of the model.
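A sketch of the type-aware GCN idea as I read it: each dependency arc carries a learned type embedding whose gate scales the message passed along that arc. The gating form and tensor shapes are assumptions for illustration:

```python
import torch
import torch.nn as nn

class TypedGCNLayer(nn.Module):
    def __init__(self, dim: int, num_dep_types: int):
        super().__init__()
        self.type_embed = nn.Embedding(num_dep_types, dim)
        self.w = nn.Linear(dim, dim)

    def forward(self, h, adj, type_ids):
        # h: [B, N, D] BERT word vectors; adj: [B, N, N] 0/1 dependency arcs;
        # type_ids: [B, N, N] dependency-type index per arc (0 = no arc).
        gate = torch.sigmoid(self.type_embed(type_ids)).mean(-1)  # [B, N, N] per-type importance
        weights = adj * gate                                      # mask non-arcs, scale by type
        weights = weights / weights.sum(-1, keepdim=True).clamp(min=1e-9)
        return torch.relu(self.w(weights @ h))                    # type-weighted message passing
```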
Funding: This work was supported in part by the National Natural Science Foundation of China under Grants U1836106 and 81961138010; in part by the Beijing Natural Science Foundation under Grants M21032 and 19L2029; in part by the Beijing Intelligent Logistics System Collaborative Innovation Center under Grant BILSCIC-2019KF-08; in part by the Scientific and Technological Innovation Foundation of Shunde Graduate School, USTB, under Grants BK20BF010 and BK19BF006; and in part by the Fundamental Research Funds for the University of Science and Technology Beijing under Grant FRF-BD-19-012A.
Abstract: In the era of big data, E-commerce plays an increasingly important role, and steel E-commerce occupies a prominent position. However, it is very difficult for purchasing staff to choose satisfactory steel raw materials from the diverse steel commodities offered on steel E-commerce platforms. To improve the efficiency with which purchasers search for commodities on these platforms, we propose a novel deep learning-based loss function for named entity recognition (NER). Considering the impact of small samples and imbalanced data, our NER scheme incorporates the focal loss, label smoothing, and cross entropy into a lite bidirectional encoder representations from transformers (BERT) model to avoid over-fitting. Moreover, through analysis of the classic annotation techniques used to tag data, an ideal one is chosen for training the model in our proposed scheme. Experiments are conducted on Chinese steel E-commerce datasets. The results show that the training time of the lite BERT (ALBERT)-based method is much shorter than that of BERT-based models, while achieving similar performance in terms of precision, recall, and F1. Meanwhile, our proposed approach performs much better than a combination of Word2Vec, bidirectional long short-term memory (Bi-LSTM), and conditional random field (CRF) models in terms of both training time and F1.
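The combined loss is concrete enough to sketch: label smoothing replaces the one-hot target, and a focal factor down-weights easy tokens in the token-level cross entropy. The hyperparameters below (gamma, smoothing) are illustrative defaults, not the paper's values:

```python
import torch
import torch.nn.functional as F

def focal_smoothed_ce(logits, labels, gamma=2.0, smoothing=0.1, ignore_index=-100):
    # logits: [N, C] token logits from an (AL)BERT token-classification head;
    # labels: [N] gold tag indices, with padding marked by ignore_index.
    mask = labels != ignore_index
    logits, labels = logits[mask], labels[mask]
    logp = F.log_softmax(logits, dim=-1)
    num_classes = logits.size(-1)
    # Label smoothing: soft targets instead of one-hot.
    soft = torch.full_like(logp, smoothing / (num_classes - 1))
    soft.scatter_(1, labels.unsqueeze(1), 1.0 - smoothing)
    ce = -(soft * logp).sum(-1)
    # Focal weighting: down-weight easy, well-classified tokens.
    pt = logp.gather(1, labels.unsqueeze(1)).squeeze(1).exp()
    return ((1 - pt) ** gamma * ce).mean()
```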
Abstract: Objective: This study aimed to construct an intelligent prescription-generating (IPG) model based on deep-learning natural language processing (NLP) technology for multiple prescriptions in Chinese medicine. Materials and Methods: We selected the Treatise on Febrile Diseases and the Synopsis of the Golden Chamber as basic datasets with EDA data augmentation, and the Yellow Emperor's Canon of Internal Medicine, the Classic of the Miraculous Pivot, and the Classic on Medical Problems as supplementary datasets for fine-tuning. We selected a word-embedding model based on the Imperial Collection of Four, a bidirectional encoder representations from transformers (BERT) model based on the Chinese Wikipedia, and a robustly optimized BERT approach (RoBERTa) model based on the Chinese Wikipedia and a general database. In addition, the BERT model was fine-tuned using the supplementary datasets to generate a Traditional Chinese Medicine BERT model. Multiple IPG models were constructed based on the pretraining strategy, and experiments were performed. Precision, recall, and F1-score were used to assess model performance. Based on the trained models, we extracted and visualized the semantic features of some typical texts from the Treatise on Febrile Diseases and investigated the patterns. Results: Among all the trained models, the RoBERTa-large model performed best, with a test-set precision of 92.22%, recall of 86.71%, and F1-score of 89.38%, and 10-fold cross-validation precision of 94.5% ± 2.5%, recall of 90.47% ± 4.1%, and F1-score of 92.38% ± 2.8%. The semantic feature extraction results based on this model showed that it stratified texts intelligently by meaning: within-layer patterns showed symptom–symptom, disease–symptom, and symptom–punctuation associations, while between-layer patterns showed progressive or dynamic symptom and disease transformations. Conclusions: Deep-learning-based NLP technology significantly improves the performance of the IPG model. In addition, NLP-based semantic feature extraction may be vital for further investigating ancient Chinese medicine texts.
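The abstract reports precision/recall/F1, so one plausible (assumed) realization frames prescription generation as multi-label herb prediction from a symptom description; the herb vocabulary and the RoBERTa checkpoint below are placeholders, not the paper's setup:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

HERBS = ["桂枝", "芍药", "甘草", "生姜", "大枣"]  # hypothetical label space

class IPGSketch(nn.Module):
    def __init__(self, name: str = "hfl/chinese-roberta-wwm-ext"):  # assumed checkpoint
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.head = nn.Linear(self.encoder.config.hidden_size, len(HERBS))

    def forward(self, input_ids, attention_mask):
        # CLS pooling over the symptom text, then independent per-herb probabilities.
        pooled = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        return torch.sigmoid(self.head(pooled))
```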
Funding: Supported by the Fundamental Research Funds for the Central Universities (2019XD-A03-3) and the Beijing Key Lab of Network System and Network Culture (NSNC-202 A09).
Abstract: In the context of interdisciplinary research, using computer technology to mine keywords in cultural texts and carry out semantic analysis can deepen the understanding of texts and provide quantitative support and evidence for humanistic studies. Based on the novel A Dream of Red Mansions, the automatic extraction and classification of its sentiment terms was realized, and a detailed analysis of large-scale sentiment terms was carried out. A bidirectional encoder representations from transformers (BERT) pretraining and fine-tuning model was used to construct the sentiment classifier for A Dream of Red Mansions. The novel's sentiment terms are divided into eight sentiment categories, and the relevant people in sentences are extracted according to specific rules. The study also visually displays the sentimental interactions between the Twelve Girls of Jinling and Jia Baoyu as the episodes develop. The overall F1 score of the BERT-based sentiment classifier reached 84.89%, and the best single-sentiment score reached 91.15%. Experimental results show that the classifier can satisfactorily classify the text of A Dream of Red Mansions, and that the text classification and interaction-analysis results can be mutually verified against literature experts' interpretations of the novel.
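A minimal fine-tuning sketch for an eight-category sentiment classifier of the kind described; the category set, checkpoint, and toy training pair are placeholders, not the paper's data:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Assumed eight-way label set; the paper's actual categories may differ.
CATEGORIES = ["joy", "anger", "sorrow", "fear", "love", "hate", "desire", "neutral"]

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=len(CATEGORIES))
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

examples = [("宝玉听了，喜不自胜。", 0)]  # (sentence, label) pairs from annotation

model.train()
for sentence, label in examples:
    batch = tokenizer(sentence, return_tensors="pt", truncation=True)
    loss = model(**batch, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```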