Abstract: To promote behavioral change among adolescents in Zambia, the National HIV/AIDS/STI/TB Council, in collaboration with UNICEF, developed the Zambia U-Report platform. This platform provides young people with improved access to information on various Sexual Reproductive Health topics through Short Messaging Service (SMS) messages. Over the years, the platform has accumulated millions of incoming and outgoing messages, which need to be categorized into key thematic areas for better tracking of sexual reproductive health knowledge gaps among young people. The current manual categorization of these text messages is inefficient and time-consuming, and this study aims to automate the process for improved analysis using text-mining techniques. Firstly, the study investigates the current text message categorization process and identifies the list of categories adopted by counselors over time, which is then used to build and train a categorization model. Secondly, the study presents a proof-of-concept tool that automates the categorization of U-Report messages into key thematic areas using the developed categorization model. Finally, it compares the performance and effectiveness of the proof-of-concept tool against the manual system. The study used a dataset comprising 206,625 text messages. The current manual process would take roughly 2.82 years to categorize this dataset, whereas the trained Support Vector Machine (SVM) model requires only 6.4 minutes while achieving an accuracy of 70.4%, demonstrating that the automated method is significantly faster, more scalable, and more consistent than manual categorization. These advantages make the SVM model a more efficient and effective tool for categorizing large unstructured text datasets. These results and the proof-of-concept tool demonstrate the potential for improving the efficiency and accuracy of message categorization on the Zambia U-Report platform and other similar text-message-based platforms.
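As a rough illustration of the automated pipeline described above, the sketch below trains a TF-IDF plus linear SVM classifier on labelled messages with scikit-learn. The file name, column names, and hyperparameters are hypothetical placeholders, not details taken from the study.

```python
# A minimal sketch of an SMS categorization pipeline of the kind described above,
# using TF-IDF features and a linear SVM (scikit-learn). The CSV file name,
# column names, and split ratio are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("ureport_messages.csv")  # hypothetical: columns "message", "category"
X_train, X_test, y_train, y_test = train_test_split(
    df["message"], df["category"], test_size=0.2, random_state=42)

model = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, ngram_range=(1, 2), min_df=2)),
    ("svm", LinearSVC(C=1.0)),
])
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```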
Funding: Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2014GB106000, 2014GB106002, and 2014GB106003), the National Natural Science Foundation of China (Nos. 11275234, 11375237, and 11505238), and the Scientific Research Grant of Hefei Science Center of CAS (No. 2015SRG-HSC010).
Abstract: A fast data-processing method has been developed to rapidly obtain the evolution of the electron density profile for the multichannel polarimeter-interferometer system (POLARIS) on J-TEXT. Compared with the Abel inversion method, the density profile evolution produced by this method quickly provides important information. The method has the advantage of fast computation, on the order of ten milliseconds per normal shot, and it can process data sampled at up to 1 MHz, which is helpful for studying density sawtooth instability and disruptions between shots. During the flat-top plasma current phase of typical ohmic discharges on J-TEXT, the shape factor u ranges from 4 to 5. When a disruption occurs, the density profile becomes peaked and the shape factor u typically decreases to 1.
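The abstract does not give the profile parameterization behind the shape factor u; the sketch below assumes one plausible form, n_e(r) = n_0 [1 - (r/a)^u], and fits u to chord-integrated interferometer signals by least squares. The chord geometry, minor radius, and noise level are illustrative assumptions, not the actual POLARIS analysis.

```python
# A sketch of recovering a density-profile shape factor u from chord-integrated
# interferometer data, assuming the profile form n_e(r) = n0 * (1 - (r/a)**u).
# The parameterization, chord positions, and minor radius are assumptions made
# for illustration only.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

a = 0.25  # minor radius [m], hypothetical

def line_integrated_density(p, n0, u):
    """Integral of n_e along a vertical chord at impact parameter p."""
    def integrand(z):
        r = np.hypot(p, z)
        return n0 * max(0.0, 1.0 - (r / a) ** u)
    zmax = np.sqrt(max(a**2 - p**2, 0.0))
    val, _ = quad(integrand, -zmax, zmax)
    return val

def model(p_array, n0, u):
    return np.array([line_integrated_density(p, n0, u) for p in p_array])

# Synthetic "measurements" from a broad (u = 4.5) profile plus 2% noise.
chords = np.array([0.00, 0.05, 0.10, 0.15, 0.20])  # impact parameters [m]
data = model(chords, 3e19, 4.5) * (1 + 0.02 * np.random.randn(len(chords)))

(n0_fit, u_fit), _ = curve_fit(model, chords, data, p0=[1e19, 2.0])
print(f"fitted n0 = {n0_fit:.3e} m^-3, shape factor u = {u_fit:.2f}")
```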
Abstract: Network traffic identification is the foundation of network management and security services. As the Internet continues to expand and grow more complex, traditional rule-based identification methods and flow-behavior-feature methods face serious challenges. Inspired by Natural Language Processing (NLP), this paper proposes a fast classification method for encrypted traffic based on multi-feature fusion. The method represents a network flow by fusing packet-level and byte-sequence features; it applies bigram byte encoding to expand the selected features into two-byte sequences, enriching the contextual semantics of the bytes, and uses a pooling scheme adapted to packet feature processing to retain as much packet information as possible, giving the model stronger noise robustness and more accurate classification. The method was evaluated on ISCX-2016 and on a private dataset of 66 popular applications (ETD66) and compared with other models. The results show that the proposed method clearly outperforms other traffic classification models in accuracy and performance on both ISCX-2016 and ETD66, achieving identification accuracies of 98.2% and 98.6%, respectively, demonstrating its feature extraction capability and strong generalization.
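As a small illustration of the bigram byte encoding mentioned above, the sketch below expands a packet's raw bytes into overlapping two-byte tokens so that each byte carries some context. The truncation length and hex token format are assumptions for illustration, not the exact scheme used in the paper.

```python
# A minimal sketch of bigram byte encoding for encrypted-traffic classification:
# a packet's raw bytes are expanded into overlapping two-byte tokens, giving each
# byte some contextual information. Truncation length and token format are
# illustrative assumptions.
from typing import List

def byte_bigrams(packet: bytes, max_len: int = 64) -> List[str]:
    """Convert raw packet bytes into hex bigram tokens, e.g. b'\\x16\\x03\\x01' -> ['1603', '0301']."""
    data = packet[:max_len]
    return [f"{data[i]:02x}{data[i+1]:02x}" for i in range(len(data) - 1)]

# Example: the first bytes of a TLS ClientHello-like packet.
tokens = byte_bigrams(b"\x16\x03\x01\x02\x00\x01\x00\x01\xfc\x03\x03")
print(tokens)  # ['1603', '0301', '0102', '0200', '0001', ...]
```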
Abstract: In recent years, text generation models exemplified by ChatGPT, which aim to adapt to complex scenarios and satisfy a wide range of human application needs, have become a shared focus of academia and industry. However, the very strength of large language models (LLMs) such as ChatGPT, namely their high fidelity to user intent, can conceal factual errors, and fine-grained control of generation quality and domain adaptation still depends on prompt content. Research on text generation methods centered on intrinsic quality constraints therefore remains important. Building on a comparative study of key content generation models and techniques from recent years, this paper defines the basic form of text generation under intrinsic quality constraints, together with six quality characteristics derived from the classical criteria of faithfulness, expressiveness, and elegance (信、达、雅). For these six characteristics, it analyzes and summarizes generator model designs and related algorithms, and it surveys a range of automatic and human evaluation metrics and methods organized around the different intrinsic quality characteristics. Finally, the paper discusses future research directions for intrinsic-quality-constrained text generation.
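As a generic example of the kind of automatic evaluation metric such surveys cover, the sketch below computes a clipped n-gram precision in the spirit of BLEU between a generated sentence and a reference. It is a textbook illustration, not one of the metrics defined in the paper.

```python
# A generic illustration of an automatic text-quality metric: clipped n-gram
# precision (in the spirit of BLEU) between generated text and a reference.
# This is a textbook example, not a metric proposed by the paper.
from collections import Counter

def ngram_precision(candidate: str, reference: str, n: int = 2) -> float:
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not cand_ngrams:
        return 0.0
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return overlap / sum(cand_ngrams.values())

print(ngram_precision("the cat sat on the mat", "the cat is on the mat", n=2))
```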
Abstract: One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) over the last two decades has been the development of text representation techniques that address the so-called curse of dimensionality, a problem that plagues NLP in general because the feature set for learning starts as a function of the size of the language in question, typically upwards of hundreds of thousands of terms. As such, much of the research and development in NLP over the last two decades has focused on finding and optimizing solutions to this problem, in effect on feature selection for NLP. This paper traces the development of these techniques, which leverage a variety of statistical methods resting on linguistic theories advanced in the middle of the last century, most notably the distributional hypothesis, which suggests that words found in similar contexts generally have similar meanings. In this survey we examine some of the most popular of these techniques from both a mathematical and a data structure perspective, from Latent Semantic Analysis to Vector Space Models to their more modern variants, typically referred to as word embeddings. In reviewing algorithms such as Word2Vec, GloVe, ELMo, and BERT, we explore the idea of semantic spaces more generally, beyond their applicability to NLP.
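As a toy illustration of the distributional hypothesis and LSA-style dimensionality reduction discussed above, the sketch below builds a word co-occurrence matrix from a tiny corpus and factors it with a truncated SVD to obtain dense word vectors. The corpus, window size, and embedding dimension are arbitrary choices for illustration.

```python
# A toy illustration of the distributional hypothesis and LSA-style reduction:
# build a word-by-word co-occurrence matrix from a tiny corpus, then keep the
# top singular directions as dense word vectors.
import numpy as np

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat and a dog played",
]
tokens = [sent.split() for sent in corpus]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2 word window.
window = 2
cooc = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                cooc[index[w], index[sent[j]]] += 1

# Truncated SVD: keep the top-k singular directions as word embeddings.
U, S, Vt = np.linalg.svd(cooc)
k = 3
embeddings = U[:, :k] * S[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print("cat ~ dog:", cosine(embeddings[index["cat"]], embeddings[index["dog"]]))
print("cat ~ mat:", cosine(embeddings[index["cat"]], embeddings[index["mat"]]))
```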