Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 60573097, 60703111, and 60773198; the Natural Science Foundation of Guangdong Province under Grant No. 06104916; the Specialized Research Foundation for the Doctoral Program of Higher Education under Grant No. 20050558017; and the Program for New Century Excellent Talents in University of China under Grant No. NCET-06-0727.
Abstract: Clustering text data streams is an important problem in the data mining community, with applications such as newsgroup filtering, text crawling, document organization, and topic detection and tracking. However, most existing methods are similarity-based approaches that use only the TF-IDF scheme to represent the semantics of text data, which often leads to poor clustering quality. Recently, researchers have argued that the semantic smoothing model is more effective than the TF-IDF scheme for improving text clustering quality. However, the existing semantic smoothing model is not suitable for a dynamic text-data context. In this paper, we first extend the semantic smoothing model to the text data stream context. Based on the extended model, we then present two online clustering algorithms, OCTS and OCTSM, for clustering massive text data streams. In both algorithms, we also introduce a new cluster statistics structure, the cluster profile, which captures the semantics of text data streams dynamically and at the same time speeds up the clustering process. Efficient implementations of our algorithms are also given. Finally, we present a series of experimental results illustrating the effectiveness of our technique.
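A minimal sketch of the kind of incremental cluster statistics the abstract describes: each cluster keeps an additive term-weight sum and a document count, so a streaming document can be assigned in a single pass without revisiting past documents. The class and threshold below are illustrative simplifications, not the paper's actual cluster profile (which also carries semantic-smoothing statistics).

```python
import math
from collections import defaultdict

class ClusterProfile:
    """Illustrative incremental statistics for one text-stream cluster:
    an additive term-weight sum plus a document count."""
    def __init__(self):
        self.term_sums = defaultdict(float)  # summed term weights
        self.n_docs = 0

    def add(self, doc_vector):
        # O(|doc|) incremental update -- no re-scan of past documents
        for term, w in doc_vector.items():
            self.term_sums[term] += w
        self.n_docs += 1

    def similarity(self, doc_vector):
        # cosine similarity between the document and the cluster centroid
        dot = sum(w * self.term_sums[t] for t, w in doc_vector.items())
        norm_c = math.sqrt(sum(v * v for v in self.term_sums.values()))
        norm_d = math.sqrt(sum(w * w for w in doc_vector.values()))
        if norm_c == 0 or norm_d == 0:
            return 0.0
        return dot / (norm_c * norm_d)

def assign(doc_vector, profiles, threshold=0.2):
    """Assign a streaming document to the best cluster, or open a new one."""
    best, best_sim = None, threshold
    for p in profiles:
        s = p.similarity(doc_vector)
        if s > best_sim:
            best, best_sim = p, s
    if best is None:
        best = ClusterProfile()
        profiles.append(best)
    best.add(doc_vector)
    return best
```

A document sharing terms with an existing cluster joins it; an unrelated document opens a fresh cluster.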
Funding: Supported by the National Population and Health Scientific Data Sharing Program of China; the Knowledge Centre for Engineering Sciences and Technology (Medical Centre); and the Fundamental Research Funds for the Central Universities (Grant No. 13R0101).
Abstract: Purpose: In the open science era, it is typical to share project-generated scientific data by depositing it in an open and accessible database. Moreover, scientific publications are preserved in digital library archives. It is challenging to identify the data usage mentioned in the literature and associate it with its source. Here, we investigated the data usage of a government-funded cancer genomics project, The Cancer Genome Atlas (TCGA), via a full-text literature analysis.
Design/methodology/approach: We focused on identifying articles using the TCGA dataset and constructing linkages between the articles and the specific TCGA data. First, we collected 5,372 TCGA-related articles from PubMed Central (PMC). Second, we constructed a benchmark set of 25 full-text articles that truly used TCGA data in their studies and summarized the key features of this set. Third, the key features were applied to the remaining PMC full-text articles.
Findings: The number of publications using TCGA data has increased significantly since 2011, although the TCGA project was launched in 2005. Additionally, we found that the critical areas of focus in studies using TCGA data were glioblastoma multiforme, lung cancer, and breast cancer; meanwhile, data from the RNA-sequencing (RNA-seq) platform are the most frequently used.
Research limitations: The current workflow for identifying articles that truly used TCGA data is labor-intensive; an automatic method is expected to improve performance.
Practical implications: This study will help cancer genomics researchers determine the latest advancements in cancer molecular therapy, and it will promote data sharing and data-intensive scientific discovery.
Originality/value: Few studies have investigated data usage by government-funded projects/programs since their launch. In this preliminary study, we extracted articles that use TCGA data from PMC and created links between the full-text articles and the source data.
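A hedged sketch of what applying "key features" to full text might look like: simple lexical cues that separate articles that actually used TCGA data from articles that merely mention the project. The patterns below are invented for illustration; the paper derives its actual features from the 25-article benchmark set.

```python
import re

# Hypothetical usage cues -- NOT the paper's benchmark-derived features.
USAGE_PATTERNS = [
    r"\bdownloaded\s+from\s+(?:the\s+)?TCGA\b",
    r"\bTCGA\s+(?:data|dataset|cohort|samples?)\b.*\b(?:used|analy[sz]ed|obtained)\b",
    r"\b(?:used|analy[sz]ed|obtained)\b.*\bTCGA\b",
]

def mentions_data_usage(sentence: str) -> bool:
    """Flag a sentence as evidence of TCGA data usage if it matches
    any of the lexical cues above."""
    return any(re.search(p, sentence, re.IGNORECASE) for p in USAGE_PATTERNS)
```

A classifier over such cues is what makes the labor-intensive manual workflow automatable.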
Funding: Supported by the Tianjin Applied Fundamental Research Plan (07JCYBJC14500).
Abstract: Understanding the underlying goal behind a user's Web query has proved helpful for improving search quality. This paper focuses on the automatic identification of query types according to these goals. Four novel entropy-based features extracted from anchor data and click-through data are proposed, and a support vector machine (SVM) classifier is used to identify the user's goal based on these features. Experimental results show that the proposed entropy-based features are more effective than those reported in previous work. By combining multiple features, the goals for more than 97% of the queries studied can be correctly identified. In addition, this paper reaches the following conclusions: first, anchor-based features are more effective than click-through-based features; second, the number of sites is more reliable than the number of links; third, click-distribution-based features are more effective than session-based ones.
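The entropy features can be illustrated with a small computation: for a query, take the distribution of clicks (or anchor links) over destination sites and compute its Shannon entropy. Low entropy means traffic concentrates on one site, which is typical of navigational queries; high entropy suggests an informational goal. This is a generic sketch of an entropy-based feature, not the paper's exact definition.

```python
import math
from collections import Counter

def click_entropy(clicked_sites):
    """Shannon entropy (bits) of a query's click distribution over sites.
    clicked_sites is one site label per observed click."""
    counts = Counter(clicked_sites)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

The same function applies unchanged to the anchor-link distribution, which the paper finds even more reliable.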
Abstract: In this paper, we analyze the complexity and entropy of several data compression algorithms: LZW, Huffman, fixed-length code (FLC), and Huffman after using fixed-length code (HFLC). We test these algorithms on files of different sizes and conclude that LZW performs best at every compression scale we tested, especially on large files, followed by Huffman, HFLC, and FLC, respectively. Data compression remains an important research topic with many applications. We therefore suggest continuing research in this field, for example by combining two techniques, or by pairing another source mapping (such as Hamming, embedding a linear array into a hypercube) with a strong technique like Huffman coding.
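The entropy the paper analyzes can be computed directly, and it bounds from below the average code length any symbol-by-symbol code such as Huffman can achieve. LZW is not in the Python standard library, so this sketch uses zlib's DEFLATE (LZ77 dictionary matching plus Huffman coding) as a stand-in for the dictionary-based family; it illustrates how repetitive, low-entropy data compresses far better than its per-symbol entropy alone suggests.

```python
import math
import zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Per-byte Shannon entropy in bits; Huffman's average code length
    lies between this value and this value plus one."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compression_ratio(data: bytes) -> float:
    """Compressed-to-original size ratio under DEFLATE (zlib, level 9)."""
    return len(zlib.compress(data, 9)) / len(data)
```

On `b"abab" * 500` the per-byte entropy is 1 bit (two equiprobable symbols), yet DEFLATE exploits the repetition and shrinks the data far below 1/8 of its size, which is the kind of advantage the paper observes for LZW over pure Huffman on large files.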
Funding: Supported by Agridata, a sub-program of the National Science and Technology Infrastructure Program (Grant No. 2005DKA31800).
Abstract: Feature representation is one of the key issues in data clustering. The existing feature representation of scientific data is not sufficient, which to some extent degrades the quality of scientific data clustering. This paper therefore proposes the concept of a composite text description (CTD) and a CTD-based feature representation method for biomedical scientific data. The method uses different feature weighting algorithms to represent candidate features from two types of data sources, then combines and strengthens the two feature sets. Experiments show that the proposed feature representation method is more effective than traditional methods and can significantly improve the performance of biomedical data clustering.
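A hedged sketch of the combine-and-strengthen step: two weighted feature vectors built from different data sources are merged linearly, and features supported by both sources are boosted. The mixing weight and boost factor below are illustrative values, not the paper's.

```python
def combine_features(vec_a, vec_b, alpha=0.6, boost=1.5):
    """Merge two feature vectors (dicts of term -> weight) from two
    data sources; terms present in both sources are strengthened.
    alpha and boost are hypothetical parameters for illustration."""
    combined = {}
    for term in set(vec_a) | set(vec_b):
        w = alpha * vec_a.get(term, 0.0) + (1 - alpha) * vec_b.get(term, 0.0)
        if term in vec_a and term in vec_b:
            w *= boost  # strengthen features supported by both sources
        combined[term] = w
    return combined
```

Terms corroborated by both sources end up outweighing single-source terms, which is the intuition behind strengthening the combined set.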
Funding: Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2014GB106000, 2014GB106002, and 2014GB106003); the National Natural Science Foundation of China (Nos. 11275234, 11375237, and 11505238); and a Scientific Research Grant of the Hefei Science Center of CAS (No. 2015SRG-HSC010).
Abstract: A fast data processing method has been developed to rapidly obtain the evolution of the electron density profile for the multichannel polarimeter-interferometer system (POLARIS) on J-TEXT. Compared with the Abel inversion method, the density profile evolution analyzed by this method quickly offers important information. The method has the advantage of fast calculation, on the order of ten milliseconds per normal shot, and is capable of processing data sampled at up to 1 MHz, which is helpful for studying the density sawtooth instability and disruptions between shots. During the flat-top plasma current of usual ohmic discharges on J-TEXT, the shape factor u ranges from 4 to 5. When a disruption occurs, the density profile becomes peaked and the shape factor u typically decreases to 1.
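One common parameterization consistent with the reported behavior (u ≈ 4–5 for a flat-top profile, u → 1 as the profile peaks) is n(r) = n0·[1 − (r/a)^u]. This functional form is an assumption for illustration; the abstract does not state the exact fit POLARIS uses.

```python
def density_profile(r_over_a: float, n0: float, u: float) -> float:
    """Electron density at normalized radius r/a for shape factor u,
    under the ASSUMED form n(r) = n0 * (1 - (r/a)**u).
    Large u -> flat-top profile; u near 1 -> peaked profile."""
    return n0 * (1.0 - r_over_a ** u)
```

At mid-radius (r/a = 0.5) a u = 4 profile retains about 94% of the central density, while u = 1 drops to 50%, matching the flat-top versus peaked distinction the abstract draws.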
Abstract: Financial fraud not only distorts accounting information but also harms the healthy development of the economy, so an efficient, intelligent fraud-detection method has clear practical value. Based on the 2020–2022 annual reports that US listed companies filed to the EDGAR database, this paper analyzes the textual Management Discussion and Analysis (MD&A) sections of those reports. Given that fraud and non-fraud samples in the available data are extremely imbalanced, we design a more efficient financial fraud detection model on top of a hierarchical attention network, ultimately raising the model's F1 and F2 scores by 4.1% and 3.7%, respectively; the proposed framework effectively improves classification accuracy on the imbalanced MD&A text dataset. The results offer a new approach for improving fraud-detection systems and for long-text classification tasks in other domains, further validate the use of MD&A text for financial fraud detection, and provide direct empirical support for fraud detection on imbalanced data.
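The reported F1 and F2 gains can be grounded with the general F-beta formula: F2 weights recall twice as heavily as precision, which suits fraud detection, where a missed fraud case usually costs more than a false alarm.

```python
def f_beta(precision: float, recall: float, beta: float) -> float:
    """F-beta score. beta = 1 gives F1 (balanced); beta = 2 gives F2,
    which emphasizes recall over precision."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For a detector with precision 0.25 and recall 1.0, F2 (0.625) rewards the high recall much more than F1 (0.4) does, which is why imbalanced fraud studies report both.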
Abstract: With the ongoing digital transformation of the power grid and the rapid development of intelligent operation and maintenance (O&M) for power equipment, large volumes of power equipment defect texts containing important grid information have accumulated during O&M. Because the text data have sparse labels and the descriptions are vague and inconsistent, the O&M information in these texts is difficult to mine effectively. This paper proposes a data augmentation method for power equipment defect text. First, the pre-trained model ERNIE (Enhanced Representation through Knowledge Integration) is fine-tuned on a defect-text dataset, applying a multi-stage knowledge-masking strategy to integrate electrical-domain knowledge into the dynamic encoding of defect text. Then, under the manifold assumption, a corruption function and a reconstruction function are designed on a denoising-autoencoder architecture: the corruption function follows an information-value-based masking-unit selection strategy, the reconstruction function is built on the fine-tuned ERNIE, and the corrupt-reconstruct process yields augmented samples that remain within the manifold of the original data. Next, the augmented dataset is filtered using influence functions and diversity metrics to discard low-quality and highly redundant samples. Finally, a multi-layer training framework applies the augmented data to various defect-text mining tasks. A case study builds a defect-severity classification task for power equipment from real inspection and maintenance records. The results show that the proposed algorithm substantially improves defect-text mining and can be applied widely and flexibly to many power equipment defect-text mining tasks.
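A minimal sketch of the corruption half of the corrupt-reconstruct augmenter: mask the tokens judged informative (here passed in explicitly; the paper selects masking units by an information-value criterion), after which a fine-tuned masked language model such as ERNIE would fill the slots to produce an augmented sample. The function below is an illustration of the corruption step only, not the paper's full pipeline.

```python
def corrupt(tokens, informative, mask="[MASK]"):
    """Denoising-autoencoder corruption step: replace informative
    tokens with a mask symbol, leaving the rest of the text intact.
    `informative` is a set of tokens chosen by some ranking criterion."""
    return [mask if t in informative else t for t in tokens]
```

Reconstructing the masked slots with a domain-tuned model yields a new sample that varies exactly where the model has domain knowledge to contribute, keeping the result near the original data manifold.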
Abstract: Intelligent classification of radar equipment fault texts helps improve the efficiency of radar equipment support. To address the strong domain specificity, small sample size, and class imbalance of radar fault texts, non-core-word EDA is used for intra-class data augmentation, increasing the volume of text while keeping the key information unchanged. To address the limited diversity of the new samples produced by non-core-word EDA, SSMix (saliency-based span mixup for text classification) is added for inter-class augmentation, improving text diversity through nonlinear cross-fusion of the input texts. Experiments show that, compared with classic baseline classifiers and typical data-augmentation-based methods, the proposed method achieves a substantially higher accuracy.
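A hedged sketch of the non-core-word EDA idea: apply a random perturbation (here a position swap; synonym replacement and deletion variants work the same way) only at positions that do not hold core, key-information words, so the augmented text keeps its label-critical content. The word lists and the swap-only operation are illustrative simplifications.

```python
import random

def eda_swap_non_core(tokens, core_words, rng=None):
    """Intra-class EDA sketch: randomly swap two positions whose tokens
    are NOT core words; core words stay at their original positions."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    idx = [i for i, t in enumerate(tokens) if t not in core_words]
    out = list(tokens)
    if len(idx) >= 2:
        i, j = rng.sample(idx, 2)
        out[i], out[j] = out[j], out[i]
    return out
```

Because only filler positions move, the fault-describing vocabulary that determines the class label is guaranteed to survive augmentation.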
Abstract: To make better use of existing data resources and avoid duplicated investment in research facilities, data sharing is receiving increasing attention. NASA's Earth Observing System (EOS) provides a large amount of free data, including MODIS products. EOS Data Dumper (EDD) therefore programmatically simulates the normal download workflow of the EOS data portal, using Web-page text-capture techniques to automatically download, on a schedule, all free EOS data for a study region; it then republishes the data on the Internet through the free DIAL system, enabling complex spatio-temporal data queries. This paper details EDD's background, motivation, and implementation from a technical perspective.
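A hedged sketch of the request-enumeration step such a scheduled crawler needs: one portal query per acquisition day for the study region's bounding box. The portal URL and parameter names below are invented placeholders, not the real EOS portal interface.

```python
from datetime import date, timedelta

PORTAL = "https://example.org/eos"  # placeholder, not the real EOS portal

def daily_queries(start: date, end: date, product: str, bbox):
    """Enumerate one hypothetical portal query per acquisition day,
    the request list a scheduled downloader like EDD would walk through.
    bbox is (west, south, east, north) in degrees."""
    queries = []
    d = start
    while d <= end:
        w, s, e, n = bbox
        queries.append(
            f"{PORTAL}?product={product}&date={d.isoformat()}"
            f"&bbox={w},{s},{e},{n}"
        )
        d += timedelta(days=1)
    return queries
```

A scheduler would then fetch each URL in turn and hand the results to the republishing step.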