One recent area of interest in computer science is data stream management and processing. By 'data stream', we refer to continuous and rapidly generated packages of data. Specific features of data streams are immense volume, high production rate, limited data-processing time, and concept drift; these features differentiate data streams from standard types of data. One issue for data streams is classification of the input data. A novel ensemble classifier is proposed in this paper. The classifier applies one of two weighting functions to its base classifiers, depending on the data-input conditions. In addition, a new method is used to detect drift, which emphasizes the precision of the algorithm. Another characteristic of the proposed method is that it removes a variable number of base classifiers according to their quality. Applying a weighting mechanism to the base classifiers at the decision-making stage is another advantage of the algorithm. This facilitates adaptability when drifts take place, which leads to classifiers with higher efficiency. Furthermore, the proposed method is tested on a set of standard datasets, and the results confirm higher accuracy compared with existing ensemble classifiers and single classifiers. In addition, in some cases the proposed classifier is faster and needs less storage space.
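The mechanisms described above (quality-based removal of base classifiers and weighted voting at decision time) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the class name, the fixed pruning threshold, and the ensemble-size cap are assumptions, and the paper's two weighting functions are collapsed into a single stored weight.

```python
# Hypothetical sketch: a weighted ensemble that prunes low-quality members
# and combines the survivors by weighted majority vote.
from collections import defaultdict


class WeightedEnsemble:
    def __init__(self, max_size=10, prune_threshold=0.5):
        self.members = []                  # list of [classifier, weight] pairs
        self.max_size = max_size
        self.prune_threshold = prune_threshold

    def add(self, clf, weight=1.0):
        self.members.append([clf, weight])
        # remove however many members fall below the quality threshold,
        # then cap the ensemble at max_size (keep the most recent)
        self.members = [m for m in self.members
                        if m[1] >= self.prune_threshold][-self.max_size:]

    def predict(self, x):
        votes = defaultdict(float)
        for clf, w in self.members:
            votes[clf.predict(x)] += w     # weighting at the decision stage
        return max(votes, key=votes.get)
```

A classifier added with a weight below the threshold is discarded immediately, so the number of removals varies with the quality of the current members rather than being fixed.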
By combining multiple weak learners under concept drift, ensemble learning can achieve better generalization performance than a single learner in the classification of big data streams. In this paper, we present an efficient classifier using the online bagging ensemble method for big data stream learning. In this classifier, we introduce an efficient online resampling mechanism on the training instances and use a robust coding method based on error-correcting output codes. This is done in order to reduce the effects of correlations between the classifiers and increase the diversity of the ensemble. A dynamic updating model based on classification performance is adopted to reduce unnecessary updating operations and improve the efficiency of learning. We implement a parallel version of EoBag, which runs faster than the serial version while achieving almost the same classification performance. Finally, we compare classification performance and resource usage with other state-of-the-art algorithms on artificial and real data sets, respectively. Results show that the proposed algorithm obtains better accuracy and more feasible resource usage for the classification of big data streams.
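The online resampling mechanism that online bagging builds on is usually the Oza-style scheme, in which each base learner trains on a new instance k ~ Poisson(1) times to approximate bootstrap sampling in a streaming setting. A minimal sketch of that core idea follows; EoBag's ECOC coding, dynamic updating, and parallelism are omitted, and the function names here are illustrative.

```python
# Sketch of Oza-style online bagging: Poisson(1) resampling per learner.
import math
import random


def poisson(lam=1.0, rng=random):
    """Sample k ~ Poisson(lam) with Knuth's algorithm (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1


def online_bag_update(learners, x, y):
    """Each base learner sees the new instance k ~ Poisson(1) times,
    approximating bootstrap resampling without storing the stream."""
    for clf in learners:
        for _ in range(poisson(1.0)):
            clf.partial_fit(x, y)
```

Because the learners draw independent Poisson counts, they train on effectively different resamples of the same stream, which is the source of ensemble diversity that the coding method then reinforces.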
Textual data streams have been used extensively in practical applications where consumers of online products express their views about those products. Owing to changes in data distribution, commonly referred to as concept drift, mining such data streams is a challenging problem for researchers. The majority of existing drift-detection techniques are based on classification errors, which carry higher probabilities of false positives or missed detections. To improve classification accuracy, there is a need for more intuitive detection techniques that can identify a greater number of drifts in the data stream. This paper presents an adaptive unsupervised learning technique: an ensemble classifier based on drift detection for opinion mining and sentiment classification. To improve classification performance, the approach uses four different dissimilarity measures to determine the degree of concept drift in the data stream. Whenever a drift is detected, the proposed method builds a new classifier and adds it to the ensemble. Before a new classifier is added, the total number of classifiers in the ensemble is checked; if the limit is exceeded, the classifier with the least weight is removed from the ensemble. To this end, a weighting mechanism calculates the weight of each classifier, which decides its contribution to the final classification results. Several experiments were conducted on real-world datasets, and the results were evaluated on false-positive rate, miss-detection rate, and accuracy. The proposed method is also compared with state-of-the-art methods, including DDM, EDDM, and Page-Hinkley with support vector machine (SVM) and Naive Bayes classifiers, which are frequently used in concept drift detection studies. In all cases, the results show the efficiency of our proposed method.
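Unsupervised drift detection by distribution dissimilarity, as described above, can be illustrated with a single measure. The sketch below uses total variation distance between the term distributions of a reference window and the current window; the paper's four measures and its threshold setting are simplified here, and the 0.3 threshold is an arbitrary assumption.

```python
# Illustrative sketch: flag drift when two text windows' term distributions
# diverge by more than a threshold (total variation distance).
from collections import Counter


def total_variation(window_a, window_b):
    """0.5 * L1 distance between the two empirical term distributions."""
    ca, cb = Counter(window_a), Counter(window_b)
    na, nb = sum(ca.values()), sum(cb.values())
    vocab = set(ca) | set(cb)
    return 0.5 * sum(abs(ca[t] / na - cb[t] / nb) for t in vocab)


def drift_detected(reference, current, threshold=0.3):
    return total_variation(reference, current) > threshold
```

Identical windows score 0 and disjoint vocabularies score 1, so the measure needs no labels, which is what makes this style of detection unsupervised.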
After concept drift occurs in streaming data, an online learning model cannot respond promptly to the changed distribution and struggles to extract the latest information about it, which slows the model's convergence. To address this, an adaptive classification method for concept drift based on online ensemble (AC_OE) is proposed. On one hand, the method uses an online ensemble strategy to build an online ensemble learner that makes local predictions on the training samples in each data chunk and dynamically adjusts the learners' weights. This helps extract the evolution information of the stream near drift points, respond precisely to distribution changes, improve the online model's adaptation to the new distribution after a drift, and raise its real-time generalization performance. On the other hand, an incremental learning strategy builds an incremental learner that is trained and updated incrementally as new samples arrive, extracting the global distribution information of the stream so that the model stays robust during stable periods. Experimental results show that the method responds promptly to concept drift, accelerates the convergence of the online learning model, and effectively improves the learner's overall generalization performance.
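The weight-adjustment half of AC_OE's dual-learner design (local predictions on a chunk driving member weights) can be sketched as below. This is a hedged illustration only: the exponential-smoothing update and the 0.9 decay are assumptions, not the paper's rule.

```python
# Sketch: update each ensemble member's weight from its local accuracy
# on the most recent data chunk (exponential smoothing is an assumption).
def update_weights(weights, members, chunk, decay=0.9):
    for i, clf in enumerate(members):
        acc = sum(clf.predict(x) == y for x, y in chunk) / len(chunk)
        weights[i] = decay * weights[i] + (1 - decay) * acc
    return weights
```

Members that predict the post-drift chunk well gain weight quickly, while a separate incremental learner (not shown) would keep training on every instance to capture the global distribution.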
Ensemble algorithms are among the most common methods for handling data streams with concept drift. To reflect the overall value of each base classifier in the model more comprehensively, an ensemble classification algorithm for concept-drifting data streams based on a diversity measure, AE-Div (Ensemble Algorithm for Data Streams with Concept Drift Based on Diversity Measure), is proposed. It fuses each base classifier's accuracy with its diversity within the ensemble, combines them with a time factor into a composite metric, and assigns different weights to the base classifiers according to the drift-detection results. AE-Div is compared in simulation with several widely used concept-drift classification algorithms on synthetic and real datasets. The results show that AE-Div achieves higher accuracy and better adaptability and stability.
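One plausible shape for the composite metric described above (accuracy fused with diversity, damped by a time factor) is sketched below. The linear mixing coefficient and the exponential decay are illustrative assumptions, not AE-Div's actual formula.

```python
# Sketch: composite score = (mix of accuracy and diversity) * time factor.
import math


def composite_score(accuracy, diversity, age, alpha=0.7, lam=0.1):
    """accuracy, diversity in [0, 1]; age = chunks since the classifier
    was trained. Older classifiers are damped exponentially."""
    time_factor = math.exp(-lam * age)
    return (alpha * accuracy + (1 - alpha) * diversity) * time_factor
```

Folding diversity into the score rewards base classifiers that disagree usefully with the rest of the ensemble instead of ranking them on accuracy alone.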
Concept drift is one of the main characteristics of streaming data, and detecting its occurrence and adjusting the prediction model to adapt to it have attracted wide attention from researchers. Most existing concept-drift algorithms target only a single drift type and require the input data to follow a particular distribution, so they perform poorly when detecting multiple drift types. An online adaptive ensemble algorithm (KSHPR) is proposed, which improves on the Adaptive Random Forests (ARF) and Streaming Random Patches (SRP) algorithms. It detects concept drift with a strategy combining a nonparametric test and a sliding window, reducing the influence of the window mean on performance, and on this basis builds an ensemble learning model of four base learners whose weights are assigned dynamically according to their prediction accuracy, effectively addressing the low accuracy of learning models on streaming data. Experiments show that the proposed algorithm performs well on both real and synthetic datasets and, compared with other algorithms, improves stability, classification accuracy, and the ability to adapt to multiple types of concept drift.
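The nonparametric-test-plus-sliding-window strategy can be illustrated with a two-sample Kolmogorov-Smirnov test, a natural reading of the "KS" in KSHPR, though the paper's exact test and settings are not given here. The critical value below is the standard large-sample approximation, an assumption on my part.

```python
# Sketch: compare an old and a new sliding window of a numeric feature with
# the two-sample KS statistic; flag drift when it exceeds the critical value.
import bisect
import math


def ks_statistic(sample_a, sample_b):
    """Max gap between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(s, v):
        return bisect.bisect_right(s, v) / len(s)   # fraction of s <= v

    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in set(a) | set(b))


def drift_by_ks(window_old, window_new, alpha=0.05):
    n, m = len(window_old), len(window_new)
    # large-sample critical value: c(alpha) * sqrt((n + m) / (n * m))
    crit = math.sqrt(-0.5 * math.log(alpha / 2.0) * (n + m) / (n * m))
    return ks_statistic(window_old, window_new) > crit
```

Because the KS statistic compares whole empirical distributions rather than window means, it is distribution-free, which matches the abstract's goal of detecting drift without assuming the input follows a particular distribution.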
Ensemble data stream mining is an important approach to learning from streams with concept drift. To address the shortcomings of traditional ensemble stream mining, the human mechanisms of recall and forgetting are introduced into stream mining, and a memorizing-based data stream mining model, MDSM (memorizing based data stream mining), is proposed. The model treats the base classifiers as knowledge acquired by the system. Through a "recall and forgetting" mechanism, historically useful base classifiers are kept in a "memory bank" because of their high memory strength, improving prediction stability, while the base classifiers that currently classify well are selected from the memory bank to take part in the ensemble prediction, improving adaptability to concept change. Based on MDSM, an ensemble stream mining algorithm, MAE (memorizing based adaptive ensemble), is proposed. It designs the system's forgetting mechanism around the Ebbinghaus forgetting curve and simulates human "recall" through selective ensembling. Compared with four typical data stream mining algorithms, MAE achieves high classification accuracy and strong overall adaptability to concept drift, with particularly good adaptability to recurring drifts and to the complex drifts found in real applications. It not only adapts quickly to new concept changes but also effectively resists the effect of random concept fluctuations on system performance.
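The Ebbinghaus-style memory mechanism can be sketched as a retention value R = exp(-t / S) per base classifier, where recalling (re-selecting) a classifier resets its age and strengthens S. The doubling-on-recall update below is an assumption for illustration, not MDSM's published rule.

```python
# Sketch: per-classifier memory following an Ebbinghaus-style forgetting
# curve; recall resets the clock and reinforces the memory strength.
import math


class Memory:
    def __init__(self, strength=1.0):
        self.strength = strength   # S: resistance to forgetting
        self.age = 0.0             # t: time since last recall

    def retention(self):
        return math.exp(-self.age / self.strength)

    def tick(self, dt=1.0):
        self.age += dt             # time passing weakens the memory

    def recall(self, boost=2.0):
        self.strength *= boost     # recalling reinforces the memory
        self.age = 0.0
```

A classifier whose concept recurs keeps getting recalled, so its strength grows and it survives in the memory bank, which is why this design suits recurring drifts.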
Funding: supported in part by the National Natural Science Foundation of China (Nos. 61702089, 61876205, and 61501102), the Science and Technology Plan Project of Guangzhou (No. 201804010433), and the Bidding Project of the Laboratory of Language Engineering and Computing (No. LEC2017ZBKT001).
Funding: the authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project (Grant No. RGP.2/49/43).
Funding: the National Natural Science Foundation of China (Grant No. 60573174) and the Natural Science Foundation of Anhui Province of China (Grant No. 050420207).