Abstract: Intrusion detection aims to detect intrusive behavior and serves as a complement to firewalls. It can detect types of malicious network communication and computer misuse that conventional firewalls cannot. Many intrusion detection methods rely on machine learning, and previous literature has shown that methods based on hybrid learning or ensemble approaches outperform single learning techniques. However, few studies focus on how more representative and concise features can be extracted to perform effective intrusion detection on massive, complicated data. In this paper, a new hybrid learning method is proposed on the basis of density, cluster-center, and nearest-neighbor features (DCNN). In this algorithm, each sample is represented by its local density together with the sums of its distances to the cluster centers and to its nearest neighbor, and a k-NN classifier is applied to the new feature vectors. Our experiments show that DCNN, which combines K-means, clustering-based density, and a k-NN classifier, is effective for intrusion detection.
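The abstract does not give the exact feature definitions, so the following is only a minimal sketch of the kind of DCNN-style pipeline it describes: a local density estimate, the summed distance to K-means centers, and the distance to the nearest neighbor are stacked into a new feature vector and passed to a k-NN classifier. The radius, cluster count, and synthetic data standing in for traffic records are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors


def fit_dcnn(X_train, n_clusters=5, radius=1.0):
    """Fit the reference structures: K-means centers and a neighbor index."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_train)
    nn = NearestNeighbors(n_neighbors=2, radius=radius).fit(X_train)
    return km, nn


def dcnn_features(X, km, nn):
    """Density / cluster-center / nearest-neighbor features (a sketch only)."""
    # Sum of distances from each sample to every K-means center.
    d_centers = np.linalg.norm(
        X[:, None, :] - km.cluster_centers_[None, :, :], axis=2).sum(axis=1)
    # Distance to a nearest training sample (second column skips the self-match
    # for training points; this simplification is kept for test points too).
    d_nn = nn.kneighbors(X)[0][:, 1]
    # Local density: count of training samples within the fixed radius.
    counts = np.array([len(i) for i in nn.radius_neighbors(X)[1]])
    return np.column_stack([counts, d_centers, d_nn])


# Illustrative usage on synthetic data standing in for network-traffic records.
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)
X_te = rng.normal(size=(50, 10))
km, nn = fit_dcnn(X_tr)
clf = KNeighborsClassifier(n_neighbors=5).fit(dcnn_features(X_tr, km, nn), y_tr)
pred = clf.predict(dcnn_features(X_te, km, nn))
```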
Funding: Supported by the National Science Foundation (No. IIS-9988642) and the Multidisciplinary Research Program.
Abstract: Many classifiers and methods have been proposed for the letter recognition problem. Among them, clustering is widely used, but a single round of clustering is not adequate. Here, we adopt data preprocessing and a re-kernel clustering method to tackle the letter recognition problem. To validate the effectiveness and efficiency of the proposed method, we introduce re-kernel clustering into Kernel Nearest Neighbor classification (KNN), Radial Basis Function Neural Networks (RBFNN), and Support Vector Machines (SVM). Furthermore, we compare re-kernel clustering with one-time kernel clustering, which is denoted as kernel clustering for short. Experimental results show that re-kernel clustering forms fewer and more feasible kernels and attains higher classification accuracy.
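The abstract does not spell out the re-kernel clustering procedure, so the sketch below shows only one plausible reading: cluster the data to obtain candidate kernel centers, cluster those centers again to keep fewer, better-placed kernels, and feed the resulting kernel activations to a downstream classifier (an SVM here). All function names, cluster counts, and the 16-dimensional synthetic "letter" features are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC


def rbf_design_matrix(X, centers, gamma=1.0):
    """Gaussian kernel activations of every sample against the kernel centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)


def re_kernel_centers(X, n_first=40, n_second=10, seed=0):
    """Two-stage ('re') clustering: cluster the data, then cluster the
    resulting centers again to retain fewer, more feasible kernels."""
    stage1 = KMeans(n_clusters=n_first, n_init=10, random_state=seed).fit(X)
    stage2 = KMeans(n_clusters=n_second, n_init=10, random_state=seed).fit(
        stage1.cluster_centers_)
    return stage2.cluster_centers_


# Illustrative usage with synthetic data standing in for letter features.
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(300, 16)), rng.integers(0, 26, 300)
X_te = rng.normal(size=(60, 16))
centers = re_kernel_centers(X_tr)
clf = SVC(kernel="linear").fit(rbf_design_matrix(X_tr, centers), y_tr)
pred = clf.predict(rbf_design_matrix(X_te, centers))
```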
Funding: Supported by the National Natural Science Foundation of China (61503408, 61573374).
Abstract: Hypersonic interception in near space is a great challenge because of the target's unpredictable trajectory, which requires the interceptor to provide trajectory-cluster coverage of the predicted area and an optimal trajectory-modification capability aimed at the continuously updated predicted impact point (PIP) during the midcourse phase. A novel midcourse optimal trajectory cluster generation and trajectory modification algorithm is proposed based on neighboring optimal control theory. First, the midcourse trajectory optimization problem is introduced, and the necessary conditions for optimal control and the transversality constraints are given. Second, building on the neighboring optimal trajectory existence theory (NOTET), the neighboring optimal control (NOC) algorithm is derived by taking second-order partial derivatives of the necessary conditions and transversality conditions. The revised terminal constraints are integrated backward to the initial time, and the perturbations of the co-states are expressed in terms of the state deviations and terminal-constraint modifications. Third, simulations of two different scenarios are carried out, and the results demonstrate the effectiveness and optimality of the proposed method.
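For reference, the conditions the abstract mentions take the following standard textbook form for a terminally constrained optimal control problem, together with the neighboring-optimal feedback correction obtained from the second-order expansion. The notation (cost J, Hamiltonian H, co-state λ, terminal constraint ψ, multiplier ν) is the usual convention and is not taken from the paper itself.

```latex
% Assumed setting: minimize J = \phi(x(t_f)) + \int_{t_0}^{t_f} L(x,u,t)\,dt
% subject to \dot{x} = f(x,u,t) and terminal constraints \psi(x(t_f)) = 0.
\begin{align*}
  H &= L(x,u,t) + \lambda^{\mathsf T} f(x,u,t), \\
  \dot{\lambda} &= -\frac{\partial H}{\partial x}, \qquad
  \frac{\partial H}{\partial u} = 0
  && \text{(co-state and stationarity conditions)} \\
  \lambda(t_f) &= \left.\frac{\partial \phi}{\partial x}\right|_{t_f}
  + \left.\frac{\partial \psi}{\partial x}\right|_{t_f}^{\mathsf T} \nu
  && \text{(transversality condition)} \\
  \delta u(t) &= -H_{uu}^{-1}\left( H_{ux}\,\delta x + f_u^{\mathsf T}\,\delta\lambda \right)
  && \text{(neighboring-optimal correction)}
\end{align*}
```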
Funding: This research was supported by the National Natural Science Foundation of China (30370758) and the Program for New Century Excellent Talents in Universities (NCET) of the Ministry of Education to Dr. Xu Chenwu (NCET-05-0502).
Abstract: Several typical supervised clustering methods, including Gaussian mixture model-based supervised clustering (GMM), k-nearest-neighbor (KNN), binary support vector machines (SVMs), and multiclass support vector machines (MC-SVMs), were employed to classify computer simulation data and two real microarray expression datasets. False positives, false negatives, true positives, true negatives, clustering accuracy, and Matthews' correlation coefficient (MCC) were compared among these methods. The results are as follows. (1) In classifying thousands of gene expression profiles, the two GMM methods achieve the highest clustering accuracy and the smallest overall FP+FN error counts, under the assumption that the whole set of microarray data is a finite mixture of multivariate Gaussian distributions. Furthermore, when the number of training samples is very small, the GMM-II method outperforms the GMM-I method in clustering accuracy. (2) In general, the MC-SVMs are more robust and more practical: they are less sensitive to the curse of dimensionality, second only to the GMM methods in clustering accuracy on thousands of gene expression profiles, and more robust than the other techniques when only a small number of high-dimensional gene expression samples is available. (3) Among the MC-SVMs, OVO and DAGSVM perform better on large sample sizes, whereas all five MC-SVM methods perform very similarly on moderate sample sizes; when sample sizes are small, OVR, WW, and CS yield better results. It is therefore recommended that at least two candidate methods, chosen on the basis of the real data characteristics and experimental conditions, be applied and compared to obtain a better clustering result.
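The abstract compares several multiclass SVM decompositions; a minimal sketch of how two of the schemes it names, one-vs-one (OVO) and one-vs-rest (OVR), can be compared with Matthews' correlation coefficient is given below. The synthetic high-dimensional data standing in for expression profiles and the particular scikit-learn wrappers are illustrative assumptions; DAGSVM, WW, and CS are not covered by this sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a small, high-dimensional expression dataset.
X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit each decomposition scheme around a linear binary SVM and score with MCC.
for name, wrapper in [("OVO", OneVsOneClassifier), ("OVR", OneVsRestClassifier)]:
    clf = wrapper(SVC(kernel="linear", C=1.0)).fit(X_tr, y_tr)
    mcc = matthews_corrcoef(y_te, clf.predict(X_te))
    print(f"{name}: MCC = {mcc:.3f}")
```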
Abstract: To address the facts that the clustering quality of the mean shift algorithm depends on a subjectively chosen bandwidth parameter and that its accuracy degrades on datasets with large density variations, an adaptive mean shift clustering algorithm based on a cover tree, MSCT (MeanShift based on Cover-Tree), is proposed. A cover tree is built over the dataset, and during the computation of the shift vector the cover tree is used to obtain a new shift vector result, KnnShift, so that the bandwidth parameter is generated adaptively on datasets with different density distributions; the clustering result is obtained after all data points complete the shifting process. Experimental results show that the clustering performance of MSCT is better overall than that of MS, DBSCAN, and other algorithms.
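The abstract gives only the outline of MSCT, so the sketch below illustrates the general idea of an adaptive mean shift step in which each point's bandwidth is derived from its k nearest neighbors. Scikit-learn's tree-backed NearestNeighbors index stands in for the paper's cover tree, and the parameters k, the iteration count, and the merging tolerance are all assumptions, not values from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors


def adaptive_mean_shift(X, k=10, n_iter=50, merge_tol=0.1):
    """Mean shift with a per-point bandwidth taken from the k-NN distance
    (a stand-in for the cover-tree-driven adaptive bandwidth in MSCT)."""
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    modes = X.copy()
    for _ in range(n_iter):
        dist, idx = nn.kneighbors(modes)
        bandwidth = dist[:, -1][:, None] + 1e-12       # adaptive bandwidth per point
        for i in range(len(modes)):
            neigh = X[idx[i]]
            w = np.exp(-np.sum((neigh - modes[i]) ** 2, axis=1)
                       / (2.0 * bandwidth[i] ** 2))    # Gaussian kernel weights
            modes[i] = (w[:, None] * neigh).sum(axis=0) / w.sum()
    # Merge converged modes that lie within the tolerance into one cluster.
    labels, centers = -np.ones(len(X), dtype=int), []
    for i, m in enumerate(modes):
        for c, ctr in enumerate(centers):
            if np.linalg.norm(m - ctr) < merge_tol:
                labels[i] = c
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels, np.array(centers)


# Illustrative usage on two synthetic blobs of very different density.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (100, 2)), rng.normal(3, 1.0, (100, 2))])
labels, centers = adaptive_mean_shift(X)
```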