Journal articles: 10 articles found
1. Feature selection for transient stability assessment based on a distributed particle swarm optimization algorithm under the Hadoop framework (cited by: 7)
Authors: 谢彦祥, 刘天琪, 苏学能. 《电网技术》, EI, CSCD, 北大核心, 2018, Issue 12, pp. 4107-4115 (9 pages)
Feature selection is a key step in machine-learning-based transient stability assessment of power systems. To address the poor selection performance of existing classification criteria and the incomplete construction of existing initial feature sets, a feature selection method based on an improved classification criterion and single-machine features is proposed. First, starting from the classification criterion based on within-class and between-class scatter, the scatter measure is improved; in parallel, the concept of feature entropy, derived from information entropy, is introduced to measure how important each feature of a low-dimensional feature combination is within the initial feature set, and a classification criterion combining the improved within-/between-class scatter and feature entropy is proposed. Second, the initial feature set is constructed from system-level features together with single-machine features that characterize critical generators, and, to avoid the curse of dimensionality in the proposed feature selection, a distributed particle swarm optimization algorithm under the Hadoop framework is proposed for the selection. Finally, the effectiveness of the proposed method is verified on the EPRI-36 bus system and a real power system.
Keywords: transient stability assessment; feature selection; distributed particle swarm optimization; Hadoop platform; classification criterion
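As a rough single-machine illustration of the idea in this entry, the sketch below runs binary particle swarm optimization over feature subsets. It is an assumption-laden stand-in: synthetic data, plain cross-validated SVC accuracy as the fitness function, and arbitrary PSO parameters replace the paper's improved scatter/feature-entropy criterion and its Hadoop-distributed implementation.

```python
# Minimal binary PSO feature selection (illustrative stand-in; not the paper's
# Hadoop-distributed algorithm or its improved classification criterion).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=0)

def fitness(mask):
    # Fitness = mean 3-fold CV accuracy of an SVC on the selected features.
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

n_particles, n_iters = 20, 20
pos = rng.integers(0, 2, size=(n_particles, X.shape[1])).astype(float)
vel = rng.normal(scale=0.1, size=pos.shape)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))               # sigmoid transfer function
    pos = (rng.random(pos.shape) < prob).astype(float)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest).tolist())
```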
2. On feature analysis and feature selection
Author: 郝中波. 《机电产品开发与创新》, 2010, Issue 5, pp. 109-111 (3 pages)
Fault diagnosis of mechanical equipment is a discipline concerned with identifying the operating state of machinery. Taking the fault diagnosis of rotating machinery as an example, this paper discusses how to carry out feature analysis and feature selection in the diagnosis process.
Keywords: feature analysis; feature selection; signal acquisition; feature extraction
3. Research on algorithms for an SOC implementation of automatic polygraph scoring (cited by: 1)
Authors: 李文石, 曹勇. 《中国集成电路》, 2004, Issue 2, pp. 66-67 (2 pages)
To explore an SOC-based automatic polygraph scoring system, this paper surveys research results on automatic scoring algorithms in American polygraph technology, briefly introduces related domestic work including our own, and discusses technical growth points for SOC-oriented innovation.
Keywords: automatic polygraph scoring technology; SOC; algorithm research; innovation; feature selection; algorithm review
4. Accelerated Recursive Feature Elimination Based on Support Vector Machine for Key Variable Identification (cited by: 4)
Authors: 毛勇, 皮道映, 刘育明, 孙优贤. 《Chinese Journal of Chemical Engineering》, SCIE, EI, CAS, CSCD, 2006, Issue 1, pp. 65-72 (8 pages)
Key variable identification for classification is related to many troubleshooting problems in the process industries. Recursive feature elimination based on support vector machines (SVM-RFE) has recently been proposed for feature selection in cancer diagnosis. In this paper, SVM-RFE is applied to key variable selection in fault diagnosis, and an accelerated SVM-RFE procedure based on a heuristic criterion is proposed. Data from the Tennessee Eastman process (TEP) simulator are used to evaluate the effectiveness of key variable selection with accelerated SVM-RFE (A-SVM-RFE). A-SVM-RFE integrates computational speed and algorithmic effectiveness into a consistent framework: it not only identifies the key variables correctly, but also runs quickly. In comparison with contribution charts combined with principal component analysis (PCA) and two other SVM-RFE algorithms, A-SVM-RFE performs better and is more suitable for industrial application.
Keywords: variable selection; support vector machine; recursive feature elimination; fault diagnosis
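SVM-RFE itself is readily sketched with off-the-shelf tools; the snippet below uses scikit-learn's RFE wrapper around a linear SVM. The synthetic 52-variable dataset is only a placeholder loosely shaped like TEP data, and the paper's accelerated heuristic (A-SVM-RFE) is not reproduced.

```python
# Plain SVM-RFE with scikit-learn; the accelerated heuristic of A-SVM-RFE
# is not implemented here.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# Placeholder data with 52 variables, loosely mimicking TEP-sized problems.
X, y = make_classification(n_samples=500, n_features=52, n_informative=10,
                           random_state=0)
selector = RFE(LinearSVC(C=1.0, dual=False, max_iter=5000),
               n_features_to_select=10, step=1).fit(X, y)
print("key variables:", [i for i, kept in enumerate(selector.support_) if kept])
```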
5. COMBINING FEATURE SCALING ESTIMATION WITH SVM CLASSIFIER DESIGN USING GA APPROACH (cited by: 2)
Authors: Yu Ying, Wang Xiaolong, Liu Bingquan. 《Journal of Electronics (China)》, 2005, Issue 5, pp. 550-557 (8 pages)
This letter adopts a GA (Genetic Algorithm) approach to learn the feature scaling that is most favorable to an SVM (Support Vector Machines) classifier; the combined method is named GA-SVM. The relevance coefficients of the various features to the classification task, expressed as real-valued scaling factors, are estimated efficiently by the GA, which exploits a heavy-bias operator to promote sparsity in the feature scaling. This method has several potential benefits: feature selection is performed by eliminating irrelevant features whose scaling is zero, and an SVM classifier with enhanced generalization ability is learned simultaneously. Experimental comparisons between the original SVM and GA-SVM demonstrate both economical feature selection and excellent classification accuracy on a junk e-mail recognition problem and an Internet ad recognition problem. The results show that, compared with the original SVM classifier, GA-SVM significantly reduces the number of support vectors and achieves better classification results. They also demonstrate that a GA can provide a simple, general, and powerful framework for tuning parameters in optimization problems, directly improving the recognition performance and recognition rate of the SVM.
Keywords: Support Vector Machines (SVM); Genetic Algorithm (GA); feature scaling; feature selection; zero-bias operator
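A bare-bones sketch of the underlying idea, evolving real-valued per-feature scaling factors with a GA and scoring each candidate by SVM cross-validation accuracy, is shown below. The truncation selection, uniform crossover, and zero-forcing mutation are generic operators standing in for the paper's heavy-bias operator, and the data and parameter values are arbitrary.

```python
# Toy GA that evolves per-feature scaling factors for an SVM (generic operators;
# the paper's heavy-bias operator and encoding are not reproduced).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           random_state=1)

def fitness(scales):
    # Score a scaling vector by 3-fold CV accuracy of an SVC on rescaled data.
    return cross_val_score(SVC(), X * scales, y, cv=3).mean()

pop_size, n_gen = 20, 20
pop = rng.random((pop_size, X.shape[1]))
for _ in range(n_gen):
    fits = np.array([fitness(ind) for ind in pop])
    parents = pop[fits.argsort()[::-1][: pop_size // 2]]    # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)  # uniform crossover
        child[rng.random(X.shape[1]) < 0.1] = 0.0   # mutation biased toward zero
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("features scaled to zero (discarded):", np.flatnonzero(best == 0.0).tolist())
```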
6. Study on Support Vector Machine Based on 1-Norm
Authors: 潘美芹, 贺国平, 韩丛英, 薛欣, 史有群. 《Journal of Donghua University (English Edition)》, EI, CAS, 2006, Issue 6, pp. 148-152 (5 pages)
An optimization model for the Support Vector Machine (SVM) is formulated based on the definitions of the dual norm and the distance between a point and its projection onto a given plane. From this optimization problem, an improved Support Vector Machine based on the 1-norm (1-SVM) is derived; the resulting model, however, is a discrete program. Using a smoothing technique and optimality conditions, the discrete program is converted into a continuous one. Experimental results show that the algorithm is easy to implement and that the method selects and suppresses features more efficiently. Illustrative examples show that the 1-SVM handles both linear and nonlinear classification well.
Keywords: 1-SVM; best separating plane; feature suppression; feature selection
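For comparison only, the standard L1-regularized linear SVM available in scikit-learn produces the same kind of feature suppression through exactly-zero coefficients; it is an off-the-shelf stand-in, not the paper's smoothed 1-SVM formulation.

```python
# Off-the-shelf L1-regularized linear SVM; zero coefficients mark suppressed
# features (not the paper's smoothing-based 1-SVM algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=400, n_features=30, n_informative=5,
                           random_state=0)
clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=10000).fit(X, y)
w = clf.coef_.ravel()
print("suppressed features:", np.flatnonzero(w == 0.0).tolist())
print("selected features:  ", np.flatnonzero(w != 0.0).tolist())
```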
7. Fault depth estimation using support vector classifier and features selection
Authors: Mohammad Ehsan Hekmatian, Vahid E. Ardestani, Mohammad Ali Riahi, Ayyub Memar Koucheh Bagh, Jalal Amini. 《Applied Geophysics》, SCIE, CSCD, 2013, Issue 1, pp. 88-96, 119 (10 pages)
Depth estimation of subsurface faults is one of the problems in gravity interpretation, and we investigate the support vector classifier (SVC) method for this task. Forward and nonlinear inverse techniques can recover fault depth with an associated error, but they require an initial depth guess, which usually comes from non-gravity data. In this paper we introduce SVC as a tool for estimating subsurface fault depth from gravity data: each fault depth is treated as a class, and SVC serves as the classification algorithm. To make better use of the SVC, suitable depth-estimation features are chosen with a feature selection (FS) algorithm. A training set of synthetic gravity profiles generated by subsurface faults at different depths is used to train the SVC to estimate the depth of real subsurface faults; the trained SVC is then tested on a separate set of synthetic profiles at different depths, and finally on real data.
Keywords: depth estimation; subsurface fault; support vector classifier; feature; features selection
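A toy version of the depth-as-class idea is sketched below. It assumes a textbook semi-infinite thin-sheet approximation for the fault gravity anomaly, with arbitrary station geometry, noise level, and SVC settings, and it omits the paper's feature selection step entirely.

```python
# Toy depth-as-class experiment: train an SVC on synthetic gravity profiles at a
# few candidate fault depths (thin-sheet approximation; all values arbitrary).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
x = np.linspace(-500.0, 500.0, 64)              # profile stations (m)

def profile(depth, noise=0.02):
    # Semi-infinite thin-sheet shape: g(x) proportional to pi/2 + arctan(x/depth).
    return np.pi / 2 + np.arctan(x / depth) + rng.normal(scale=noise, size=x.size)

depths = [50.0, 100.0, 200.0, 400.0]            # candidate depth classes (m)
X_train = np.vstack([profile(d) for d in depths for _ in range(40)])
y_train = np.repeat(depths, 40)
clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)

X_test = np.vstack([profile(d) for d in depths for _ in range(10)])
y_test = np.repeat(depths, 10)
print("test accuracy:", (clf.predict(X_test) == y_test).mean())
```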
8. A feature selection approach based on a similarity measure for software defect prediction (cited by: 3)
Authors: Qiao YU, Shu-juan JIANG, Rong-cun WANG, Hong-yang WANG. 《Frontiers of Information Technology & Electronic Engineering》, SCIE, EI, CSCD, 2017, Issue 11, pp. 1744-1753 (10 pages)
Software defect prediction aims to find potential defects based on historical data and software features. Software features can reflect the characteristics of software modules; however, some of these features may be highly relevant to the class (defective or non-defective), while others may be redundant or irrelevant. To fully measure the correlation between different features and the class, we present a feature selection approach based on a similarity measure (SM) for software defect prediction. First, the feature weights are updated according to the similarity of samples in different classes. Second, a feature ranking list is generated by sorting the feature weights in descending order, and feature subsets are selected from the ranking list in sequence. Finally, all feature subsets are evaluated on a k-nearest neighbor (KNN) model and measured by the area under the curve (AUC) metric for classification performance. The experiments are conducted on 11 National Aeronautics and Space Administration (NASA) datasets, and the results show that our approach performs better than, or comparably to, the compared feature selection approaches in terms of classification performance.
Keywords: software defect prediction; feature selection; similarity measure; feature weights; feature ranking list
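The rank-then-evaluate loop reads roughly as below, with the absolute correlation between each feature and the class used as a simple stand-in for the paper's similarity-measure weights; the synthetic data and KNN/AUC settings are placeholders rather than the NASA datasets.

```python
# Rank features by a simple relevance weight, then evaluate nested subsets from
# the ranking with a KNN classifier and the AUC metric (illustrative stand-in).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)
weights = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
ranking = np.argsort(weights)[::-1]             # feature ranking list, descending

for k in range(1, len(ranking) + 1):            # nested subsets from the ranking
    auc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, ranking[:k]], y, cv=5, scoring="roc_auc").mean()
    print(f"top {k:2d} features  AUC = {auc:.3f}")
```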
9. A selective overview of feature screening for ultrahigh-dimensional data (cited by: 11)
Authors: LIU JingYuan, ZHONG Wei, LI RunZe. 《Science China Mathematics》, SCIE, CSCD, 2015, Issue 10, pp. 2033-2054 (22 pages)
High-dimensional data have frequently been collected in many scientific areas, including genome-wide association studies, biomedical imaging, tomography, tumor classification, and finance. Analysis of high-dimensional data poses many challenges for statisticians, and feature selection and variable selection are fundamental to it. The sparsity principle, which assumes that only a small number of predictors contribute to the response, is frequently adopted and deemed useful in the analysis of high-dimensional data. Following this general principle, a large number of variable selection approaches via penalized least squares or likelihood have been developed in the recent literature to estimate a sparse model and select significant variables simultaneously. While penalized variable selection methods have been successfully applied in many high-dimensional analyses, modern applications in areas such as genomics and proteomics push the dimensionality of data to an even larger scale, where the dimension may grow exponentially with the sample size. Such data have been called ultrahigh-dimensional in the literature. This work presents a selective overview of feature screening procedures for ultrahigh-dimensional data, focusing on insights into how to construct marginal utilities for feature screening on specific models and the motivation for model-free feature screening procedures.
Keywords: correlation learning; distance correlation; sure independence screening; sure joint screening; sure screening property; ultrahigh-dimensional data
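The prototypical procedure covered by this overview, sure independence screening, amounts to ranking predictors by their marginal correlation with the response and keeping the top few; a minimal sketch on simulated data (dimensions and signal chosen arbitrarily) follows.

```python
# Minimal sure-independence-screening style sketch: keep the d predictors with
# the largest absolute marginal correlation with the response.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5000                                 # ultrahigh-dimensional: p >> n
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, 1.5, 2.0, -2.0, 1.8]            # sparse true signal
y = X @ beta + rng.standard_normal(n)

Xs = (X - X.mean(0)) / X.std(0)                  # standardize columns
ys = (y - y.mean()) / y.std()
omega = np.abs(Xs.T @ ys / n)                    # marginal correlations
d = int(n / np.log(n))                           # common choice of screening size
screened = np.argsort(omega)[::-1][:d]
print("true signals retained:", sorted(set(range(5)) & set(screened.tolist())))
```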
10. Schiff base aluminum catalysts containing morpholinomethyl groups in the ring opening polymerization of rac-lactide
Authors: Zhijie Guo, Ranlong Duan, Mingxiao Deng, Xuan Pang, Chenyang Hu, Xuesi Chen. 《Science China Chemistry》, SCIE, EI, CAS, CSCD, 2015, Issue 11, pp. 1741-1747 (7 pages)
A series of Schiff base aluminum(III) complexes bearing morpholinomethyl substituents were synthesized, and their stereoselective and kinetic features in the ring opening polymerization of lactide were investigated comprehensively. The ring opening polymerization proved to be first-order in the catalyst and the monomer. Linear relationships between the number-average molecular weight of the polylactide and the monomer conversion were consistent with a well-controlled polymerization. The propagation rate was strongly affected by the morpholinomethyl substituents on the salicylaldehyde moiety.
Keywords: polylactide; morpholinomethyl; ring opening polymerization
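Written out (a standard textbook form, not taken verbatim from the paper), first-order kinetics in both monomer and catalyst correspond to the rate law

$$-\frac{\mathrm{d}[\mathrm{LA}]}{\mathrm{d}t}=k_p\,[\mathrm{Al}]\,[\mathrm{LA}],\qquad \ln\frac{[\mathrm{LA}]_0}{[\mathrm{LA}]_t}=k_{\mathrm{obs}}\,t,\quad k_{\mathrm{obs}}=k_p\,[\mathrm{Al}],$$

so a linear semilogarithmic plot of monomer consumption and an observed rate constant proportional to catalyst loading are the expected signatures.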