Journal Articles
113 articles found
1. A Power Data Anomaly Detection Model Based on Deep Learning with Adaptive Feature Fusion
Authors: Xiu Liu, Liang Gu, Xin Gong, Long An, Xurui Gao, Juying Wu. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 4045-4061 (17 pages)
With the popularisation of intelligent power, power devices vary in shape, number, and specification. Power data therefore exhibits distributional variability, and the model learning process cannot sufficiently extract data features, which seriously degrades the accuracy and performance of anomaly detection. This paper therefore proposes a deep learning-based anomaly detection model for power data, which integrates a data alignment enhancement technique based on random sampling and an adaptive feature fusion method leveraging dimension reduction. To handle the distributional variability of power data, a sliding window-based data adjustment method is developed for this model, which solves the problems of high-dimensional feature noise and low-dimensional missing data. To address insufficient feature fusion, an adaptive feature fusion method based on feature dimension reduction and dictionary learning is proposed to improve the model's anomaly detection accuracy. To verify the effectiveness of the proposed method, we conducted comparisons through ablation experiments. The experimental results show that, compared with traditional anomaly detection methods, the proposed method not only has an advantage in model accuracy but also reduces the amount of parameter computation during feature matching and improves detection speed.
Keywords: data alignment, dimension reduction, feature fusion, data anomaly detection, deep learning
2. Effective and Efficient Feature Selection for Large-scale Data Using Bayes' Theorem (Cited: 7)
Authors: Subramanian Appavu Alias Balamurugan, Ramasamy Rajaram. International Journal of Automation and Computing (EI), 2009, No. 1, pp. 62-71 (10 pages)
This paper proposes a feature selection method based on Bayes' theorem. Its purpose is to reduce computational complexity and increase the classification accuracy of the selected feature subsets. The dependence between two binary attributes is determined from the probabilities of their joint values that contribute to positive and negative classification decisions. If opposing sets of attribute values do not lead to opposing classification decisions (zero probability), the two attributes are considered independent of each other; otherwise they are dependent, and one of them can be removed, reducing the number of attributes. The process is repeated over all combinations of attributes. The paper also evaluates the approach against existing feature selection algorithms on 8 datasets from the University of California, Irvine (UCI) machine learning repository. The proposed method outperforms most existing algorithms in the number of selected features, classification accuracy, and running time.
Keywords: data mining, classification, feature selection, dimensionality reduction, Bayes' theorem
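A minimal sketch of the dependence test this abstract describes, assuming binary attributes and labels; the function name and the majority-vote decision rule are illustrative, not taken from the paper:

```python
from collections import Counter

def are_dependent(a_vals, b_vals, labels):
    """Judge dependence of two binary attributes in the spirit of the
    abstract: if opposing joint values (a, b) vs (1-a, 1-b) lead to
    opposing class decisions, the attributes are treated as dependent
    and one of them becomes a removal candidate."""
    # estimate the class decision for each observed joint value
    joint = {}
    for a, b, y in zip(a_vals, b_vals, labels):
        joint.setdefault((a, b), Counter())[y] += 1
    decision = {k: c.most_common(1)[0][0] for k, c in joint.items()}
    # compare each joint value with its opposing value
    for (a, b), d in decision.items():
        opp = (1 - a, 1 - b)
        if opp in decision and decision[opp] != d:
            return True   # opposing values give opposing decisions
    return False
```

A duplicated attribute (identical to the label) comes out dependent, while an attribute whose opposing values never flip the decision comes out independent.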
3. Rough Sets Hybridization with Mayfly Optimization for Dimensionality Reduction
Authors: Ahmad Taher Azar, Mustafa Samy Elgendy, Mustafa Abdul Salam, Khaled M. Fouad. Computers, Materials & Continua (SCIE, EI), 2022, No. 10, pp. 1087-1108 (22 pages)
Big data is a vast amount of structured and unstructured data that must be dealt with on a regular basis. Dimensionality reduction converts a huge data set into one with few dimensions so that the same information can be expressed compactly. These techniques are frequently used to improve classification or regression tasks in machine learning. To achieve dimensionality reduction for huge data sets, this paper offers hybrid particle swarm optimization-rough set (PSO-RS) and Mayfly algorithm-rough set (MA-RS) methods. In particular, a novel hybrid strategy based on the Mayfly algorithm (MA) and rough sets (RS) is proposed. The performance of the hybrid MA-RS algorithm is evaluated on six data sets from the literature. The simulation results and comparison with common reduction methods demonstrate the proposed MA-RS algorithm's capacity to handle a wide range of data sets. Finally, the rough set approach and the hybrid optimization techniques PSO-RS and MA-RS were applied to the massive-data problem. According to the experimental results and statistical tests, the hybrid MA-RS method beats other classic dimensionality reduction techniques.
Keywords: dimensionality reduction, metaheuristics, optimization algorithm, Mayfly, particle swarm optimizer, feature selection
4. A New Hybrid Feature Selection Method Using T-test and Fitness Function
Authors: Husam Ali Abdulmohsin, Hala Bahjat Abdul Wahab, Abdul Mohssen Jaber Abdul Hossen. Computers, Materials & Continua (SCIE, EI), 2021, No. 9, pp. 3997-4016 (20 pages)
Feature selection (FS), also called feature dimensional reduction or feature optimization, is an essential process in pattern recognition and machine learning because it enhances classification speed and accuracy and reduces system complexity. FS reduces the number of features extracted in the feature extraction phase by removing highly correlated features, retaining features with high information gain, and removing features with no weight in classification. In this work, a filter-type statistical FS method is designed and implemented that uses a t-test to decrease the convergence between feature subsets by calculating a quality of performance value (QoPV). The approach uses a well-designed fitness function to calculate a strength of recognition value (SoRV). The two values rank all features according to a final weight (FW) calculated for each feature subset by a function that prioritizes subsets with high SoRV values. An FW is assigned to each feature subset, and subsets with FWs below a predefined threshold are removed from the feature subset domain. Experiments are run on three datasets: the Ryerson Audio-Visual Database of Emotional Speech and Song, Berlin, and Surrey Audio-Visual Expressed Emotion. The F-test and F-score FS methods are compared with the proposed method, and tests are also conducted on a system before and after deploying the FS methods. Results demonstrate the comparative efficiency of the proposed method. System complexity is measured as the time overhead before and after FS; results show that the proposed method reduces system complexity.
Keywords: feature selection, dimensional reduction, feature optimization, pattern recognition, classification, t-test
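The paper's QoPV/SoRV weighting is not specified here, so as a stand-in the sketch below shows the generic t-test filter idea the abstract builds on: rank features by the absolute Welch t statistic between two classes and keep those above a threshold. All names and the threshold are illustrative:

```python
import math

def t_score(xs, ys):
    """Absolute two-sample (Welch) t statistic between the feature
    values of two classes."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    vx = sum((x - mx) ** 2 for x in xs) / (len(xs) - 1)
    vy = sum((y - my) ** 2 for y in ys) / (len(ys) - 1)
    return abs(mx - my) / math.sqrt(vx / len(xs) + vy / len(ys))

def filter_features(data, labels, threshold):
    """Keep indices of features whose |t| between class 0 and class 1
    exceeds the threshold (a stand-in for the paper's final weight)."""
    kept = []
    for j in range(len(data[0])):
        xs = [row[j] for row, y in zip(data, labels) if y == 0]
        ys = [row[j] for row, y in zip(data, labels) if y == 1]
        if t_score(xs, ys) > threshold:
            kept.append(j)
    return kept
```

On toy data where feature 0 separates the classes and feature 1 is noise, only feature 0 survives the filter.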
5. Multi-state Information Dimension Reduction Based on Particle Swarm Optimization-Kernel Independent Component Analysis
Authors: 邓士杰, 苏续军, 唐力伟, 张英波. Journal of Donghua University (English Edition) (EI, CAS), 2017, No. 6, pp. 791-795 (5 pages)
The precision of the kernel independent component analysis (KICA) algorithm depends on the type and parameter values of the kernel function. It is therefore of great significance to study how to choose KICA's kernel parameters to improve its feature dimension reduction results. In this paper, a fitness function is first established using the idea of the Fisher discriminant function. The global optimum of the fitness function is then searched by the particle swarm optimization (PSO) algorithm, yielding a multi-state information dimension reduction algorithm based on PSO-KICA. Finally, the ability of this algorithm to enhance the precision of feature dimension reduction is demonstrated.
Keywords: kernel independent component analysis (KICA), particle swarm optimization (PSO), feature dimension reduction, fitness function
6. Critical Evaluation of Linear Dimensionality Reduction Techniques for Cardiac Arrhythmia Classification
Authors: Rekha Rajagopal, Vidhyapriya Ranganathan. Circuits and Systems, 2016, No. 9, pp. 2603-2612 (10 pages)
Embedding the original high-dimensional data in a low-dimensional space helps to overcome the curse of dimensionality and removes noise. The aim of this work is to evaluate the performance of three linear dimensionality reduction (DR) techniques, namely principal component analysis (PCA), multidimensional scaling (MDS), and linear discriminant analysis (LDA), on the classification of cardiac arrhythmias with a probabilistic neural network (PNN) classifier. The design of the classification model comprises the following stages: preprocessing the cardiac signal by eliminating detail coefficients that contain noise, feature extraction through the Daubechies wavelet transform, dimensionality reduction through the specified linear DR techniques, and arrhythmia classification using the PNN. Linear dimensionality reduction techniques have simple geometric representations and simple computational properties. The entire MIT-BIH arrhythmia database is used for experimentation. The experimental results demonstrate that the combination of the PNN classifier (spread parameter σ = 0.08) and the PCA DR technique exhibits the highest sensitivity and F score, 78.84% and 78.82% respectively, with a minimum of 8 dimensions.
Keywords: data preprocessing, decision support systems, feature extraction, dimensionality reduction
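Of the three DR techniques compared, PCA is the simplest to sketch. The following is a minimal SVD-based PCA projection on toy data; it does not reproduce the paper's MIT-BIH preprocessing or the PNN classifier:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                    # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # k-dimensional scores

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))                  # toy feature matrix
Z = pca_reduce(X, 2)
```

The reduced scores `Z` are zero-mean and ordered by decreasing variance, which is what makes a fixed small number of dimensions (8 in the paper's best configuration) meaningful.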
7. EliteVec: Feature Fusion for Depression Diagnosis Using Optimized Long Short-Term Memory Network
Authors: S. Kavi Priya, K. Pon Karthika. Intelligent Automation & Soft Computing (SCIE), 2023, No. 5, pp. 1745-1766 (22 pages)
Globally, depression is perceived as the most recurrent and risky disorder among young people and adults under the age of 60. Depression strongly influences word usage, which can be observed in written texts or stories posted on social media. With the help of Natural Language Processing (NLP) and Machine Learning (ML) techniques, the depressive signs people express can be identified at the earliest stage from their social media posts. The proposed work introduces an efficacious depression detection model unifying an exemplary feature extraction scheme and a hybrid Long Short-Term Memory (LSTM) network. The feature extraction process combines a novel feature selection method called Elite Term Score (ETS) with Word2Vec to extract syntactic and semantic information respectively. First, the ETS method leverages document-level, class-level, and corpus-level probabilities to compute the weight/score of each term. Then, the ideal and pertinent set of features with high ETS scores is selected, and the Word2Vec model is trained to generate dense feature vector representations for the selected terms. The resulting word vector, called EliteVec, is fed to the hybrid LSTM model based on a Honey Badger optimizer with a population reduction technique (PHB), which predicts whether the input text is depressive or not. The PHB algorithm explores and exploits optimal hyperparameters to strengthen the performance of the LSTM network. Comprehensive experiments are carried out on two different Twitter depression corpora using accuracy and Root Mean Square Error (RMSE) metrics. The results demonstrate that the proposed EliteVec+LSTM+PHB model outperforms state-of-the-art models with 98.1% accuracy and 0.0559 RMSE.
Keywords: depression detection, dimensionality reduction, feature extraction, feature selection, hybrid LSTM network, population reduction, honey badger optimization, social media, Twitter
8. An Integrated Weather-Similarity Analysis Method and Its Application in Meteorological Forecasting Services
Authors: 李宇中, 董良淼, 梁存桂, 刘国忠, 覃月凤, 黄伊曼. 《气象科技》, 2024, No. 4, pp. 571-582 (12 pages)
To remedy the inconsistent results across slices and the poor forecast stability of the traditional "slice-based" approach to synoptic-situation similarity analysis, this work adopts a big-data perspective and treats the weather system as an integrated whole in which the upper, middle, and lower atmospheric layers cooperate and static, thermal, and dynamic conditions interact. Starting from reanalysis grid data for multiple meteorological elements, the machine learning method PCA is used to reduce and condense the raw data; after normalization, a derived-feature factor matrix suitable for integrated weather-similarity analysis is constructed. The KNN algorithm then computes the similarity distance between samples along each feature dimension, each dimension is weighted by its variance contribution rate, and the historical weather situations are ranked by overall similarity distance to the target sample, producing an integrated most-similar sequence and thereby upgrading the traditional analog forecasting method. Comparative analysis and trial applications show that the method delivers consistent conclusions under multi-element, multi-level "stereoscopic" integrated similarity, helps forecasters better understand the structure and evolution of weather systems and more accurately judge the weather phenomena that may occur, and has good application prospects in fine-grained meteorological forecasting services. Since 2023, the method has achieved notable results in several forecasting services for regional extreme precipitation in Guangxi.
Keywords: data-driven, similarity distance, PCA dimensionality reduction, derived features, KNN
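The ranking step the abstract describes (per-dimension distances weighted by variance contribution rates, sorted to give the most similar historical analogs) can be sketched as follows; the data, weights, and function name are illustrative, and the PCA/normalization stages are assumed already done:

```python
import numpy as np

def most_similar(history, target, weights, k=3):
    """Rank historical samples by a weighted similarity distance:
    squared per-dimension differences weighted by each (reduced)
    feature dimension's variance contribution rate."""
    d = (((history - target) ** 2) * weights).sum(axis=1) ** 0.5
    return np.argsort(d)[:k]       # indices of the k closest analogs

# toy historical library in a 3-D PCA-reduced feature space
hist = np.array([[0.0, 0.0, 0.0],
                 [1.0, 1.0, 1.0],
                 [0.1, 0.0, 0.2],
                 [5.0, 5.0, 5.0]])
w = np.array([0.6, 0.3, 0.1])      # variance contribution rates
idx = most_similar(hist, np.zeros(3), w, k=2)
```

For the zero target, the two nearest library entries are rows 0 and 2, as expected from the weighted distances.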
9. A Locality Preserving Projection Method Based on Optimal Nearest Neighbors
Authors: 赵俊涛, 李陶深, 卢志翔. 《计算机工程》 (CAS, CSCD), 2024, No. 9, pp. 161-168 (8 pages)
Locality preserving projection (LPP) is a classic dimensionality reduction method in machine learning. However, when constructing the local structure of the data, LPP and several of its improved variants simply use the k-nearest-neighbor (k-NN) algorithm to find each sample's neighbors, which makes them sensitive to the parameter k, noise, and outliers. To solve these problems, an LPP method based on optimal nearest neighbors is proposed. After finding a sample's neighbors, the method further selects as optimal neighbors those neighbor samples that share a certain number of common neighbors with the sample. Restricting neighbors by common neighbors picks the neighbors most similar to the sample, strengthens the correlation among neighbor samples, and avoids the traditional LPP method's strong dependence on k. Once enough optimal neighbors have been selected, the local structure of the data is constructed so as to accurately reflect the essential structural characteristics of the data, allowing the reduced data to retain as much useful sample information as possible and improving the performance of downstream machine learning models. Comparative experiments on public image datasets show that the method reduces dimensionality effectively and improves image recognition accuracy.
Keywords: locality preserving projection, optimal nearest neighbors, neighbor samples, dimensionality reduction, feature extraction
10. Feature Selection for Aero-engine Assembly Oriented to Multi-objective Optimization
Authors: 陆文灏, 柯勇伟, 郭永强, 司书宾. 《工业工程》, 2024, No. 4, pp. 1-8 (8 pages)
Because aero-engine assembly and test-run processes are complex, the assembly features in collected aero-engine assembly data are extremely numerous, which seriously interferes with accurate prediction of assembly quality; selecting the key quality features of aero-engine assembly for quality prediction is therefore highly challenging. To tackle this feature selection problem, this paper proposes a two-stage feature selection method for aero-engine assembly data oriented to multi-objective optimization. The optimization objectives of feature selection are first made explicit. In the first stage, a relevant-feature selection process based on the max-relevance min-redundancy (mRMR) algorithm computes the mutual information between assembly features and test-run indicators, screens out the features most relevant to the test-run indicators, and removes interfering redundant features. In the second stage, by introducing a population initialization strategy and adaptive genetic operators, a key-quality-feature selection process based on an improved non-dominated sorting genetic algorithm II (NSGA-II) is proposed, yielding the Pareto front of key-quality-feature subsets for aero-engine assembly. Experiments show that the proposed two-stage method is more applicable and effective than traditional methods, accomplishes feature selection for aero-engine assembly, and improves the accuracy of assembly quality prediction.
Keywords: aero-engine assembly, multi-objective optimization, high-dimensional data, key quality features
11. Data-driven Aerothermal Modeling and Prediction Methods: Summary and Outlook
Authors: 王泽, 宋述芳, 王旭, 张伟伟. 《气体物理》, 2024, No. 4, pp. 39-55 (17 pages)
Accurate prediction of aerodynamic heating is fundamental to the design of hypersonic vehicles. As classic aerothermal prediction methods increasingly fail to meet engineering demands for efficient and accurate prediction, data-driven aerothermal modeling methods, which have flourished in recent years, are becoming the new paradigm. This paper first clarifies the relationship between data-driven aerothermal modeling methods and classic prediction methods. It then groups data-driven methods into three modeling approaches: feature-space dimensionality-reduction modeling, pointwise modeling, and physics-informed modeling, and reviews and analyzes each in detail. Data-driven methods are not only more accurate than engineering algorithms; combined with sampling methods, they can also markedly reduce the workload of experimental measurement and numerical computation, and the resulting models are more efficient and compact. Finally, the paper discusses future trends, noting that the deep integration of data-driven techniques with classic aerothermal prediction methods, physics-informed modeling, and large aerothermal prediction models will be key directions of future research.
Keywords: aerothermal prediction, data-driven, feature-space dimensionality reduction, pointwise modeling, physics-informed embedding
12. Feature Selection for Mixed Diabetes Data Based on NMI-SC
Authors: 朱潘蕾, 容芷君, 但斌斌, 代超, 吕生. 《电子设计工程》, 2024, No. 11, pp. 6-10 (5 pages)
To address the impact of high-dimensional mixed data on diabetes prediction accuracy, a diabetes feature selection method based on NMI-SC is proposed. Neighborhood mutual information (NMI) computes the joint probability density within the neighborhood radius of mixed-attribute features to build a similarity matrix, and an undirected graph is constructed from the similarities between diabetes features. Spectral clustering (SC) then partitions the features into several groups of similar features, achieving clustering among nonlinearly related features, and a representative feature is chosen from each group according to its importance for classification. Comparing the selected subset with the original feature set on a support vector machine classifier, accuracy improved by 13.07% after 46 redundant features were removed. The experimental results show that the method effectively removes redundant features and yields a feature subset with excellent diabetes classification performance.
Keywords: feature selection, mixed-data dimensionality reduction, neighborhood mutual information, spectral clustering
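The grouping idea (cluster mutually informative features, keep one representative per group) can be sketched in plain Python. As stated assumptions: plain empirical mutual information on discrete features stands in for neighborhood MI, and connected components over a thresholded similarity graph stand in for spectral clustering:

```python
from math import log
from collections import Counter

def mutual_info(x, y):
    """Empirical mutual information between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum(c / n * log((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

def group_features(columns, tau):
    """Link features whose pairwise MI exceeds tau, form connected
    components (stand-in for spectral clustering), and return the
    first feature of each component as its representative."""
    m = len(columns)
    parent = list(range(m))
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    for i in range(m):
        for j in range(i + 1, m):
            if mutual_info(columns[i], columns[j]) > tau:
                parent[find(j)] = find(i)
    return sorted({find(i) for i in range(m)})
```

Two duplicated features collapse into one group while an independent feature stays separate, so the representatives drop the redundant copy.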
13. Bridge Damage Identification Based on Feature Engineering and a Deep Autoencoder
Authors: 侯怡, 钱松荣, 李雪梅. 《软件工程》, 2024, No. 6, pp. 63-67 (5 pages)
To address the low accuracy of damage identification in bridge monitoring, an identification scheme based on feature engineering and a deep autoencoder is proposed. First, the fast Fourier transform is used to analyze the characteristics and regularities of the raw data; next, a sliding window extracts, from the spectrogram, the modal frequencies that exhibit damage-related differences; finally, the sensitive features retaining the most damage information, selected by principal component analysis, are used as the input of the deep autoencoder. Experimental results show that the new indicators produced by feature engineering improve the model's identification ability and computational efficiency: with a feature dimensionality of only 14.9% of the original dataset, the model's identification precision rises from 81.12% to 98.67%.
Keywords: damage identification, feature engineering, feature extraction, data dimensionality reduction, deep learning
14. Design of a Support Vector Machine-based Algorithm for Gridded Power Load Forecasting (Cited: 3)
Authors: 徐良德, 郭挺, 雷才嘉, 陈中豪, 刘恒玮. 《电子设计工程》, 2024, No. 3, pp. 12-16 (5 pages)
To address the poor predictive ability and low efficiency of power-grid load forecasting algorithms, this paper proposes a PCA-PSO-SVM algorithm. It introduces principal component analysis into the classic particle swarm optimization algorithm, giving the model the ability to reduce data dimensionality and algorithmic redundancy, and uses the improved PCA-PSO algorithm to optimally select the built-in parameters of the SVM model, so that the improved SVM achieves the best classification performance. In the experiments, PCA selected the 6 data features within a 91% cumulative contribution rate for sample training. The results show that, compared with the other algorithms tested, the proposed algorithm achieves the smallest RMSE, MAE, and MAPE, demonstrating that it can forecast gridded power-grid load. The algorithm also improves on the accuracy of traditional algorithms and provides strong support for power load allocation.
Keywords: support vector machine, particle swarm optimization, principal component analysis, data dimensionality reduction, power load forecasting
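The "features within a 91% contribution rate" step corresponds to choosing the smallest number of principal components whose cumulative explained-variance ratio reaches 0.91. A minimal sketch on toy data with one dominant direction (function name and data are illustrative):

```python
import numpy as np

def n_components_for(X, target=0.91):
    """Smallest number of principal components whose cumulative
    explained-variance ratio reaches the target."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    ratio = (s ** 2) / (s ** 2).sum()          # explained-variance ratios
    return int(np.searchsorted(np.cumsum(ratio), target) + 1)

rng = np.random.default_rng(1)
base = rng.normal(size=(200, 1))
# one dominant direction plus four low-variance noise features
X = np.hstack([10 * base, 0.1 * rng.normal(size=(200, 4))])
```

Here the dominant column carries nearly all the variance, so a single component already clears the 91% threshold.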
15. Identification of Expressway Toll Evasion Events Based on K-Nearest Neighbors
Authors: 段钢, 许慧玲, 黄诗音, 林述韬, 赵建东. 《科学技术与工程》, 2024, No. 25, pp. 10974-10982 (9 pages)
To address the low efficiency and high cost of current expressway toll auditing, a toll-evasion identification model based on the k-nearest-neighbor (KNN) algorithm is proposed. First, the behavior of "large vehicle, small classification tag" evasion on expressways is analyzed, and identification and verification algorithms are designed to extract evasion samples from raw toll data. Next, after preprocessing the raw evasion data, principal component analysis is applied for feature dimensionality reduction. Finally, given the extremely imbalanced class distribution of the evasion dataset, the data are balanced with an improved synthetic minority oversampling technique (Borderline-SMOTE2), and a KNN-based classification model of evasion behavior is built. Validation shows that the model achieves a precision of 0.75, a recall of 0.84, and an F1 score of 0.79, indicating high classification accuracy for evasion samples and good model performance. The model establishes processing rules and algorithms tailored to the high-dimensional, imbalanced nature of evasion data, improves identification accuracy, and can help toll auditing effectively screen evasion behavior and reduce toll revenue losses.
Keywords: toll evasion behavior, feature dimensionality reduction, data balancing, KNN algorithm
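The balancing step can be illustrated with the core SMOTE interpolation that Borderline-SMOTE2 refines: new minority samples are drawn on segments between a minority sample and one of its nearest minority neighbors. This simplified stand-in omits the borderline/danger-zone selection of the actual Borderline-SMOTE2 algorithm:

```python
import numpy as np

def smote_like(minority, n_new, k=2, seed=0):
    """Generate synthetic minority samples by interpolating between a
    random minority sample and one of its k nearest minority
    neighbors (simplified stand-in for Borderline-SMOTE2)."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # skip the sample itself
        j = rng.choice(nbrs)
        lam = rng.random()                  # interpolation factor in [0, 1)
        out.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(out)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synth = smote_like(minority, 5)
```

Because each synthetic point lies on a segment between two real minority samples, all generated coordinates stay within the minority samples' bounding box.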
16. A Maximum-Relevance Maximum-Difference Feature Selection Algorithm for High-Dimensional Data
Authors: 孟圣洁, 于万钧, 陈颖. 《计算机应用》 (CSCD), 2024, No. 3, pp. 767-771 (5 pages)
To address the redundant information and excessive dimensionality of high-dimensional data, an information-based maximum-relevance maximum-difference feature selection algorithm (MCD) is proposed. First, mutual information (MI) measures the relevance between features and labels, the features are ranked, and the feature with the largest MI is added to the feature subset. Then, an information distance is introduced to measure the redundancy and difference between features, and an evaluation criterion is designed to score each feature so as to maximize the relevance between the subset's features and the labels and the difference among the features. Finally, a forward search strategy combined with the evaluation criterion performs attribute reduction to optimize the feature subset. Using two different classifiers, comparison experiments were conducted on 6 datasets against 5 classic algorithms including mRMR (minimal-Redundancy-Maximal-Relevance) and RReliefF, with classification accuracy used to verify MCD's effectiveness. With a support vector machine (SVM) classifier, average accuracy improved by 5.67 to 23.80 percentage points; with a k-nearest-neighbor (KNN) classifier, by 2.69 to 25.18 percentage points. In the vast majority of cases, MCD effectively removes redundant features and clearly improves classification accuracy.
Keywords: feature selection, high-dimensional data, feature redundancy, relevance, classification accuracy, dimensionality reduction
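The forward-search skeleton the abstract describes can be sketched with an mRMR-style criterion (relevance minus mean similarity to the already selected features) standing in for MCD's information-distance criterion, which is not given here; all names are illustrative:

```python
from math import log
from collections import Counter

def mi(x, y):
    """Empirical mutual information between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum(c / n * log((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

def select(features, label, m):
    """Greedy forward search: seed with the most label-relevant
    feature, then repeatedly add the feature maximizing relevance
    minus mean MI with the already selected set."""
    chosen = [max(range(len(features)), key=lambda j: mi(features[j], label))]
    while len(chosen) < m:
        rest = [j for j in range(len(features)) if j not in chosen]
        chosen.append(max(rest, key=lambda j: mi(features[j], label)
                          - sum(mi(features[j], features[c])
                                for c in chosen) / len(chosen)))
    return chosen
```

With a feature set containing an exact duplicate, the search skips the duplicate in favor of a less relevant but non-redundant feature.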
17. A Gene Feature Extraction Method Based on Cross-modal Nearest-Neighbor Manifold Scatter
Authors: 王孟明, 张志鹏, 侯雅魁. 《湖北民族大学学报(自然科学版)》 (CAS), 2024, No. 1, pp. 59-63 (5 pages)
Because gene expression data are high-dimensional, small-sample, and noisy, extracting effective features for gene classification is difficult. To solve this, a cross-modal nearest neighbor manifold scatter (CNNMS) method is proposed, which builds on kernel methods and uses neighborhood data, further reducing the impact of class imbalance on classification accuracy. In addition, exploiting the fact that neighbor means are little affected by outliers, CNNMS maps high-dimensional gene features into a kernel space and defines a sample's neighbor mean as the mean distance between the sample and its neighbor samples, so that the cross-modal nearest-neighbor manifold scatter subspace preserves the compactness within same-class features to the greatest extent. Experimental results show that CNNMS achieves a classification recognition rate above 98% on a lung cancer gene expression dataset and also performs well on a gastric cancer gene expression dataset, exhibiting better classification ability than other methods. Its high recognition rate in gene classification makes CNNMS significant for research on gene feature extraction.
Keywords: gene feature extraction, canonical correlation analysis, data dimensionality reduction, gene classification, neighbor scatter, discriminant sensitivity, cancer diagnosis
18. Transformer Fault Diagnosis Based on Stepwise Feature Selection and WOA-LSSVM
Authors: 谢乐, 杨浙, 潘成南. 《电工电气》, 2024, No. 8, pp. 31-36 (6 pages)
To improve the accuracy of transformer fault diagnosis and safeguard stable grid operation, a fault diagnosis model is proposed that combines stepwise feature selection, based on the ReliefF algorithm and landmark isometric mapping (L-Isomap), with a least-squares support vector machine (LSSVM) optimized by the whale optimization algorithm (WOA). Seven gases commonly used as fault features in dissolved gas analysis (DGA), together with 16 ratio features constructed from them, form the initial feature set. ReliefF performs feature selection on the initial feature set, L-Isomap then reduces the dimensionality of the fused feature set, and the reduced features serve as the fault feature vector input to the diagnosis model, which is trained and tested with WOA-LSSVM. Experimental results show that the model reaches a diagnostic accuracy of 98.31%, higher than that of the other models compared.
Keywords: transformer, fault diagnosis, stepwise feature selection, dimensionality reduction, whale optimization algorithm, least squares support vector machine
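The first selection stage uses ReliefF; a minimal two-class sketch of the Relief weight-update idea it generalizes (reward features that differ on the nearest miss and agree on the nearest hit) is shown below. This is the basic Relief scheme, not the multi-class, multi-neighbor ReliefF variant, and the data are toy values rather than DGA ratios:

```python
import numpy as np

def relief_weights(X, y, seed=0):
    """Two-class Relief weights: for each sample, add the feature-wise
    distance to its nearest miss and subtract the distance to its
    nearest hit."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for i in rng.permutation(n):
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                     # exclude the sample itself
        same = np.where(y == y[i])[0]
        diff = np.where(y != y[i])[0]
        hit = same[np.argmin(dist[same])]
        miss = diff[np.argmin(dist[diff])]
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w

X = np.array([[0.0, 1.0], [0.1, 1.0], [1.0, 1.0], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
w = relief_weights(X, y)
```

The discriminative first feature accumulates a positive weight while the constant second feature scores zero, which is the ranking signal the paper feeds into its stepwise selection.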
19. Global Aerodynamic Design Optimization Based on Data Dimensionality Reduction (Cited: 8)
Authors: Yasong Qiu, Junqiang Bai, Nan Liu, Chen Wang. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2018, No. 4, pp. 643-659 (17 pages)
In aerodynamic optimization, global optimization methods such as genetic algorithms are preferred in many cases because of their advantage in reaching the global optimum. However, for complex problems that require large numbers of design variables, the computational cost becomes prohibitive, so new global optimization strategies are required. To address this need, a data dimensionality reduction method is combined with global optimization methods to form a new global optimization system, aiming to improve the efficiency of conventional global optimization. The system applies Proper Orthogonal Decomposition (POD) to reduce the dimensionality of the design space while maintaining the generality of the original design space. In addition, an acceleration approach for sample calculation in surrogate modeling is applied to reduce computational time while providing sufficient accuracy. Optimizations of the transonic airfoil RAE2822 and the transonic wing ONERA M6 demonstrate the effectiveness of the proposed system: the number of design variables is reduced from 20 to 10 and from 42 to 20 respectively. The new design optimization system converges faster, reaching a better design in one third of the total time of traditional optimization, thus significantly reducing the overall optimization time and improving the efficiency of conventional global design optimization.
Keywords: aerodynamic shape design optimization, data dimensionality reduction, genetic algorithm, Kriging surrogate model, proper orthogonal decomposition
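The POD step that shrinks the design space can be sketched with an SVD of a snapshot matrix: shapes generated from a few underlying modes are represented exactly by as many POD coefficients. The toy snapshot data below stand in for the paper's RAE2822/ONERA M6 parameterizations, which are not reproduced here:

```python
import numpy as np

# Toy snapshot matrix: 50 sampled shapes built from 3 deformation modes,
# each shape described by 40 surface points.
rng = np.random.default_rng(0)
modes_true = rng.normal(size=(3, 40))
coeffs = rng.normal(size=(50, 3))
snapshots = coeffs @ modes_true

# POD = SVD of the mean-centered snapshot matrix
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
k = 3                                   # retained POD modes
basis = Vt[:k]                          # reduced design space (k x 40)

# any snapshot is recovered from just k coefficients
alpha = (snapshots[0] - mean) @ basis.T
recon = mean + alpha @ basis
```

Since the toy snapshots live in a 3-dimensional subspace, 3 POD coefficients reconstruct them exactly; in the paper the same mechanism cuts 20 and 42 design variables down to 10 and 20.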
20. Research on Android Application Vulnerability Detection Based on Reverse Engineering (Cited: 1)
Authors: 许庆富, 谈文蓉, 王彩霞. 《西南民族大学学报(自然科学版)》 (CAS), 2018, No. 5, pp. 512-520 (9 pages)
Android applications face a wide variety of security threats. To detect potential vulnerabilities before attackers can exploit them, an application vulnerability detection technique based on reverse analysis of APKs is proposed. Building on decompilation of the APK's static code, a feature extraction algorithm parses the smali static code into function call graphs as the feature source and establishes a model for extracting the original feature set; an improved ReliefF feature selection algorithm then reduces the dimensionality of the original feature set, extracting vulnerability feature vectors from the APK package, from which detection rules are constructed in turn. The feature vectors are matched with regular expressions against the vulnerability signatures recorded in an Android vulnerability library to uncover potential security vulnerabilities. A system model implementing this detection method was built and evaluated in comparative experiments; the results show a vulnerability detection rate above 91%. The technique can therefore effectively uncover common security vulnerabilities in Android applications.
Keywords: mobile security, reverse engineering, vulnerability mining, feature extraction, data dimensionality reduction