Journal Articles (247 results found)
1. Learning Vector Quantization-Based Fuzzy Rules Oversampling Method
Authors: Jiqiang Chen, Ranran Han, Dongqing Zhang, Litao Ma. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 5067-5082 (16 pages).
Imbalanced datasets are common in practical applications, and oversampling methods using fuzzy rules have been shown to enhance the classification performance of imbalanced data by taking into account the relationship between data attributes. However, the creation of fuzzy rules typically depends on expert knowledge, which may not fully leverage the label information in training data and may be subjective. To address this issue, a novel fuzzy rule oversampling approach is developed based on the learning vector quantization (LVQ) algorithm. In this method, the label information of the training data is utilized to determine the antecedent part of If-Then fuzzy rules by dynamically dividing attribute intervals using LVQ. Subsequently, fuzzy rules are generated and adjusted to calculate rule weights. The number of new samples to be synthesized for each rule is then computed, and samples from the minority class are synthesized based on the newly generated fuzzy rules. This results in the establishment of a fuzzy rule oversampling method based on LVQ. To evaluate the effectiveness of this method, comparative experiments are conducted on 12 publicly available imbalanced datasets with five other sampling techniques in combination with the support function machine. The experimental results demonstrate that the proposed method can significantly enhance the classification algorithm across seven performance indicators, including a boost of 2.15% to 12.34% in Accuracy, 6.11% to 27.06% in G-mean, and 4.69% to 18.78% in AUC. These show that the proposed method is capable of more efficiently improving the classification performance of imbalanced data.
Keywords: oversampling; fuzzy rules; learning vector quantization; imbalanced data; support function machine
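The paper above derives its synthesis rules from LVQ-trained fuzzy partitions, which are not reproduced here. As a point of comparison, the generic interpolation step that most oversampling methods share (synthesizing a minority sample between a seed point and one of its minority-class neighbours, as in SMOTE) can be sketched in a few lines of numpy; all names below are illustrative:

```python
import numpy as np

def interpolate_oversample(X_min, n_new, k=3, seed=0):
    """Synthesize n_new minority samples: pick a random minority seed,
    then interpolate toward one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    out = np.empty((n_new, X_min.shape[1]))
    for t in range(n_new):
        i = rng.integers(n)
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]   # skip the seed itself
        j = rng.choice(neighbours)
        lam = rng.random()                        # interpolation weight in [0, 1)
        out[t] = X_min[i] + lam * (X_min[j] - X_min[i])
    return out

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = interpolate_oversample(X_min, n_new=10)
print(X_new.shape)  # (10, 2)
```

Because each synthetic point is a convex combination of two minority points, the new samples stay inside the minority region; the fuzzy-rule method in the paper replaces this blind interpolation with rule-guided synthesis.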
2. A Fast Algorithm for Training Large Scale Support Vector Machines
Authors: Mayowa Kassim Aregbesola, Igor Griva. Journal of Computer and Communications, 2022, Issue 12, pp. 1-15 (15 pages).
The manuscript presents an augmented Lagrangian fast projected gradient method (ALFPGM) with an improved working set selection scheme, pWSS, a decomposition-based algorithm for training support vector classification machines (SVM). The manuscript describes the ALFPGM algorithm, provides numerical results for training SVMs on large data sets, and compares the training times of ALFPGM and the Sequential Minimal Optimization (SMO) algorithm from the Scikit-learn library. The numerical results demonstrate that ALFPGM with the improved working set selection scheme is capable of training SVMs with tens of thousands of training examples in a fraction of the training time of some widely adopted SVM tools.
Keywords: SVM; machine learning; support vector machines; FISTA; fast projected gradient; augmented Lagrangian; working set selection; decomposition
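The full ALFPGM method handles the dual equality constraint with an augmented Lagrangian and decomposes the problem over working sets; neither is shown here. The FISTA-style fast projected gradient core it builds on can be sketched on a simplified SVM dual (equality constraint dropped, so only a box projection is needed); this is an assumption-laden toy, not the authors' algorithm:

```python
import numpy as np

def fpgm_box_qp(Q, C, steps=300):
    """FISTA-style fast projected gradient for
       min 0.5*a'Qa - sum(a)  s.t.  0 <= a_i <= C
    (the SVM dual with the equality constraint omitted for brevity)."""
    n = Q.shape[0]
    L = max(np.linalg.eigvalsh(Q)[-1], 1e-9)    # Lipschitz constant of the gradient
    a = np.zeros(n)
    y = a.copy()
    t = 1.0
    for _ in range(steps):
        grad = Q @ y - 1.0
        a_next = np.clip(y - grad / L, 0.0, C)  # gradient step + box projection
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = a_next + ((t - 1.0) / t_next) * (a_next - a)  # momentum extrapolation
        a, t = a_next, t_next
    return a

# Toy Gram matrix: two orthogonal unit samples with opposite labels.
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
alpha = fpgm_box_qp(Q, C=2.0)
print(alpha)  # converges to [1., 1.], the unconstrained minimizer inside the box
```

The momentum sequence is what gives FISTA its O(1/k²) rate over plain projected gradient; working set selection (pWSS in the paper) then restricts each such solve to a small subset of variables.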
3. Data Selection Using Support Vector Regression
Authors: Michael B. Richman, Lance M. Leslie, Theodore B. Trafalis, Hicham Mansouri. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2015, Issue 3, pp. 277-286 (10 pages).
Geophysical data sets are growing at an ever-increasing rate, requiring computationally efficient data selection (thinning) methods to preserve essential information. Satellites, such as WindSat, provide large data sets for assessing the accuracy and computational efficiency of data selection techniques. A new data thinning technique, based on support vector regression (SVR), is developed and tested. To manage large online satellite data streams, observations from WindSat are formed into subsets by Voronoi tessellation and then each is thinned by SVR (TSVR). Three experiments are performed. The first confirms the viability of TSVR for a relatively small sample, comparing it to several commonly used data thinning methods (random selection, averaging and Barnes filtering), producing a 10% thinning rate (90% data reduction), low mean absolute errors (MAE) and large correlations with the original data. A second experiment, using a larger dataset, shows TSVR retrievals with MAE < 1 m s⁻¹ and correlations ≥ 0.98. TSVR was an order of magnitude faster than the commonly used thinning methods. A third experiment applies a two-stage pipeline to TSVR to accommodate online data. The pipeline subsets reconstruct the wind field with the same accuracy as the second experiment and are an order of magnitude faster than the non-pipeline TSVR. Therefore, pipeline TSVR is two orders of magnitude faster than commonly used thinning methods that ingest the entire data set. This study demonstrates that TSVR pipeline thinning is an accurate and computationally efficient alternative to commonly used data selection techniques.
Keywords: data selection; data thinning; machine learning; support vector regression; Voronoi tessellation; pipeline methods
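The key property SVR thinning exploits is that observations inside the epsilon-insensitive tube do not become support vectors, so the support vectors alone summarize the field. A minimal one-dimensional sketch of that idea (using scikit-learn's `SVR`, not the paper's Voronoi-partitioned pipeline, and a synthetic signal rather than WindSat data):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 6.0, 300)).reshape(-1, 1)
y = np.sin(x).ravel() + 0.05 * rng.standard_normal(300)

# Fit with an epsilon tube wider than the noise; points inside the tube
# carry little extra information and are discarded by the thinning.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(x, y)
kept = model.support_                       # indices of the retained subset
mae = np.abs(model.predict(x) - y).mean()   # reconstruction error on all points
print(f"kept {len(kept)}/{len(x)} points, MAE = {mae:.3f}")
```

Tuning `epsilon` trades thinning rate against reconstruction error, which mirrors the 10% thinning / low-MAE operating point reported in the abstract.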
4. Temperature Prediction Model Identification Using Cyclic Coordinate Descent Based Linear Support Vector Regression (Cited by 1)
Authors: 张堃, 费敏锐, 吴建国, 张培建. Journal of Donghua University (English Edition) (EI, CAS), 2014, Issue 2, pp. 113-118 (6 pages).
Temperature prediction plays an important role in ring die granulator control, which can influence the quantity and quality of production. Temperature prediction modeling is a complicated problem given its MIMO, nonlinear, and large time-delay characteristics. The support vector machine (SVM) has been applied successfully to modeling from small data, but its accuracy is not high; conversely, as the amount of data and the feature dimension increase, model training time grows dramatically. In this paper, a linear SVM combined with cyclic coordinate descent (CCD) is applied to big data regression. The method is strictly proved mathematically and validated by simulation, and experiments on real data confirm the linear SVM model's effectiveness. Compared with other big data methods in simulation, this algorithm shows clear advantages not only in fast modeling but also in high goodness of fit.
Keywords: linear support vector machine (SVM); cyclic coordinate descent (CCD); optimization; big data; fast identification
5. Flood Forecasting of Malaysia Kelantan River using Support Vector Regression Technique
Authors: Amrul Faruq, Aminaton Marto, Shahrum Shah Abdullah. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 12, pp. 297-306 (10 pages).
Rainstorms are believed to contribute to flood disasters in upstream catchments, with further consequences in downstream areas due to the rise of river water levels. Forecasting flood water levels is a challenging and complex task owing to its nonlinearities and dependencies. This study proposes a support vector machine regression model, regarded as a powerful machine learning-based technique, to forecast flood water levels in a downstream area for different lead times. As a case study, the Kelantan River in Malaysia was selected to validate the proposed model. Four water level stations in the upstream river basin were identified as input variables, and a river water level in the downstream area was selected as the output of the flood forecasting model. A comparison with several benchmarking models, including radial basis function (RBF) and nonlinear autoregressive with exogenous input (NARX) neural networks, was performed. The results demonstrate that in terms of RMSE the NARX model was better, but support vector regression (SVR) showed a more consistent performance, indicated by the highest coefficient of determination for the twelve-hour-ahead forecast. The findings signify that SVR is more capable of addressing long-term flood forecasting problems.
Keywords: flood forecasting; support vector machine; machine learning; artificial intelligence; disaster risk reduction; data mining
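Forecasting "for different lead times", as in the study above, amounts to re-pairing the inputs and the target with a time offset before any regressor is trained. A small numpy sketch of that dataset construction (station layout and values are invented for illustration):

```python
import numpy as np

def make_lead_dataset(upstream, downstream, lead):
    """Pair upstream station readings at time t with the downstream
    level observed `lead` steps later; one such dataset per lead time."""
    X = upstream[:-lead]      # inputs up to the last usable time step
    y = downstream[lead:]     # target shifted forward by the lead
    return X, y

upstream = np.arange(20, dtype=float).reshape(10, 2)  # 10 steps, 2 stations
downstream = np.arange(10, dtype=float)
X, y = make_lead_dataset(upstream, downstream, lead=3)
print(X.shape, y.shape)  # (7, 2) (7,)
```

Each lead time (e.g. the twelve-hour horizon in the abstract) yields its own (X, y) pair and hence its own fitted SVR model.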
6. State of the art in applications of machine learning in steelmaking process modeling (Cited by 6)
Authors: Runhao Zhang, Jian Yang. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS, CSCD), 2023, Issue 11, pp. 2055-2075 (21 pages).
With the development of automation and informatization in the steelmaking industry, the human brain gradually fails to cope with the increasing amount of data generated during the steelmaking process. Machine learning technology provides a new way of dealing with large amounts of data beyond production experience and metallurgical principles, and its application in the steelmaking process has become a research hotspot in recent years. This paper provides an overview of the applications of machine learning in steelmaking process modeling, covering hot metal pretreatment, primary steelmaking, secondary refining, and some other aspects. The three most frequently used machine learning algorithms in steelmaking process modeling are the artificial neural network, support vector machine, and case-based reasoning, with proportions of 56%, 14%, and 10%, respectively. Data collected in steelmaking plants are frequently faulty; thus, data processing, especially data cleaning, is crucially important to the performance of machine learning models. The detection of variable importance can be used to optimize process parameters and guide production. Machine learning is used in hot metal pretreatment modeling mainly for endpoint S content prediction. Predictions of the endpoint element compositions and process parameters are widely investigated in primary steelmaking. Machine learning is used in secondary refining modeling mainly for ladle furnace, Ruhrstahl-Heraeus, vacuum degassing, argon oxygen decarburization, and vacuum oxygen decarburization processes. Further development of machine learning in steelmaking process modeling can be realized through additional efforts in constructing data platforms, industrially transferring research achievements to the practical steelmaking process, and improving the universality of machine learning models.
Keywords: machine learning; steelmaking process modeling; artificial neural network; support vector machine; case-based reasoning; data processing
7. Machine Learning and Artificial Neural Network for Predicting Heart Failure Risk
Authors: Polin Rahman, Ahmed Rifat, MD. Iftehad Amjad Chy, Mohammad Monirujjaman Khan, Mehedi Masud, Sultan Aljahdali. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 1, pp. 757-775 (19 pages).
Heart failure is now widespread throughout the world; heart disease affects approximately 48% of the population and is expensive and difficult to cure. This research paper presents machine learning models to predict heart failure. The fundamental concept is to compare the correctness of various machine learning (ML) algorithms and to use boosting algorithms to improve the models' prediction accuracy. Supervised algorithms including K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Trees (DT), Random Forest (RF), and Logistic Regression (LR) are considered to achieve the best results. Boosting algorithms such as Extreme Gradient Boosting (XGBoost) and CatBoost are also used to improve the prediction, together with Artificial Neural Networks (ANN). The research also uses data visualization to identify patterns, trends, and outliers in a massive data set. Python and Scikit-learn are used for ML; TensorFlow and Keras, along with Python, are used for ANN model training. The DT and RF algorithms achieved the highest accuracy of 95% among the classifiers, while KNN obtained the second-highest accuracy of 93.33%. XGBoost reached a satisfactory accuracy of 91.67%; SVM, CatBoost, and ANN each achieved 90%; and LR achieved 88.33%.
Keywords: heart failure prediction; data visualization; machine learning; k-nearest neighbors; support vector machine; decision tree; random forest; logistic regression; XGBoost; CatBoost; artificial neural network
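The comparison loop this kind of study runs is straightforward to sketch with scikit-learn. The snippet below uses a synthetic stand-in for the (unavailable) heart failure dataset, so the scores it prints are illustrative only, not the paper's reported accuracies:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic binary-classification data standing in for the clinical records.
X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"KNN": KNeighborsClassifier(),
          "DT": DecisionTreeClassifier(random_state=0),
          "RF": RandomForestClassifier(random_state=0),
          "LR": LogisticRegression(max_iter=1000)}
# Fit each classifier on the same split and record its test accuracy.
scores = {name: clf.fit(Xtr, ytr).score(Xte, yte) for name, clf in models.items()}
print(scores)
```

Holding the train/test split fixed across models, as here, is what makes the resulting accuracy table a fair head-to-head comparison.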
8. Deep learning CNN-APSO-LSSVM hybrid fusion model for feature optimization and gas-bearing prediction
Authors: Jiu-Qiang Yang, Nian-Tian Lin, Kai Zhang, Yan Cui, Chao Fu, Dong Zhang. Petroleum Science (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 2329-2344 (16 pages).
Conventional machine learning (CML) methods have been successfully applied for gas reservoir prediction. Their prediction accuracy largely depends on the quality of the sample data; therefore, feature optimization of the input samples is particularly important. Commonly used feature optimization methods increase the interpretability of gas reservoirs; however, their steps are cumbersome, and the selected features cannot sufficiently guide CML models to mine the intrinsic features of sample data efficiently. In contrast to CML methods, deep learning (DL) methods can directly extract the important features of targets from raw data. Therefore, this study proposes a feature optimization and gas-bearing prediction method based on a hybrid fusion model that combines a convolutional neural network (CNN) and an adaptive particle swarm optimization-least squares support vector machine (APSO-LSSVM). This model adopts an end-to-end algorithm structure to directly extract features from sensitive multicomponent seismic attributes, considerably simplifying the feature optimization. A CNN was used for feature optimization to highlight sensitive gas reservoir information. APSO-LSSVM was used to fully learn the relationship between the features extracted by the CNN to obtain the prediction results. The constructed hybrid fusion model improves gas-bearing prediction accuracy through the two processes of feature optimization and intelligent prediction, giving full play to the advantages of DL and CML methods. The prediction results obtained are better than those of a single CNN model or APSO-LSSVM model. In the feature optimization process of multicomponent seismic attribute data, the CNN demonstrated better gas reservoir feature extraction capabilities than commonly used attribute optimization methods. In the prediction process, the APSO-LSSVM model can learn the gas reservoir characteristics better than the LSSVM model and has a higher prediction accuracy. The constructed CNN-APSO-LSSVM model had lower errors and a better fit on the test dataset than the other individual models. This method proves the effectiveness of DL technology for the feature extraction of gas reservoirs and provides a feasible way to combine DL and CML technologies to predict gas reservoirs.
Keywords: multicomponent seismic data; deep learning; adaptive particle swarm optimization; convolutional neural network; least squares support vector machine; feature optimization; gas-bearing distribution prediction
9. A Deep Autoregressive Model-Based Algorithm for Detecting Anomalous Traffic in Power Grids (Cited by 1)
Authors: 李勇, 韩俊飞, 李秀芬, 王鹏, 王蓓. 《沈阳工业大学学报》 (Journal of Shenyang University of Technology) (CAS, Peking University Core), 2024, Issue 1, pp. 24-28 (5 pages).
To address the wide variety and sheer volume of behaviors in power grids, an anomalous traffic detection algorithm based on an autoregressive model is proposed. The algorithm uses a deep autoencoder network to automatically extract features from network traffic data, shortening the analysis cycle of anomaly detection and automatically mining hierarchical relationships in the data. A support vector machine then classifies the extracted features to detect anomalous traffic. Simulation results show that the proposed algorithm can analyze different attack vectors and avoid interference from noisy data, thereby improving the accuracy of grid anomaly detection, which is of practical significance for traffic data processing.
Keywords: autoregressive model; deep learning; anomaly detection; massive data; analysis cycle; support vector machine
10. A State Monitoring Method for Command Information Systems Based on SVM Incremental Learning with Imbalanced Data
Authors: 焦志强, 易侃, 张杰勇, 姚佩阳. 《系统工程与电子技术》 (Systems Engineering and Electronics) (EI, CSCD, Peking University Core), 2024, Issue 3, pp. 992-1003 (12 pages).
Given the limited historical state samples of command information systems, an SVM incremental learning method for imbalanced data is designed based on support vector machines (SVM). To handle the imbalance between normal and abnormal state samples, new samples are first generated from the support vectors, and then, using a banding strategy, further samples with a more uniform distribution are produced band by band to adjust the imbalance ratio of the original sample set. Because system monitoring demands real-time performance and new samples keep arriving during operation, the classification model is continuously updated through incremental learning; by relaxing the KKT (Karush-Kuhn-Tucker) update-triggering condition, defining sample importance, and introducing retention and forgetting rates, the number of samples that must be retrained during incremental learning is reduced. To verify the effectiveness and superiority of the algorithm, experiments compare it with existing algorithms on a dataset collected from a real system and on three types (six groups) of imbalanced datasets from the UCI repository. The results show that the proposed algorithm achieves effective incremental learning on imbalanced data and thus meets the state-monitoring requirements of command information systems.
Keywords: command information system; system monitoring; support vector machine; imbalanced data; incremental learning
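The KKT-based triggering idea in the abstract (retrain only when arriving samples violate the optimality conditions of the current model) can be illustrated with scikit-learn's `SVC`: for a correctly classified point outside the margin, y·f(x) ≥ 1 holds, so only margin violators need to enter the retraining set. The data and threshold below are illustrative, not the paper's relaxed condition:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated training clusters (normal vs. abnormal states).
X = np.vstack([rng.standard_normal((100, 2)) + [3.0, 0.0],
               rng.standard_normal((100, 2)) - [3.0, 0.0]])
y = np.array([1] * 100 + [-1] * 100)
clf = SVC(kernel="linear", C=1.0).fit(X, y)

# A new batch arrives near the decision boundary; screen it with the
# margin condition y * f(x) >= 1 and keep only the violators.
X_new = rng.standard_normal((50, 2))
y_new = np.where(X_new[:, 0] > 0, 1, -1)
margins = y_new * clf.decision_function(X_new)
violators = X_new[margins < 1.0]
print(f"{len(violators)} of {len(X_new)} new samples would trigger retraining")
```

Screening the stream this way is what keeps the incremental update cheap: samples already consistent with the model are discarded without touching the optimizer.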
11. Machine Learning-Based Formation Pressure Prediction for Carbonate Reservoirs (Cited by 1)
Authors: 孙浩, 夏朝辉, 李云波, 余洋, 杨朝蓬, 徐立坤. 《中国海上油气》 (China Offshore Oil and Gas) (CAS, CSCD, Peking University Core), 2024, Issue 2, pp. 129-140 (12 pages).
Formation pressure bears on adjustments to development schemes and to production and injection allocation, making it an extremely important parameter in oilfield development. Obtaining it, however, requires shutting in wells for pressure build-up, a cumbersome operation; numerical simulation is labor-intensive and computationally expensive; and existing formula-based methods are ill-suited to carbonate reservoirs operated under complex working regimes. Building on data preprocessing steps such as correlation analysis and principal component analysis, and combining an elitist-strategy genetic algorithm with support vector regression (SEGA-SVR), a data-driven formation pressure prediction model is established. The SEGA-SVR model achieves a coefficient of determination of 0.97 with a root mean square error of 0.04 on the training set and 0.95 with 0.05 on the test set, and it also performs well on validation wells in an adjacent block. Its performance is greatly improved over the plain SVR model and is, overall, the best among the machine learning models compared. The results show that the SEGA-SVR model can predict real-time formation pressure without shutting in wells, that hyperparameter tuning via the genetic algorithm saves time and effort, and that the data-driven approach adapts well to complex conditions. The model also shows good generalization and stability, providing a new method for formation pressure prediction in carbonate oilfields.
Keywords: carbonate reservoir; formation pressure prediction; data-driven; machine learning; genetic algorithm; support vector machine
12. A Deep Autoencoding Support Vector Data Description Model for Anomaly Detection in Power Equipment
Authors: 耿波, 潘曙辉, 董晓旭. 《湖南电力》 (Hunan Electric Power), 2024, Issue 1, pp. 119-127 (9 pages).
To address the limited ability of deep autoencoding support vector data description (deep SVDD) models to distinguish certain anomalies in power equipment, a self-supervised, mixture-of-experts enhanced deep autoencoding SVDD model is proposed. Multiple self-supervised transformation datasets are constructed to simulate potential unknown anomalies, and self-supervised classification and masked reconstruction tasks are introduced to learn more discriminative representations. In addition, the encoder is restructured as a mixture-of-experts model that routes data to different expert submodules for specialized learning, yielding clearer anomaly decision boundaries. Experiments on four public datasets and three power plant equipment datasets confirm the effectiveness of the self-supervised learning and mixture-of-experts components.
Keywords: anomaly detection; deep autoencoding support vector data description; self-supervised learning; mixture-of-experts model
13. A Least Squares Twin Support Vector Machine Learning Algorithm for Uncertain Data (Cited by 1)
Authors: 刘锦能, 肖燕珊, 刘波. 《广东工业大学学报》 (Journal of Guangdong University of Technology) (CAS), 2024, Issue 1, pp. 79-85 (7 pages).
Twin support vector machines solve binary classification problems by computing two quadratic programming problems to obtain two nonparallel hyperplanes. In practical applications, however, data often carry uncertain information, which complicates model construction. To address this, a least squares twin support vector machine model for uncertain data is proposed. First, each instance is assigned a noise vector to model its noise information. Next, the noise vectors are incorporated into the least squares twin support vector machine and optimized during training. Finally, a two-step iterative heuristic framework alternately solves for the classifier and updates the noise vectors. Experiments show that, compared with other baselines, the proposed method, which models uncertainty via noise vectors and converts the quadratic programming problems of the twin SVM into linear equations, achieves better classification accuracy and higher training efficiency.
Keywords: least squares; twin support vector machine; nonparallel plane learning; data uncertainty; classification
14. A Survey of Sample Reduction Algorithms for SVM (Cited by 1)
Authors: 张代俐, 汪廷华, 朱兴淋. 《计算机科学》 (Computer Science) (CSCD, Peking University Core), 2024, Issue 7, pp. 59-70 (12 pages).
The support vector machine (SVM) is a supervised machine learning algorithm developed from statistical learning theory and the structural risk minimization principle. It effectively overcomes problems such as local minima and the curse of dimensionality, generalizes well, and is widely applied in pattern recognition and artificial intelligence. However, SVM training efficiency degrades markedly as the number of training samples grows; for large-scale training sets, conventional SVMs using standard optimization methods face excessive memory demands and slow execution, and sometimes cannot run at all. To ease the high storage requirements and long training times of SVM on large-scale training sets, researchers have proposed SVM sample reduction algorithms. This paper first introduces the theoretical foundations of SVM, then systematically reviews the state of the art in SVM sample reduction from five perspectives: clustering, geometric analysis, active learning, incremental learning, and random sampling. The strengths and weaknesses of each family of algorithms are discussed, and the paper closes with a summary and an outlook on future work.
Keywords: support vector machine; large-scale datasets; sample reduction; machine learning; classification
15. A One-Class Multi-Instance Learning Algorithm Based on Multiple Kernel Learning
Authors: 古慧敏, 肖燕珊, 刘波. 《广东工业大学学报》 (Journal of Guangdong University of Technology) (CAS), 2024, Issue 2, pp. 101-107 (7 pages).
Multiple kernel learning is introduced into one-class multi-instance learning, yielding a one-class multi-instance support vector data description algorithm based on multiple kernels, which addresses the problem that multi-instance data in real applications often exhibit rather complex distribution structures. The algorithm maps instance data into feature space through several different kernel functions and builds a spherical classifier there via support vector data description. It adopts an iterative optimization framework: first, the objective function is optimized using the positive instances in the initialized bags to build a classifier; then the labels of the positive instances in the bags are updated according to the classifier obtained in the previous step. Experimental results on the Corel, VOC 2007, and Messidor datasets show that the proposed algorithm outperforms single-kernel multi-instance methods, further verifying its feasibility and effectiveness.
Keywords: multiple kernel learning; one-class classification; support vector data description; multi-instance learning
16. A Fault Diagnosis Method for PEMFC Systems Based on P-L Dual Feature Extraction
Authors: 贺飞, 张雪霞, 陈维荣. 《太阳能学报》 (Acta Energiae Solaris Sinica) (EI, CAS, CSCD, Peking University Core), 2024, Issue 1, pp. 492-499 (8 pages).
For fault diagnosis of proton exchange membrane fuel cell (PEMFC) systems, a diagnosis method based on P-L dual feature extraction is proposed. P-L dual feature extraction is applied to the preprocessed sample data; by removing redundant variables and performing a second round of feature extraction, classification-relevant features are retained as far as possible while the sample dimensionality is effectively reduced. A binary-tree multiclass support vector machine and an extreme learning machine then classify the two-dimensional fault feature vectors to realize diagnosis. Case validation shows that, compared with the feature extraction of linear discriminant analysis, P-L dual feature extraction raises the test-set diagnosis accuracy of the same classifier by 21.19%, reaching 99.27%, achieving accurate and fast diagnosis of membrane drying and hydrogen supply faults in PEMFC systems.
Keywords: proton exchange membrane fuel cell; fault detection; data mining; P-L dual feature extraction; support vector machine; extreme learning machine
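The extreme learning machine used as one of the classifiers above has an unusually simple training rule: the hidden layer is random and fixed, and only the output weights are solved, in closed form, by least squares. A minimal numpy sketch on an invented regression target (the paper applies ELM to fault classification instead):

```python
import numpy as np

def elm_fit(X, y, hidden=50, seed=0):
    """Extreme learning machine: random tanh hidden layer; output
    weights beta solved by least squares via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))  # fixed random input weights
    b = rng.standard_normal(hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                   # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (200, 2))
y = X[:, 0] * X[:, 1]                    # smooth toy target to approximate
W, b, beta = elm_fit(X, y)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
print(f"training MSE: {mse:.5f}")
```

Because no gradient descent is involved, ELM training reduces to one linear solve, which is why it pairs well with fast diagnosis pipelines like the one in the paper.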
17. Data-Driven Power Flow Regression for Distribution Networks Based on Support Vector Regression
Authors: 张泰源, 周云海, 陈潇潇, 郑培城. 《三峡大学学报(自然科学版)》 (Journal of China Three Gorges University, Natural Sciences) (CAS, Peking University Core), 2024, Issue 3, pp. 91-98 (8 pages).
With large numbers of distributed generators connecting to distribution networks, power flow has shifted from one-way to two-way, and problems such as uneven flow distribution and voltage limit violations occur frequently, making power flow calculation suited to active distribution networks all the more important. The electrical parameters of medium- and low-voltage distribution networks are often unavailable, their measurement systems are less complete than those of transmission networks, and topology changes caused by switching actions are hard to reflect in the monitoring system in real time; in practice it is therefore difficult to apply admittance-matrix-based transmission power flow methods to them. In view of this, a data-driven linear regression model for power flow is proposed to enable flow calculation and analysis without a physical network model. First, mappings between the known and unknown quantities of different bus types are established. Next, the model's update rule under bus type conversion is derived. Then a power flow regression model based on support vector regression (SVR) is constructed, embedding a Gaussian kernel function and clustering the samples to better fit the nonlinearity of the power flow. Finally, simulations on several IEEE standard systems and a modified IEEE 33-bus system verify the effectiveness of the proposed method.
Keywords: data-driven; power flow calculation; support vector regression; bus type conversion; machine learning
18. A Support Vector Regression Method for Predicting Blast Furnace Hot Metal Output
Authors: 李建生, 盛钢, 张硕. 《河北冶金》 (Hebei Metallurgy), 2024, Issue 10, pp. 55-58, 70 (5 pages).
Blast furnace ironmaking is a key stage of steel production. Selecting a specified number of ladles for loading and transport based on the amount of hot metal tapped from the furnace improves productivity and reduces the energy consumption of the overall scheduling line, so accurately predicting hot metal output matters for downstream production scheduling. On the one hand, ironmaking involves a large number of physicochemical reactions and parameter variations, and the process cannot be observed externally in real time, so accurate automatic control through direct mechanistic analysis is difficult; on the other hand, the rich measurements recorded during the process, such as blast parameters, coke ratio, and slag composition, can be used for data-driven modeling and analysis. This paper analyzes the ideal hot metal flow rate through a mechanistic model and designs a machine learning model based on support vector regression to predict blast furnace output. Modeling the tapping data of a furnace producing 8,000 t of iron per day, experiments show that the support vector regression model predicts output with a mean error within 200 t, and its mean error and prediction standard deviation outperform other common machine learning models. This demonstrates the accuracy of the data-driven model and offers guidance for practical blast furnace analysis and modeling, reducing resource consumption and raising the productivity of the overall steel production line.
Keywords: blast furnace ironmaking; hot metal output; machine learning; support vector regression; data-driven
19. A Decision Model for Power Factor Remediation of Industrial Users Based on Multi-Source Data
Author: 李语菲. 《电气开关》 (Electric Switchgear), 2024, Issue 4, pp. 23-28, 31 (7 pages).
First, based on industrial users' power factor assessment charges and actual power factors, a label model for users with loss-reduction potential through reactive power remediation is proposed, assigning users six classes of tiered labels. Next, combining parameters of distribution transformers and lines, a benefit analysis model for reactive power remediation is proposed to quantify both the user-side losses and the grid-side supply energy losses caused by substandard power factors. Then, based on support vector machines and active learning, a tracking and association model of users' reactive power remediation status and needs is proposed to identify each user's remediation status and inform further service strategies. Finally, the economic loss-reduction effect of the model is validated on users in the service area of a power supply company.
Keywords: fused data; reactive power profiling; support vector machine; active learning; technical loss reduction
20. Deep Learning Based Intrusion Detection in Cloud Services for Resilience Management (Cited by 1)
Authors: S. Sreenivasa Chakravarthi, R. Jagadeesh Kannan, V. Anantha Natarajan, Xiao-Zhi Gao. Computers, Materials & Continua (SCIE, EI), 2022, Issue 6, pp. 5117-5133 (17 pages).
In the global scenario, one of the important goals for sustainable development in the industrial field is to innovate new technology and invest in building infrastructure. All developed and developing countries focus on building resilient infrastructure and promoting sustainable development by fostering innovation. At this juncture, cloud computing has become an important information and communication technology model influencing the sustainable development of industries in developing countries. As part of the innovations happening in the industrial sector, a new concept termed "smart manufacturing" has emerged, which employs the benefits of emerging technologies like the Internet of Things and cloud computing. Cloud services deliver on-demand access to computing, storage, and infrastructural platforms for industrial users through the Internet. In the recent era of information technology, the number of business and individual users of cloud services has increased, and larger volumes of data are being processed and stored in them. As a consequence, data breaches in cloud services are also increasing day by day. Due to various security vulnerabilities in the cloud architecture, the cloud environment has become non-resilient. To restore the normal behavior of the cloud, detect deviations, and achieve higher resilience, anomaly detection becomes essential. Deep learning-based anomaly detection mechanisms use various monitoring metrics to characterize the normal behavior of cloud services and identify abnormal events. This paper focuses on designing an intelligent deep learning based approach for detecting cloud anomalies in real time to make the cloud more resilient. The deep learning models are trained using features extracted from the system-level and network-level performance metrics observed in the Transfer Control Protocol (TCP) traces of the simulation. The experimental results of the proposed approach demonstrate superior performance in terms of a higher detection rate and a lower false alarm rate when compared to the Support Vector Machine (SVM).
Keywords: anomaly detection; network flow data; deep learning; migration; auto-encoder; support vector machine
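The auto-encoder detection principle named in the keywords (train a reconstruction model on normal behavior only; score test points by reconstruction error) can be illustrated without a neural network at all, using a linear "autoencoder", i.e. PCA projection. The data below are synthetic and the setup is a simplification of the paper's TCP-trace pipeline:

```python
import numpy as np

def recon_error(X_train, X_test, k=2):
    """Anomaly score = reconstruction error after projecting onto the
    top-k principal components learned from normal data only."""
    mu = X_train.mean(axis=0)
    components = np.linalg.svd(X_train - mu, full_matrices=False)[2][:k]
    recon = (X_test - mu) @ components.T @ components + mu
    return np.linalg.norm(X_test - recon, axis=1)

rng = np.random.default_rng(0)
basis = rng.standard_normal((2, 6))   # normal metrics lie near a 2-D plane in 6-D
X_normal = rng.standard_normal((300, 2)) @ basis \
           + 0.01 * rng.standard_normal((300, 6))
X_anom = X_normal[:20] + 3.0 * rng.standard_normal((20, 6))  # pushed off the plane

scores_normal = recon_error(X_normal, X_normal[250:])
scores_anom = recon_error(X_normal, X_anom)
print(f"normal mean {scores_normal.mean():.3f} vs anomalous mean {scores_anom.mean():.3f}")
```

A deep auto-encoder replaces the linear projection with a learned nonlinear one, but the decision rule is the same: flag points whose reconstruction error exceeds a threshold calibrated on normal traffic.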