Journal Articles
5 articles found
Prototypical Network Based on Manhattan Distance
1
Authors: Zengchen Yu, Ke Wang, Shuxuan Xie, Yuanfeng Zhong, Zhihan Lv. Computer Modeling in Engineering & Sciences (SCIE, EI), 2022, Issue 5, pp. 655-675 (21 pages)
Few-shot Learning algorithms can be effectively applied to fields where certain categories have only a small amount of data or a small amount of labeled data, such as medical images, terrorist surveillance, and so on. Metric Learning within Few-shot Learning classifies by measuring the similarity between classified and unclassified samples. This paper improves the Prototypical Network in Metric Learning by changing its core metric function to Manhattan distance. The Convolutional Neural Network of the embedding module is also modified, and mechanisms such as average pooling and Dropout are added. Comparative experiments show that this model converges in a small number of iterations (below 15,000 episodes) and that its performance exceeds algorithms such as MAML. The research shows that replacing Euclidean distance with Manhattan distance can effectively improve the classification performance of the Prototypical Network, and that mechanisms such as average pooling and Dropout can also effectively improve the model.
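The core classification step described above — comparing query embeddings to class prototypes under an L1 metric — can be sketched in a few lines of NumPy. The function names and toy embeddings below are illustrative, not taken from the paper:

```python
import numpy as np

def manhattan_cdist(queries, prototypes):
    """Pairwise Manhattan (L1) distances: (n_query, n_class) matrix."""
    return np.abs(queries[:, None, :] - prototypes[None, :, :]).sum(axis=-1)

def proto_classify(queries, support, support_labels, n_classes):
    """Assign each query embedding to the class of its nearest prototype.

    Each prototype is the mean embedding of one class's support samples,
    as in the standard Prototypical Network; only the metric differs.
    """
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in range(n_classes)])
    return manhattan_cdist(queries, prototypes).argmin(axis=1)
```

In the paper's setting, `queries` and `support` would be the outputs of the modified convolutional embedding module rather than raw features.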
Keywords: Few-shot Learning, Prototypical Network, Convolutional Neural Network, Manhattan distance
VGWO: Variant Grey Wolf Optimizer with High Accuracy and Low Time Complexity
2
Authors: Junqiang Jiang, Zhifang Sun, Xiong Jiang, Shengjie Jin, Yinli Jiang, Bo Fan. Computers, Materials & Continua (SCIE, EI), 2023, Issue 11, pp. 1617-1644 (28 pages)
The grey wolf optimizer (GWO) is a swarm-based intelligence optimization algorithm that simulates the steps of searching, encircling, and attacking prey in wolf hunting. Alongside its advantages of a simple principle and few parameter settings, GWO bears drawbacks such as low solution accuracy and slow convergence speed. Several recent advanced GWOs attempt to overcome these disadvantages, but they are either difficult to apply to large-scale problems due to high time complexity or prone to premature convergence. To solve these issues, a high-accuracy variant grey wolf optimizer (VGWO) with low time complexity is proposed in this study. VGWO first uses a symmetrical wolf strategy to generate an initial population, laying the foundation for the algorithm's global search; then, inspired by the simulated annealing and differential evolution algorithms, after each iteration a mutation operation generates a new mutant individual from three wolves randomly selected from the current population. A vectorized Manhattan distance calculation method is specifically designed to evaluate the probability of selecting the mutant individual based on its status in the current wolf population, dynamically balancing VGWO's global search and fast convergence capability. Experiments are conducted on 19 benchmark functions from CEC2014 and CEC2020 and on three real-world engineering cases. On the 19 benchmark functions, VGWO's optimization results place first in 80% of comparisons against state-of-the-art GWOs and the CEC2020 competition winner. In a further evaluation based on the Friedman test, VGWO also statistically outperforms all other algorithms in terms of robustness, with a better average ranking value.
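The "vectorized Manhattan distance calculation" that the abstract highlights replaces a per-wolf loop with one broadcasted array operation, which is what keeps the per-iteration cost low. The paper's actual selection-probability formula is not reproduced here; this sketch shows only the distance step, with illustrative names:

```python
import numpy as np

def manhattan_to_population(mutant, population):
    """Vectorized Manhattan distances from one mutant to every wolf.

    population: (n_wolves, dim) array; mutant: (dim,) array.
    Equivalent to looping over wolves and summing |x_i - m_i| per
    dimension, but computed in a single broadcasted operation.
    """
    return np.abs(population - mutant).sum(axis=1)
```

These distances would then feed whatever probability rule the paper uses to decide whether the mutant replaces a member of the current population.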
Keywords: intelligence optimization algorithm, grey wolf optimizer (GWO), Manhattan distance, symmetric coordinates
A Novel Approach to Design Distribution Preserving Framework for Big Data
3
Authors: Mini Prince, P.M. Joe Prathap. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 3, pp. 2789-2803 (15 pages)
In several fields such as finance, industry, business, and medicine, Big Data (BD) — simply a collection of very large amounts of data — has been utilized extensively. However, processing a massive amount of data is highly complicated and time-consuming. Thus, to design a Distribution Preserving Framework for BD, a novel methodology is proposed that combines Manhattan Distance-centered Partition Around Medoids (MD-PAM) with a Conjugate Gradient Artificial Neural Network (CG-ANN), proceeding through several steps to reduce the complications of BD. First, in the pre-processing phase, data repetition is mitigated using the map-reduce function; subsequently, missing data are handled by substituting or ignoring the missed values. The data are then transformed into a normalized form. Next, to enhance classification performance, the data's dimensionality is reduced using Gaussian Kernel Fisher Discriminant Analysis (GK-FDA). Afterwards, the processed data are converted into a structured format and submitted to the partitioning phase, in which MD-PAM partitions and groups the data into clusters. Lastly, CG-ANN classifies the data in the classification phase so that the needed data can be effortlessly retrieved by the user. To compare the outcomes of CG-ANN with prevailing methodologies, the openly accessible NSL-KDD datasets are used. The experimental outcomes show that the proposed CG-ANN achieves efficient results at reduced computation cost and outperforms existing systems in terms of accuracy, sensitivity, and specificity.
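The partitioning phase — clustering around medoids under a Manhattan metric — can be illustrated with a simplified k-medoids iteration. Note this is a Voronoi-style update, not the full PAM swap search the paper builds on, and all names are illustrative:

```python
import numpy as np

def manhattan_matrix(X):
    """Full pairwise Manhattan distance matrix for the dataset."""
    return np.abs(X[:, None, :] - X[None, :, :]).sum(axis=-1)

def kmedoids_manhattan(X, k, n_iter=10, seed=0):
    """Simplified k-medoids clustering with Manhattan distance."""
    rng = np.random.default_rng(seed)
    D = manhattan_matrix(X)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        # Assign every point to its nearest current medoid.
        labels = D[:, medoids].argmin(axis=1)
        # Re-pick each cluster's medoid as the member minimizing
        # total intra-cluster Manhattan distance.
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:
                medoids[j] = members[D[np.ix_(members, members)].sum(axis=1).argmin()]
    return labels, medoids
```

A production version would add the PAM swap step and a convergence check; the sketch only shows why the Manhattan metric slots cleanly into medoid-based partitioning.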
Keywords: Big Data, artificial neural network, Fisher discriminant analysis, distribution preserving framework, Manhattan distance
Surrogate modeling for long-term and high-resolution prediction of building thermal load with a metric-optimized KNN algorithm (Cited: 1)
4
Authors: Yumin Liang, Yiqun Pan, Xiaolei Yuan, Wenqi Jia, Zhizhong Huang. Energy and Built Environment, 2023, Issue 6, pp. 709-724 (16 pages)
During the pre-design stage of buildings, reliable long-term prediction of thermal loads is significant for cooling/heating system configuration and efficient operation. This paper proposes a surrogate modeling method to predict all-year hourly cooling/heating loads in high resolution for retail, hotel, and office buildings. 16,384 surrogate models are simulated in EnergyPlus to generate the load database, which contains 7 crucial building features as inputs and hourly loads as outputs. K-nearest-neighbors (KNN) is chosen as the data-driven algorithm to approximate the surrogates for load prediction. With test samples from the database, the performance of five different spatial metrics for KNN is evaluated and optimized. Results show that Manhattan distance is the optimal metric, with the highest efficient hour rates of 93.57% and 97.14% for cooling and heating loads in office buildings. The method is verified by predicting the thermal loads of a given district in Shanghai, China. The mean absolute percentage errors (MAPE) are 5.26% and 6.88% for cooling and heating loads, respectively, and 5.63% for the annual thermal loads. The proposed surrogate modeling method meets the precision requirements of engineering in the building pre-design stage and achieves fast prediction of all-year hourly thermal loads at the district level. As a data-driven approximation, it does not require as much detailed building information as the commonly used physics-based methods, and by pre-simulating sufficient prototypical models it overcomes the data-missing gaps of current data-driven methods.
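The regression core of the method — averaging the loads of the k training samples nearest in Manhattan distance — can be sketched as follows. The building features, hour counts, and the paper's "efficient hour rate" metric are not reproduced; the data below are illustrative:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Predict targets as the mean of the k Manhattan-nearest neighbors.

    X_train: (n_train, n_features); y_train: (n_train,) hourly loads;
    X_query: (n_query, n_features) building-feature vectors.
    """
    d = np.abs(X_query[:, None, :] - X_train[None, :, :]).sum(axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]  # k nearest training samples
    return y_train[idx].mean(axis=1)
```

In practice the same behavior is available off the shelf, e.g. scikit-learn's `KNeighborsRegressor` accepts `metric='manhattan'`, which is presumably how metric swapping was evaluated against the other four candidates.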
Keywords: thermal load prediction, surrogate modeling, pre-design, K-nearest-neighbors, Manhattan distance
Dual-Stage Hybrid Learning Particle Swarm Optimization Algorithm for Global Optimization Problems (Cited: 2)
5
Authors: Wei Li, Yangtao Chen, Qian Cai, Cancan Wang, Ying Huang, Soroosh Mahmoodi. Complex System Modeling and Simulation, 2022, Issue 4, pp. 288-306 (19 pages)
Particle swarm optimization (PSO) is a type of swarm intelligence algorithm frequently used to solve specific global optimization problems due to its rapid convergence and ease of operation. However, PSO still has deficiencies, such as a poor trade-off between exploration and exploitation and premature convergence. Hence, this paper proposes a dual-stage hybrid learning particle swarm optimization (DHLPSO). In the algorithm, the iterative process is partitioned into two stages, whose learning strategies emphasize exploration and exploitation, respectively. In the first stage, to increase population variety, a Manhattan-distance-based learning strategy is proposed, in which each particle learns from the particle furthest away in Manhattan distance and from a better particle. In the second stage, an excellent-example learning strategy performs local optimization on the population, in which each particle learns from the global optimal particle and a better particle. Utilizing the Gaussian mutation strategy, the algorithm's search ability on particular multimodal functions is significantly enhanced. DHLPSO is evaluated alongside other existing PSO variants on benchmark functions from CEC 2013. The comparison results clearly demonstrate that, compared to other cutting-edge PSO variants, DHLPSO achieves highly competitive performance in handling global optimization problems.
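The first-stage exemplar selection — each particle picking the swarm member at maximal Manhattan distance to diversify its learning target — reduces to one pairwise-distance computation. The velocity-update formula and the "better particle" choice from the paper are not reproduced; this sketch covers only the furthest-exemplar step:

```python
import numpy as np

def furthest_manhattan_exemplars(positions):
    """For each particle, the index of the particle at maximal
    Manhattan distance from it in the current swarm.

    positions: (n_particles, dim) array of particle positions.
    """
    D = np.abs(positions[:, None, :] - positions[None, :, :]).sum(axis=-1)
    return D.argmax(axis=1)
```

Choosing the furthest particle as an exploration exemplar pulls each particle toward unvisited regions, which is the population-variety effect the abstract describes.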
Keywords: particle swarm optimization, Manhattan distance, example learning, Gaussian mutation, dual-stage, global optimization problem