Journal Articles — 39 results found
1. Yarn Quality Prediction for Small Samples Based on AdaBoost Algorithm (Cited by: 1)
Authors: 刘智玉, 陈南梁, 汪军 — Journal of Donghua University (English Edition), CAS, 2023, Issue 3, pp. 261-266
In order to solve the problems of weak prediction stability and generalization ability of a neural network algorithm model in yarn quality prediction research for small samples, a prediction model based on an AdaBoost algorithm (AdaBoost model) was established. A prediction model based on a linear regression algorithm (LR model) and a prediction model based on a multi-layer perceptron neural network algorithm (MLP model) were established for comparison. Prediction experiments on yarn evenness and yarn strength were implemented. Determination coefficients and prediction errors were used to evaluate the prediction accuracy of these models, and K-fold cross-validation was used to evaluate their generalization ability. In the prediction experiments, the determination coefficient of the yarn evenness prediction result of the AdaBoost model is 76% and 87% higher than that of the LR model and the MLP model, respectively. The determination coefficient of the yarn strength prediction result of the AdaBoost model is slightly higher than that of the other two models. Considering that the yarn evenness dataset has a weaker linear relationship with the cotton dataset than the yarn strength dataset in this paper, the AdaBoost model has the best adaptability to nonlinear datasets among the three models. In addition, the AdaBoost model shows generally better results in the cross-validation experiments and in a series of prediction experiments at eight different training set sample sizes. It is proved that the AdaBoost model not only has good prediction accuracy but also good prediction stability and generalization ability for small samples.
Keywords: yarn quality prediction; AdaBoost algorithm; small sample; generalization ability
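The evaluation protocol described above (determination coefficients plus K-fold cross-validation) can be sketched with the paper's LR baseline; the synthetic data, fold count, and coefficients below are illustrative assumptions, not the authors' dataset:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def kfold_indices(n, k, seed=0):
    """Yield (train_idx, test_idx) pairs for K-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

def fit_lr(X, y):
    """Least-squares linear model (the paper's LR baseline)."""
    Xb = np.c_[np.ones(len(X)), X]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_lr(w, X):
    return np.c_[np.ones(len(X)), X] @ w

# small synthetic "fiber properties -> yarn quality" dataset (illustrative only)
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(30, 3))
y = X @ np.array([1.5, -0.7, 0.3]) + 0.8
scores = []
for tr, te in kfold_indices(len(X), k=5):
    w = fit_lr(X[tr], y[tr])
    scores.append(r2_score(y[te], predict_lr(w, X[te])))
mean_r2 = float(np.mean(scores))
```

Swapping `fit_lr`/`predict_lr` for an AdaBoost or MLP regressor reproduces the paper's comparison under the same protocol.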
2. Metal Corrosion Rate Prediction of Small Samples Using an Ensemble Technique
Authors: Yang Yang, Pengfei Zheng, Fanru Zeng, Peng Xin, Guoxi He, Kexi Liao — Computer Modeling in Engineering & Sciences, SCIE EI, 2023, Issue 1, pp. 267-291
Accurate prediction of the internal corrosion rates of oil and gas pipelines could be an effective way to prevent pipeline leaks. In this study, a framework for predicting corrosion rates from a small sample of laboratory metal corrosion data was developed, providing a new perspective on how to solve the problem of pipeline corrosion when real samples are insufficient. The approach employs the bagging algorithm to construct a strong learner by integrating several KNN learners. A total of 99 data points were collected and split into training and test sets at a 9:1 ratio. The training set was used to obtain the best hyperparameters by 10-fold cross-validation and grid search, and the test set was used to determine the performance of the model. The results show that the Mean Absolute Error (MAE) of this framework is 28.06% of that of the traditional model, outperforming other ensemble methods. The proposed framework is therefore suitable for metal corrosion prediction under small sample conditions.
Keywords: oil pipeline; bagging; KNN; ensemble learning; small sample size
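A minimal numpy-only sketch of the framework's core idea — bagging several KNN regressors trained on bootstrap resamples, with a 9:1 train/test split as in the paper. The synthetic "corrosion" data and hyperparameter values are assumptions, and the grid-search step is omitted:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Plain KNN regression: average the targets of the k nearest neighbors."""
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    return y_train[nn].mean(axis=1)

def bagging_knn_predict(X_train, y_train, X_query, k=3, n_estimators=25, seed=0):
    """Bagging: train each KNN learner on a bootstrap resample, average predictions."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    preds = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)   # bootstrap sample with replacement
        preds.append(knn_predict(X_train[idx], y_train[idx], X_query, k))
    return np.mean(preds, axis=0)

# illustrative corrosion-like data: rate grows with temperature, falls with pH
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(90, 2))
y = 1.2 * X[:, 1] - 0.8 * X[:, 0] + rng.normal(0, 0.05, 90)
X_tr, y_tr, X_te, y_te = X[:81], y[:81], X[81:], y[81:]   # 9:1 split
pred = bagging_knn_predict(X_tr, y_tr, X_te)
mae = float(np.mean(np.abs(pred - y_te)))
```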
3. Data processing of small samples based on grey distance information approach (Cited by: 13)
Authors: Ke Hongfa, Chen Yongguang & Liu Yi (1. College of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073, P. R. China; 2. Unit 63880, Luoyang 471003, P. R. China) — Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2007, Issue 2, pp. 281-289
Data processing of small samples is an important and valuable research problem in electronic equipment testing. Because it is difficult and complex to determine the probability distribution of small samples, it is difficult to use traditional probability theory to process the samples and assess the degree of uncertainty. Using grey relational theory and norm theory, the grey distance information approach, which is based on the grey distance information quantity of a sample and the average grey distance information quantity of the samples, is proposed in this article. The definitions of the grey distance information quantity of a sample and the average grey distance information quantity of the samples, with their characteristics and algorithms, are introduced. The correlative problems, including the algorithm of the estimated value, the standard deviation, and the acceptance and rejection criteria for the samples and estimated results, are also discussed. Moreover, the information whitening ratio is introduced to select the weight algorithm and to compare different samples. Several examples are given to demonstrate the application of the proposed approach. The examples show that the proposed approach, which makes no demand on the probability distribution of small samples, is feasible and effective.
Keywords: data processing; grey theory; norm theory; small samples; uncertainty assessment; grey distance measure; information whitening ratio
4. Gabor-CNN for object detection based on small samples (Cited by: 4)
Authors: Xiao-dong Hu, Xin-qing Wang, Fan-jie Meng, Xia Hua, Yu-ji Yan, Yu-yang Li, Jing Huang, Xun-lin Jiang — Defence Technology, SCIE EI CAS CSCD, 2020, Issue 6, pp. 1116-1129
Object detection models based on convolutional neural networks (CNN) have achieved state-of-the-art performance by relying heavily on large-scale training samples. They are insufficient in specific applications, such as the detection of military objects, where a large number of samples is hard to obtain. In order to solve this problem, this paper proposes the use of Gabor-CNN for object detection based on a small number of samples. First, a feature extraction convolution kernel library composed of multi-shape Gabor and color Gabor kernels is constructed, and the optimal Gabor convolution kernel group is obtained by training and screening; it is convolved with the input image to obtain feature information of objects with a strong auxiliary function. Then, the k-means clustering algorithm is adopted to construct several different sizes of anchor boxes, which improves the quality of the region proposals. We call this region proposal process the Gabor-assisted Region Proposal Network (Gabor-assisted RPN). Finally, the Deeply-Utilized Feature Pyramid Network (DU-FPN) method is proposed to strengthen the feature expression of objects in the image. A bottom-up and a top-down feature pyramid are constructed in ResNet-50, and feature information of objects is deeply utilized through the transverse connection and integration of features at various scales. Experimental results show that the proposed method achieves better results than state-of-the-art contrast models on datasets with small samples in terms of accuracy and recall rate, and thus has strong application prospects.
Keywords: deep learning; convolutional neural network; small samples; Gabor convolution kernel; feature pyramid
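The multi-shape Gabor kernel library can be illustrated with the standard real-valued Gabor filter (a Gaussian envelope modulating an oriented cosine wave); the kernel size and parameter values below are assumptions, and the training/screening step of the paper is omitted:

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, sigma=3.0, lambd=6.0, gamma=0.5, psi=0.0):
    """Real Gabor kernel: Gaussian envelope times a cosine wave at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates by theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

# a small multi-orientation kernel bank, a stand-in for the paper's kernel library
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
```

Each kernel in `bank` would be convolved with the input image to produce the auxiliary feature maps the abstract describes.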
5. Auxiliary generative mutual adversarial networks for class-imbalanced fault diagnosis under small samples
Authors: Ranran LI, Shunming LI, Kun XU, Mengjie ZENG, Xianglian LI, Jianfeng GU, Yong CHEN — Chinese Journal of Aeronautics, SCIE EI CAS CSCD, 2023, Issue 9, pp. 464-478
The effect of data-driven intelligent fault diagnosis of mechanical equipment is often premised on big data and class balance. However, due to the limitations of the working environment, operating conditions and equipment status, the fault data collected from mechanical equipment are often small in number and imbalanced against normal samples. To resolve this dilemma in the fault diagnosis of practical mechanical equipment, an auxiliary generative mutual adversarial network (AGMAN) is proposed. First, the generator, combined with an auto-encoder (AE), constructs a decoder reconstruction feature loss that helps it complete an accurate mapping between the noise distribution and the real data distribution, generate high-quality fake samples, and supplement the imbalanced dataset to improve the accuracy of small-sample class-imbalanced fault diagnosis. Second, the discriminator adopts a structure with unshared dual discriminators. Mutual adversarial training between the dual discriminators is realized by setting scoring criteria that are completely opposite for real and fake samples, thus improving the quality and diversity of generated samples and avoiding mode collapse. Finally, the auxiliary generator and the dual discriminators are updated alternately: the auxiliary generator can generate fake samples that deceive both discriminators at the same time, while the dual discriminators cannot give correct scores to the real and fake samples according to their respective scoring criteria, so as to achieve a Nash equilibrium. Verification on three different test-bed datasets shows that the proposed method can generate high-quality fake samples and greatly improves the accuracy of class-imbalanced fault diagnosis under small samples, especially when the data are extremely imbalanced; after using this method to supplement fake samples, the fault diagnosis accuracy of DCNN and SAE improves considerably. The proposed method thus provides an effective solution for small-sample class-imbalanced fault diagnosis.
Keywords: adversarial networks; auto-encoder; class-imbalanced; fault detection; small samples
6. Research on aiming methods for small sample size shooting tests of two-dimensional trajectory correction fuse
Authors: Chen Liang, Qiang Shen, Zilong Deng, Hongyun Li, Wenyang Pu, Lingyun Tian, Ziyang Lin — Defence Technology, SCIE EI CAS CSCD, 2024, Issue 3, pp. 506-517
The longitudinal dispersion of the projectile in shooting tests of two-dimensional trajectory correction fuses with fixed canards is so large that it sometimes exceeds the correction ability of the fuse actuator. The impact point easily deviates from the target, and thus the correction result cannot be readily evaluated. However, the cost of shooting tests is considerably high for conducting many tests for data collection. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data; repeatedly iterates and corrects the position of the simulated theoretical impact points through an improved compatibility test method; and dynamically adjusts the weight of the prior distribution of simulation results based on the Kullback-Leibler divergence, which to some extent avoids the real data being "submerged" by the simulation data and achieves a fusion Bayesian estimation of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods, which better reflects the effect of the control algorithm and helps test personnel iterate their proposed structures and algorithms. In addition, this study provides a knowledge base for further comprehensive studies in the future.
Keywords: two-dimensional trajectory correction fuse; small sample size test; compatibility test; KL divergence; fusion Bayesian estimation
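The Bootstrap expansion of scarce impact-point data can be sketched as follows; the impact coordinates are hypothetical, and the fusion with simulation priors described in the abstract is omitted:

```python
import numpy as np

def bootstrap_center(points, n_boot=2000, seed=0):
    """Bootstrap estimate of the impact-dispersion center and its 95% interval."""
    rng = np.random.default_rng(seed)
    n = len(points)
    centers = np.empty((n_boot, points.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample impact points with replacement
        centers[b] = points[idx].mean(axis=0)
    est = centers.mean(axis=0)
    lo, hi = np.percentile(centers, [2.5, 97.5], axis=0)
    return est, lo, hi

# hypothetical small sample of impact points (range, deflection) in meters
impacts = np.array([[102.0, -3.1], [ 98.5,  1.2], [101.3,  0.4],
                    [ 97.8, -1.0], [100.2,  2.3], [ 99.6, -0.8]])
center, lo, hi = bootstrap_center(impacts)
```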
7. A fault diagnosis model based on weighted extension neural network for turbo-generator sets on small samples with noise (Cited by: 9)
Authors: Tichun WANG, Jiayun WANG, Yong WU, Xin SHENG — Chinese Journal of Aeronautics, SCIE EI CAS CSCD, 2020, Issue 10, pp. 2757-2769
In data-driven fault diagnosis for turbo-generator sets, fault samples are usually expensive to obtain and inevitably noisy, both of which lead to unsatisfying identification performance of diagnosis models. To address these issues, this paper proposes a fault diagnosis model for turbo-generator sets based on a Weighted Extension Neural Network (W-ENN). W-ENN is a novel neural network with three types of connection weights and an improved correlation function. The performance of the proposed model is validated against models based on the Extension Neural Network (ENN), Support Vector Machine (SVM), Relevance Vector Machine (RVM) and Extreme Learning Machine (ELM). The results indicate that, on noisy small sample sets, the proposed model is superior to the other models in terms of higher identification accuracy with fewer samples and strong noise tolerance. The findings of this study may serve as a powerful fault diagnosis model for turbo-generator sets on noisy small sample sets.
Keywords: fault diagnosis; samples with noise; small sample learning; turbo-generator sets; Weighted Extension Neural Network
8. Scatter factor confidence interval estimate of least square maximum entropy quantile function for small samples (Cited by: 3)
Authors: Wu Fuxian, Wen Weidong — Chinese Journal of Aeronautics, SCIE EI CAS CSCD, 2016, Issue 5, pp. 1285-1293
The classic maximum entropy quantile function method (CMEQFM) based on probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable on small samples, but not on very small samples. To overcome this weakness, the least square maximum entropy quantile function method (LSMEQFM) and its variant with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. Comparisons of these methods on two common probability distributions and one engineering application show that CMEQFM estimates the quantile function accurately on small samples but inaccurately on very small samples (10 samples); LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; with consideration of the constraint condition on the quantile function, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; and the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good estimation accuracy for the confidence interval of the quantile function, with the LSMEQFMCC-based method being the most stable and accurate on very small samples (10 samples).
Keywords: confidence intervals; maximum entropy; quantile function; reliability; scatter factor; small samples
9. Static Frame Model Validation with Small Samples Solution Using Improved Kernel Density Estimation and Confidence Level Method (Cited by: 5)
Authors: ZHANG Baoqiang, CHEN Guoping, GUO Qintao — Chinese Journal of Aeronautics, SCIE EI CAS CSCD, 2012, Issue 6, pp. 879-886
An improved method using kernel density estimation (KDE) and confidence levels is presented for model validation with small samples. Decision making is a challenging problem because of input uncertainty, and only small samples can be used due to the high cost of experimental measurements. However, model validation provides more confidence for decision makers while improving prediction accuracy at the same time. The confidence level method is introduced, and the optimum sample variance is determined using a new method in kernel density estimation to increase the credibility of model validation. As a numerical example, the static frame model validation challenge problem presented by Sandia National Laboratories has been chosen. The optimum bandwidth is selected in kernel density estimation in order to build the probability model based on the calibration data. The model assessment is achieved using validation and accreditation experimental data, respectively, based on the probability model. Finally, the target structure prediction is performed using the validated model; the predictions are consistent with the results obtained by other researchers. The results demonstrate that the method using the improved confidence level and kernel density estimation is an effective approach to the model validation problem with small samples.
Keywords: model validation; small samples; uncertainty analysis; kernel density estimation; confidence level; prediction
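Optimum-bandwidth selection for a KDE built from a small calibration sample can be sketched with a leave-one-out log-likelihood criterion — one common choice, not necessarily the paper's improved method; the sample and grid are assumptions:

```python
import numpy as np

def gauss_kde(x, data, h):
    """1-D Gaussian kernel density estimate with bandwidth h."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def loo_log_likelihood(data, h):
    """Leave-one-out log-likelihood of the KDE, a standard bandwidth criterion."""
    n = len(data)
    u = (data[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * u**2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)      # leave each point out of its own density estimate
    dens = k.sum(axis=1) / (n - 1)
    return np.log(dens).sum()

rng = np.random.default_rng(0)
calibration = rng.normal(0.0, 1.0, size=20)     # small calibration sample
grid = np.linspace(0.05, 2.0, 40)
best_h = grid[np.argmax([loo_log_likelihood(calibration, h) for h in grid])]
density = gauss_kde(np.linspace(-3, 3, 7), calibration, best_h)
```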
10. Calculation of Two-Tailed Exact Probability in the Wald-Wolfowitz One-Sample Runs Test
Author: José Moral De La Rubia — Journal of Data Analysis and Information Processing, 2024, Issue 1, pp. 89-114
The objectives of this paper are to demonstrate the algorithms employed by three statistical software programs (R, Real Statistics using Excel, and SPSS) for calculating the exact two-tailed probability of the Wald-Wolfowitz one-sample runs test for randomness, to present a novel approach for computing this probability, and to compare the four procedures by generating samples of 10 and 11 data points, varying the parameters n0 (number of zeros) and n1 (number of ones) as well as the number of runs. Fifty-nine samples are created to replicate the behavior of the distribution of the number of runs with 10 and 11 data points. The exact two-tailed probabilities of the four procedures were compared using Friedman's test. Given the significant difference in central tendency, post-hoc comparisons were conducted using Conover's test with Benjamini-Yekutieli correction. It is concluded that the procedures of Real Statistics using Excel and R exhibit some inadequacies in the calculation of the exact two-tailed probability, whereas the new proposal and the SPSS procedure are deemed more suitable. The proposed robust algorithm has a more transparent rationale than the SPSS one, albeit being somewhat more conservative. We recommend its implementation for this test and its application to others, such as the binomial and sign tests.
Keywords: randomness; nonparametric test; exact probability; small samples; quantiles
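The exact null distribution of the number of runs is closed-form, which keeps a reference implementation short. The two-tailed rule below (summing all outcomes no more probable than the observed one) is one common convention, not necessarily the paper's proposed algorithm:

```python
from math import comb

def runs_pmf(n0, n1):
    """Exact distribution of the number of runs R in a 0/1 sequence with
    n0 zeros and n1 ones, all arrangements equally likely."""
    total = comb(n0 + n1, n0)
    pmf = {}
    for r in range(2, n0 + n1 + 1):
        if r % 2 == 0:
            k = r // 2
            ways = 2 * comb(n0 - 1, k - 1) * comb(n1 - 1, k - 1)
        else:
            k = (r - 1) // 2
            ways = (comb(n0 - 1, k - 1) * comb(n1 - 1, k)
                    + comb(n0 - 1, k) * comb(n1 - 1, k - 1))
        if ways:
            pmf[r] = ways / total
    return pmf

def exact_two_tailed_p(n0, n1, r_obs):
    """Two-tailed exact p-value: sum of probabilities of all run counts
    no more probable than the observed one (sum-of-small-p convention)."""
    pmf = runs_pmf(n0, n1)
    p_obs = pmf[r_obs]
    return sum(p for p in pmf.values() if p <= p_obs + 1e-12)

p = exact_two_tailed_p(5, 5, 2)   # 10 data points, extremely few runs
```

For n0 = n1 = 5 the distribution is symmetric, so observing R = 2 pulls in the equally improbable R = 10, giving p = 4/252.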
11. LF-CNN: Deep Learning-Guided Small Sample Target Detection for Remote Sensing Classification
Authors: Chengfan Li, Lan Liu, Junjuan Zhao, Xuefeng Liu — Computer Modeling in Engineering & Sciences, SCIE EI, 2022, Issue 4, pp. 429-444
Target detection of small samples with a complex background is always difficult in the classification of remote sensing images. We propose a new small sample target detection method combining local features and a convolutional neural network (LF-CNN), with the aim of detecting small numbers of unevenly distributed ground object targets in remote sensing images. The k-nearest neighbor method is used to construct the local neighborhood of each point, and the local neighborhoods of the features are extracted one by one from the convolution layer. All the local features are aggregated by maximum pooling to obtain a global feature representation. The classification probability of each category is then calculated using the scaled expected linear units function and the fully connected layer. The experimental results show that the proposed LF-CNN method has high accuracy in the detection and classification of hyperspectral remote sensing data under small sample conditions. Despite drawbacks in time and complexity, the LF-CNN method can integrate the local features of ground object samples more effectively than traditional target detection methods, improving the accuracy of target identification and detection in small samples of remote sensing images.
Keywords: small samples; local features; convolutional neural network (CNN); k-nearest neighbor (KNN); target detection
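The aggregation step described above — per-point KNN neighborhoods followed by max pooling into a global descriptor — can be sketched in numpy; random features stand in for the convolution-layer output, which is an assumption for illustration:

```python
import numpy as np

def local_feature_aggregate(points, feats, k=3):
    """For each point, gather its k-nearest-neighbor features, then max-pool
    over all local neighborhoods to get one global feature vector."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]      # each row: indices of k nearest points
    local = feats[nn]                      # (n, k, f) neighborhood feature stacks
    pooled_local = local.max(axis=1)       # (n, f) per-neighborhood descriptors
    return pooled_local.max(axis=0)        # (f,) global descriptor

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(12, 2))      # point coordinates
features = rng.normal(size=(12, 4))        # stand-in for conv-layer features
global_feat = local_feature_aggregate(pts, features)
```

Since each point belongs to its own neighborhood, the double max pooling here reduces to the per-dimension maximum over all features.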
12. Reliability Assessment for the Solenoid Valve of a High-Speed Train Braking System under Small Sample Size (Cited by: 9)
Authors: Jian-Wei Yang, Jin-Hai Wang, Qiang Huang, Ming Zhou — Chinese Journal of Mechanical Engineering, SCIE EI CAS CSCD, 2018, Issue 3, pp. 189-199
Reliability assessment of the braking system in a high-speed train under small sample size and zero-failure data is very important for safe operation. Traditional reliability assessment methods perform well only under conditions of large sample size and complete failure data, which leads to large deviations under conditions of small sample size and zero-failure data. To improve on this, a new Bayesian method is proposed. Based on the characteristics of the solenoid valve in the braking system of a high-speed train, the modified Weibull distribution is selected to describe the failure rate over the entire lifetime. Based on the assumption of a binomial distribution for the failure probability at the censored time, a concave method is employed to obtain the relationships between accumulated failure probabilities. A numerical simulation is performed to compare the results of the proposed method with those obtained from maximum likelihood estimation, and to illustrate that the proposed Bayesian model exhibits better accuracy for the expectation value when the sample size is less than 12. Finally, the robustness of the model is demonstrated by obtaining the reliability indicators for a numerical case involving the solenoid valve of the braking system, which shows that the change in reliability and failure rate among the different hyperparameters is small. The method avoids being misled by subjective information and improves the accuracy of reliability assessment under conditions of small sample size and zero-failure data.
Keywords: zero-failure data; modified Weibull distribution; small sample size; Bayesian method
13. Small sample Bayesian analyses in assessment of weapon performance (Cited by: 6)
Authors: Li Qingmin, Wang Hongwei, Liu Jun — Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2007, Issue 3, pp. 545-550
Abundant test data are required in the assessment of weapon performance. When weapon test data are insufficient, Bayesian analyses for small sample circumstances should be considered, with the test data supplemented by simulations. Several Bayesian approaches are discussed and some limitations are identified. An improvement is put forward after the limitations of the available Bayesian approaches are analyzed, and the improved approach is applied to the assessment of the performance of a new weapon.
Keywords: Bayesian approach; small sample; confidence
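A minimal example of small-sample Bayesian assessment, assuming a conjugate Beta-Binomial model for hit probability — an illustrative assumption, not the paper's specific approach, and the firing-test numbers are hypothetical:

```python
import numpy as np

def beta_posterior(hits, trials, a=1.0, b=1.0):
    """Conjugate update: Beta(a, b) prior plus binomial hit/miss data."""
    return a + hits, b + (trials - hits)

# hypothetical small-sample firing test: 7 hits in 9 shots, uniform prior
a_post, b_post = beta_posterior(hits=7, trials=9)
estimate = a_post / (a_post + b_post)        # posterior mean of hit probability

# 95% credible interval by Monte Carlo sampling from the posterior
rng = np.random.default_rng(0)
draws = rng.beta(a_post, b_post, size=100_000)
lo, hi = np.percentile(draws, [2.5, 97.5])
```

Simulation results could enter this scheme as an informative prior (larger `a`, `b`), which is the kind of data fusion the abstract alludes to.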
14. Application of Small Sample Analysis in Life Estimation of Aeroengine Components (Cited by: 4)
Author: 聂挺 — Journal of Southwest Jiaotong University (English Edition), 2010, Issue 4, pp. 285-288
The samples in fatigue life tests of aeroengine components usually number fewer than 5, so their evaluation belongs to small sample analysis. The Weibull distribution is known to describe such life data accurately, and the Weibayes method (developed from the Bayesian method) expands on the experiential data in the small sample analysis of fatigue life in aeroengines. Based on Weibull analysis, a program was developed to improve the efficiency of reliability analysis for aeroengine components. This program has complete functions and offers highly accurate results. A particular turbine disk's low cycle fatigue life was evaluated by this program. From the results, the following conclusions were drawn: (a) the program can be used for engineering applications; and (b) while a lack of prior test data lowers the validity of evaluation results, the Weibayes method ensures the results of small sample analysis do not deviate from the truth.
Keywords: aeroengine; life; small sample; Weibull distribution
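The Weibayes idea — fixing the Weibull shape parameter from experience and estimating only the scale — reduces to a one-line estimator; the run times, shape value, and zero-failure convention r = 1 below are illustrative assumptions:

```python
import numpy as np

def weibayes_eta(times, beta, r_failures=1):
    """Weibayes estimate of the Weibull scale eta, given an assumed shape beta.
    With zero observed failures, the convention r = 1 gives a conservative value."""
    r = max(r_failures, 1)
    return (np.sum(np.asarray(times, float) ** beta) / r) ** (1.0 / beta)

def weibull_reliability(t, beta, eta):
    """Weibull survival function R(t) = exp(-(t/eta)^beta)."""
    return np.exp(-(t / eta) ** beta)

# hypothetical run times (cycles) of 5 disks with no failures; beta from experience
times = [1200.0, 1500.0, 900.0, 1100.0, 1300.0]
eta = weibayes_eta(times, beta=3.0)
r_at_1000 = float(weibull_reliability(1000.0, 3.0, eta))
```

With beta = 1 the estimator collapses to total accumulated time divided by failures, i.e. the familiar exponential MTBF estimate.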
15. Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay
Authors: 王娜, 吴治海, 彭力 — Chinese Physics B, SCIE EI CAS CSCD, 2014, Issue 10, pp. 617-625
In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive the necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve the stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results.
Keywords: heterogeneous multi-agent systems; consensus; sampled data; small sampling delay
16. Model-data-driven seismic inversion method based on small sample data
Authors: LIU Jinshui, SUN Yuhang, LIU Yang — Petroleum Exploration and Development, CSCD, 2022, Issue 5, pp. 1046-1055
As sandstone layers in thin interbedded sections are difficult to identify, conventional model-driven seismic inversion and data-driven seismic prediction methods have low precision in predicting them. To solve this problem, a model-data-driven seismic AVO (amplitude variation with offset) inversion method based on a space-variant objective function has been worked out. In this method, the zero-delay cross-correlation function and the F norm are used to establish the objective function. Based on inverse distance weighting theory, the change of the objective function is controlled according to the location of the target CDP (common depth point), so as to change the constraint weights of training samples, initial low-frequency models, and seismic data on the inversion. Hence, the proposed method can obtain high-resolution, high-accuracy velocity and density from the inversion of small sample data, and is suitable for identifying thin interbedded sand bodies. Tests with thin interbedded geological models show that the proposed method has high inversion accuracy and resolution for small sample data, and can identify sandstone and mudstone layers about one-thirtieth of the dominant wavelength thick. Tests on field data from the Lishui sag show that the inversion results of the proposed method have small relative errors with respect to well-log data, and can identify thin interbedded sandstone layers about one-fifteenth of the dominant wavelength thick with small sample data.
Keywords: small sample data; space-variant objective function; model-data-driven; neural network; seismic AVO inversion; thin interbedded sandstone identification; Paleocene; Lishui sag
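The inverse-distance-weighting control of the objective function can be illustrated by the weight computation itself; the CDP coordinates and power exponent below are assumptions, and the inversion itself is not reproduced:

```python
import numpy as np

def idw_weights(target, samples, power=2.0, eps=1e-12):
    """Inverse-distance weights: training samples nearer the target CDP
    receive larger constraint weights; weights are normalized to sum to 1."""
    d = np.linalg.norm(samples - target, axis=1)
    w = 1.0 / (d ** power + eps)     # eps guards against a zero distance
    return w / w.sum()

# hypothetical CDP locations: weights of three training CDPs for one target CDP
train_cdps = np.array([[0.0, 0.0], [1.0, 0.0], [4.0, 0.0]])
w = idw_weights(np.array([0.9, 0.0]), train_cdps)
```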
17. Reliability-Based Optimization: Small Sample Optimization Strategy
Authors: Drahomir Novak, Ondrej Slowik, Maosen Cao — Journal of Computer and Communications, 2014, Issue 11, pp. 31-37
The aim of this paper is to present a newly developed approach for reliability-based design optimization. It is based on a double-loop framework in which the outer loop covers the optimization part of the reliability-based optimization process, while the reliability constraints are calculated in the inner loop. The innovation of the suggested approach lies in the application of a newly developed optimization strategy based on multilevel simulation using an advanced Latin Hypercube Sampling technique. This method, called Aimed Multilevel Sampling, is designed for optimizing problems in which only a limited number of simulations can be performed due to enormous computational demands.
Keywords: optimization; reliability assessment; Aimed Multilevel Sampling; Monte Carlo; Latin Hypercube Sampling; probability of failure; reliability-based design optimization; small sample analysis
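A plain Latin Hypercube Sampling routine shows the stratification that the Aimed Multilevel Sampling strategy builds on; the multilevel/aiming logic of the paper is not reproduced here:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin Hypercube Sampling on [0,1]^d: exactly one sample falls in each
    of n equal-probability strata along every dimension."""
    rng = np.random.default_rng(seed)
    samples = np.empty((n_samples, n_dims))
    for j in range(n_dims):
        perm = rng.permutation(n_samples)
        # (perm + u) / n places one point uniformly inside each of the n bins
        samples[:, j] = (perm + rng.uniform(size=n_samples)) / n_samples
    return samples

pts = latin_hypercube(10, 2)
```

Compared with plain Monte Carlo, this guarantees marginal coverage with very few simulations, which is exactly the small-sample setting the paper targets.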
18. Meshfree-based physics-informed neural networks for the unsteady Oseen equations
Authors: 彭珂依, 岳靖, 张文, 李剑 — Chinese Physics B, SCIE EI CAS CSCD, 2023, Issue 4, pp. 151-159
We propose meshfree-based physics-informed neural networks for solving the unsteady Oseen equations. First, based on the ideas of meshfree methods and small sample learning, we randomly select only a small number of spatiotemporal points to train the neural network instead of forming a mesh. Specifically, we optimize the neural network by minimizing a loss function that enforces the differential operators, the initial condition and the boundary condition. Then, we prove the convergence of the loss function and the convergence of the neural network. In addition, the feasibility and effectiveness of the method are verified by the results of numerical experiments, and the theoretical derivation is verified by the relative error between the neural network solution and the analytical solution.
Keywords: physics-informed neural networks; the unsteady Oseen equation; convergence; small sample learning
19. Fault Diagnosis of 5G Networks Based on Digital Twin Model
Authors: Xiaorong Zhu, Lingyu Zhao, Jiaming Cao, Jianhong Cai — China Communications, SCIE CSCD, 2023, Issue 7, pp. 175-191
Fault diagnosis of 5G networks faces the challenges of heavy reliance on human experience and insufficient fault samples and relevant monitoring data. Digital twin technology can realize the interaction between virtual space and physical space through the fusion of model and data, providing a new paradigm for fault diagnosis. In this paper, we first propose a network digital twin model and apply it to 5G network diagnosis. We then use an improved Average Wasserstein GAN with Gradient Penalty (AWGAN-GP) method to discover and predict failures in the twin network. Finally, we use the XGBoost algorithm to locate faults in the physical network in real time. Extensive simulation results show that the proposed approach can significantly increase fault prediction and diagnosis accuracy when only a small number of labeled failure samples are available in 5G networks.
Keywords: 5G networks; fault diagnosis; digital twin; AWGAN-GP; small number of samples
20. A Review of Relevance Feedback Techniques in Content-Based Image Retrieval (Cited by: 52)
Authors: 吴洪, 卢汉清, 马颂德 — Chinese Journal of Computers (计算机学报), EI CSCD, 2005, Issue 12, pp. 1969-1979
Since relevance feedback can effectively improve the performance of content-based image retrieval (CBIR), it has become an indispensable part of image retrieval systems. In recent years, research on relevance feedback has attracted increasing attention and many algorithms have emerged. After a brief introduction to content-based image retrieval, this paper discusses the interactive process of relevance feedback and its key components, and further analyzes the learning problem in relevance feedback and its characteristics. According to the retrieval model adopted, the algorithms are classified into distance-metric-based methods, probabilistic-framework-based methods, and machine-learning-based methods. Under this classification, representative algorithms of recent years are analyzed and discussed. Finally, future research directions of relevance feedback are outlined.
Keywords: relevance feedback; content-based image retrieval; supervised learning; small sample; user relevance judgment
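A classic instance of the distance-metric-based feedback category surveyed above is the Rocchio update, which moves the query vector toward user-marked relevant results and away from non-relevant ones; the feature vectors and mixing weights below are illustrative assumptions:

```python
import numpy as np

def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback: shift the query toward the centroid of
    relevant images and away from the centroid of non-relevant images."""
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return q

# toy 3-D feature space: feedback pulls the query toward the relevant cluster
query = np.array([0.2, 0.2, 0.2])
rel = np.array([[1.0, 0.0, 0.0], [0.8, 0.1, 0.0]])
nonrel = np.array([[0.0, 1.0, 0.0]])
new_q = rocchio_update(query, rel, nonrel)
```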