For accurately identifying the distribution characteristics of Gaussian-like noises in unmanned aerial vehicle (UAV) state estimation, this paper proposes a non-parametric scheme based on curve similarity matching. In the framework of the proposed scheme, a Parzen window (kernel density estimation, KDE) method on sliding-window technology is applied to roughly estimate the sample probability density, a precise data probability density function (PDF) model is constructed with the least-squares method on K-fold cross-validation, and the testing result is obtained from data-characteristic analyses of curve shape, abruptness, and symmetry. Comparison simulations with classical methods and a UAV flight experiment show that the proposed scheme has higher recognition accuracy than classical methods for some kinds of Gaussian-like data, which provides a better reference for the design of Kalman filters (KF) in complex water environments.
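The Parzen-window (KDE) step described above can be sketched with a standard Gaussian-kernel estimator; the window size, bandwidth rule, and synthetic noise below are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical sliding window of state-estimation residuals (stand-in data).
rng = np.random.default_rng(0)
window = rng.normal(loc=0.0, scale=1.0, size=500)

# Parzen-window / KDE rough estimate of the sample probability density.
# gaussian_kde picks the bandwidth with Scott's rule by default.
kde = gaussian_kde(window)

xs = np.linspace(-4.0, 4.0, 201)
pdf = kde(xs)  # rough PDF estimate, to be refined by least squares
```

A precise parametric PDF model could then be fitted to `(xs, pdf)` by least squares, as the scheme describes.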
This study aimed to examine the performance of the Siegel-Tukey and Savage tests on data sets with heterogeneous variances. The analysis, considering Normal, Platykurtic, and Skewed distributions and a standard deviation ratio of 1, was conducted for both small and large sample sizes. For small sample sizes, two main categories were established: equal and different sample sizes. Analyses were performed using Monte Carlo simulations with 20,000 repetitions for each scenario, and the simulations were evaluated using SAS software. For small sample sizes, the type I error rate of the Siegel-Tukey test generally ranged from 0.045 to 0.055, while that of the Savage test ranged from 0.016 to 0.041. Similar trends were observed for Platykurtic and Skewed distributions. In scenarios with different sample sizes, the Savage test generally exhibited lower type I error rates. For large sample sizes, the same two categories, equal and different sample sizes, were established; the type I error rate of the Siegel-Tukey test ranged from 0.047 to 0.052, while that of the Savage test ranged from 0.043 to 0.051. In cases of equal sample sizes, both tests generally had lower error rates, with the Savage test providing more consistent results for large sample sizes. In conclusion, the Savage test provides lower type I error rates for small sample sizes, and both tests have similar error rates for large sample sizes. These findings suggest that the Savage test could be a more reliable option when analyzing variance differences.
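The kind of type I error simulation described above can be illustrated with a minimal Siegel-Tukey implementation (alternating-extreme ranks followed by a rank-sum test with the normal approximation). The sample sizes, repetition count, and rejection rule are illustrative only, far smaller than the study's 20,000-repetition SAS setup, and the tie correction is omitted on the assumption of continuous data.

```python
import numpy as np

def siegel_tukey_ranks(n):
    """Ranks for sorted positions 0..n-1: rank 1 to the smallest value,
    ranks 2 and 3 to the two largest, 4 and 5 to the next two smallest, ..."""
    ranks = np.empty(n)
    lo, hi, r = 0, n - 1, 1
    take_low, count = True, 1  # the first group takes one value from the low end
    while lo <= hi:
        for _ in range(count):
            if lo > hi:
                break
            if take_low:
                ranks[lo] = r
                lo += 1
            else:
                ranks[hi] = r
                hi -= 1
            r += 1
        take_low, count = not take_low, 2
    return ranks

def siegel_tukey_reject(x, y, z_crit=1.96):
    """Two-sided scale test: rank-sum of x's Siegel-Tukey ranks,
    normal approximation (no tie correction; assumes continuous data)."""
    n1, n2 = len(x), len(y)
    n = n1 + n2
    order = np.argsort(np.concatenate([x, y]))
    ranks = np.empty(n)
    ranks[order] = siegel_tukey_ranks(n)
    w = ranks[:n1].sum()
    mean_w = n1 * (n + 1) / 2.0
    var_w = n1 * n2 * (n + 1) / 12.0
    return abs((w - mean_w) / np.sqrt(var_w)) > z_crit

# Monte Carlo estimate of the type I error under equal variances (H0 true).
rng = np.random.default_rng(1)
reps = 2000
rejections = sum(
    siegel_tukey_reject(rng.normal(size=10), rng.normal(size=10))
    for _ in range(reps)
)
type1 = rejections / reps  # should land near the nominal 0.05
```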
Coastal sediment type maps have been widely used in marine economic and engineering activities, but traditional mapping methods have limitations due to their intrinsic assumptions or subjectivity. In this paper, a non-parametric indicator Kriging method is proposed for generating coastal sediment maps. The method effectively avoids mapping subjectivity, imposes no special requirement that the sample data be second-order stationary or normally distributed, and also provides useful information for the quantitative evaluation of mapping uncertainty. The application of the method in the southern sea area of Lianyungang showed that much more convincing mapping results could be obtained than with traditional methods such as IDW, Kriging, and Voronoi diagrams under the same conditions, so the proposed method is applicable and of great practical value.
Short-term traffic flow forecasting is one of the core technologies for realizing traffic flow guidance. In this article, in view of the fact that traffic flow changes repeatedly, a short-term traffic flow forecasting method based on a three-layer K-nearest-neighbor non-parametric regression algorithm is proposed. Specifically, two screening layers based on shape similarity are introduced into the K-nearest-neighbor non-parametric regression method, and the forecasting results are output using weighted averaging on the reciprocals of the shape-similarity distances together with a most-similar-point distance adjustment method. According to the experimental results, the proposed algorithm improves the predictive ability of the traditional K-nearest-neighbor non-parametric regression method and greatly enhances the accuracy and real-time performance of short-term traffic flow forecasting.
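The shape-similarity matching and reciprocal-distance weighting can be sketched as a basic K-nearest-neighbour non-parametric regression; the single screening layer, Euclidean shape distance, and toy periodic series below are simplifications of the paper's three-layer design.

```python
import numpy as np

def knn_forecast(history, pattern, k=3):
    """Forecast the value that follows `pattern` by matching it against all
    equal-length windows of `history` and weight-averaging the successors of
    the k most similar windows (weights = reciprocal of the shape distance)."""
    m = len(pattern)
    n_windows = len(history) - m  # windows whose successor is known
    windows = np.array([history[i:i + m] for i in range(n_windows)])
    successors = np.array([history[i + m] for i in range(n_windows)])
    d = np.linalg.norm(windows - pattern, axis=1)  # shape-similarity distance
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)  # reciprocal-distance weights
    return float(np.sum(w * successors[nearest]) / np.sum(w))

# Toy "traffic flow" with a strong periodic pattern (hypothetical data).
series = np.sin(0.1 * np.arange(300))
prediction = knn_forecast(series[:250], series[245:250], k=3)
```

With periodic data, the nearest windows lie roughly one period apart, so the weighted average of their successors tracks the true next value closely.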
Detecting moving objects against a stationary background is an important problem in visual surveillance systems. However, the traditional background subtraction method fails when the background is not completely stationary and involves certain dynamic changes. In this paper, following the basic steps of the background subtraction method, a novel non-parametric moving object detection method is proposed based on an improved ant colony algorithm using a Markov random field. Concretely, the contributions are as follows: 1) a new non-parametric strategy is utilized to model the background, based on an improved kernel density estimation; this approach uses an adaptive bandwidth, and the fused features combine colours, gradients, and positions; 2) a Markov random field method based on this adaptive background model, via the constraint of the spatial context, is proposed to extract objects; 3) the posterior function is maximized efficiently by using an improved ant colony system algorithm. Extensive experiments show that the proposed method outperforms many existing state-of-the-art methods.
The ability to model an imaging process is crucial to vision measurement. The non-parametric imaging model describes an imaging process as a pixel cluster, in which each pixel is related to a spatial ray originating from an object point. However, a non-parametric model requires a sophisticated calculation process or high-cost devices to obtain a massive number of parameters. These disadvantages limit the application of such camera models. Therefore, we propose a novel camera model calibration method based on a single-axis rotational target. The rotational vision target offers 3D control points with no need for detailed pose information of the rotational target. A radial basis function (RBF) network is introduced to map 3D coordinates to 2D image coordinates. We subsequently derive the optimization formulation of the imaging model parameters and compute the parameters from the given control points. The model is extended to the stereo cameras widely used in vision measurement. Experiments have been conducted to evaluate the performance of the proposed camera calibration method. The results show that the proposed method is superior in accuracy and effectiveness to traditional methods.
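The RBF mapping from 3D control points to 2D image coordinates can be sketched with SciPy's general-purpose RBF interpolator; the pinhole-style synthetic projection below is only a stand-in for real calibration data, and the kernel is SciPy's default rather than whatever basis the paper's network uses.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic 3D control points, e.g. as swept out by a rotational target
# (hypothetical coordinates in metres, 4-6 m in front of the camera).
rng = np.random.default_rng(0)
pts3d = rng.uniform([-1, -1, 4], [1, 1, 6], size=(200, 3))

# Stand-in "camera": a simple pinhole projection to 2D pixel coordinates.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
uv = np.column_stack([fx * pts3d[:, 0] / pts3d[:, 2] + cx,
                      fy * pts3d[:, 1] / pts3d[:, 2] + cy])

# RBF network mapping 3D coordinates to 2D image coordinates.
model = RBFInterpolator(pts3d, uv)  # thin-plate-spline kernel by default
uv_fit = model(pts3d)
```

With zero smoothing (the default), the interpolant reproduces the control points exactly; accuracy at unseen points depends on how densely the target samples the working volume.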
This paper addresses the design of an exponential function-based learning law for artificial neural networks (ANNs) with continuous dynamics. The ANN structure is used to obtain a non-parametric model of systems with uncertainties, which are described by a set of nonlinear ordinary differential equations. Two novel adaptive algorithms with predefined exponential convergence rates adjust the weights of the ANN. The first algorithm includes an adaptive gain depending on the identification error, which accelerates the convergence of the weights and promotes faster convergence between the states of the uncertain system and the trajectories of the neural identifier. The second approach uses a time-dependent sigmoidal gain that forces the convergence of the identification error to an invariant set characterized by an ellipsoid. The generalized volume of this ellipsoid depends on the upper bounds of the uncertainties, perturbations, and modeling errors. Applying the invariant ellipsoid method yields an algorithm that reduces the volume of the convergence region for the identification error. Both adaptive algorithms are derived from a non-standard exponential dependent function and an associated controlled Lyapunov function. Numerical examples demonstrate the improvements brought by the introduced algorithms by comparing their convergence settings with classical schemes based on non-exponential continuous learning methods. The proposed identifiers surpass the classical identifier, achieving faster convergence to an invariant set of smaller dimensions.
A quantitative approach was used to study the tendency of drought indicators to change in Vietnam through a case study of Ninh Thuan province. The research data are temperature and precipitation data of 11 stations from 1986 to 2016 inside and outside Ninh Thuan province. The author uses non-parametric analysis methods and drought index calculation methods: specifically, the Mann-Kendall (MK) test and Theil-Sen (Sen's slope) estimator for trend analysis, and the Standardized Precipitation Index (SPI) and the Moisture Index (MI) for drought analysis. Two software packages were used for the calculations: ProUCL 5.1 by the US Environmental Protection Agency and MAKESENS 1.0 by the Finnish Meteorological Institute. The calculation results show that meteorological drought will decrease in the future, with areas such as Phan Rang, Song Pha, Quan The, and Ba Thap showing a very clear increasing tendency, while the tendency at Tam My and Nhi Ha is much weaker. For agricultural drought, the average MI increased by 0.013 per year, with Song Pha station showing the highest increase, at 0.03 per year, and Nhi Ha the lowest, at 0.001 per year. The forecast results also show that by the end of the 21st century the SPI tends to decrease, with SPI 1 at −0.68, SPI 3 at −0.40, SPI 6 at −0.25, and SPI 12 at 0.42. The MI index is forecast to increase by 0.013 per year, reaching 0.93 by 2035, 1.13 in 2050, 1.46 in 2075, and 1.79 by 2100.
The research results will be useful to policymakers, environmental resource management agencies, and researchers in developing and studying solutions to adapt to and mitigate drought in the context of climate change.
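The Mann-Kendall test and Theil-Sen slope used in the study above can be sketched in a few lines; this version omits the tie correction (assuming continuous data) and is a plain re-implementation, not the ProUCL or MAKESENS software, and the moisture-index series is invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction) and Theil-Sen slope."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    i, j = np.triu_indices(n, k=1)           # all index pairs with j > i
    s = np.sum(np.sign(x[j] - x[i]))         # Mann-Kendall S statistic
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)         # continuity correction
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))           # two-sided p-value
    slope = np.median((x[j] - x[i]) / (j - i))  # Theil-Sen slope
    return z, p, slope

# Hypothetical annual moisture-index series with a 0.013/yr trend.
years = np.arange(30)
mi = 0.9 + 0.013 * years + 0.005 * np.sin(years)  # deterministic wiggle
z, p, slope = mann_kendall(mi)
```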
The effect of treatment on a patient's outcome can be determined through the impact of the treatment on biological events. Observing patients under treatment for a certain period of time can help determine whether there is any change in their biomarkers. It is important to study how a biomarker changes due to treatment and whether individuals located in separate centers, which may have different distributions, can be clustered together. The study is motivated by a Bayesian non-parametric mixture model, which is more flexible than Bayesian parametric models and is capable of borrowing information across different centers, allowing them to be grouped together. To this end, this research modeled biological markers taking the surrogate markers into consideration. The study employed a nested Dirichlet process prior, which readily accommodates different distributions for several centers, with centers drawn from the same Dirichlet process component clustered together automatically. Sampling from the posterior was carried out with a Markov chain Monte Carlo algorithm. The model is illustrated using a simulation study to see how it performs on simulated data, from which it was clear that the model was capable of clustering the data into different clusters.
The selection of important factors in machine learning-based susceptibility assessments is crucial to obtaining reliable susceptibility results. In this study, metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City in Hebei Province, China, by using machine learning algorithms. In total, 133 historical debris flow records and 16 related factors were selected. The support vector machine (SVM) was first used as the base classifier, and then a hybrid model was introduced by a two-step process. First, the particle swarm optimization (PSO) algorithm was employed to select the SVM model hyperparameters. Second, two feature selection algorithms, namely principal component analysis (PCA) and PSO, were integrated into the PSO-based SVM model, which generated the PCA-PSO-SVM and FS-PSO-SVM models, respectively. Three statistical metrics (accuracy, recall, and specificity) and the area under the receiver operating characteristic curve (AUC) were employed to evaluate and validate the performance of the models. The results indicated that the feature selection-based models exhibited the best performance, followed by the PSO-based SVM and SVM models. Moreover, the performance of the FS-PSO-SVM model was better than that of the PCA-PSO-SVM model, showing the highest AUC, accuracy, recall, and specificity values in both the training and testing processes. It was found that the selection of optimal features is crucial to improving the reliability of debris flow susceptibility assessment results. Moreover, the PSO algorithm was found to be not only an effective tool for hyperparameter optimization, but also a useful feature selection algorithm for improving the prediction accuracy of debris flow susceptibility with machine learning algorithms. The high and very high debris flow susceptibility zones cover approximately 38.01% of the study area, where debris flow may occur under intensive human activities and heavy rainfall events.
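The PSO step for hyperparameter selection can be sketched with a bare-bones particle swarm optimizer. The quadratic surrogate below stands in for the SVM's cross-validation error surface (which would be far more expensive to evaluate), and the inertia and acceleration constants are common textbook defaults, not the study's settings.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimization (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))    # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()                                     # personal bests
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                 # global best
    w_inert, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w_inert * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                       # keep inside bounds
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Hypothetical "CV error" surface over (log10 C, log10 gamma); optimum at (1, -2).
def cv_error(theta):
    return (theta[0] - 1.0) ** 2 + (theta[1] + 2.0) ** 2

best, best_f = pso_minimize(cv_error, bounds=[(-3, 3), (-5, 1)])
```

In the real pipeline, `cv_error` would train an SVM with the candidate hyperparameters and return a cross-validated loss; the same loop also supports binary feature-selection encodings.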
Among steganalysis techniques, detection of MV (motion vector) domain-based video steganography in the HEVC (High Efficiency Video Coding) standard remains a challenging issue. To improve detection performance, this paper proposes a steganalysis method that can effectively detect MV-based steganography in HEVC. First, we define the local optimality of MVP (Motion Vector Prediction) based on the technology of AMVP (Advanced Motion Vector Prediction). Second, we analyze how, in HEVC video, message embedding using either the MVP index or the MVD (Motion Vector Difference) may destroy this optimality of the MVP. We then define the optimal rate of the MVP as a steganalysis feature. Finally, we conduct steganalysis detection experiments on two general datasets for three popular steganography methods and compare the performance with four state-of-the-art steganalysis methods. The experimental results demonstrate the effectiveness of the proposed feature set. Furthermore, our method stands out for its practical applicability, requiring no model training and exhibiting low computational complexity, making it a viable solution for real-world scenarios.
Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially in the event of a noisy or unreliable satellite signal. Toward this end, we present a novel framework for long-range aircraft geo-localization that only requires a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed for the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce the computational expense of a huge number of initial particles, and a credibility indicator is designed to avoid localization mistakes in the particle filter stage. An experimental evaluation of the approach based on flight data is provided. On a flight at a height of 0.2 km over a flight distance of 2 km, the aircraft is geo-localized in a reference map of 11,025 km² using 0.09 km² aerial images without any prior information. The absolute localization error is less than 10 m.
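The improved resampling strategy itself is not detailed here, but the baseline it builds on, systematic resampling of a weighted particle set, can be sketched as follows (the standard formulation, not the authors' variant).

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: one uniform draw, n evenly spaced pointers.
    Returns the indices of the particles to keep (duplicates allowed)."""
    weights = np.asarray(weights, dtype=float)
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # stratified pointers in [0, 1)
    cumulative = np.cumsum(weights / weights.sum())
    cumulative[-1] = 1.0                            # guard against round-off
    return np.searchsorted(cumulative, positions)

rng = np.random.default_rng(0)
# Degenerate weights: all mass on particle 3 -> every copy is particle 3.
idx_degenerate = systematic_resample([0.0, 0.0, 0.0, 1.0, 0.0], rng)
# Uniform weights: each particle survives exactly once.
idx_uniform = systematic_resample([0.25, 0.25, 0.25, 0.25], rng)
```

Systematic resampling touches each particle once and keeps the sampling variance low, which matters when the initial particle count is very large, as in the framework above.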
Hamilton energy, which reflects the energy variation of systems, is one of the crucial instruments used to analyze the characteristics of dynamical systems. Here we propose a method to deduce the Hamilton energy of existing systems. The derivation process consists of three steps: step 1, decomposing the vector field; step 2, solving the Hamilton energy function; and step 3, verifying uniqueness. To make it easy to choose an appropriate decomposition method, we propose a classification criterion based on the form of the system state variables: type-I vector fields can be decomposed directly, while type-II vector fields are decomposed via exterior differentiation. Moreover, exterior differentiation is used to represent the curl of low- and high-dimensional vector fields in the decomposition process. Finally, we derive the Hamilton energy functions of six classical systems and analyze the relationship between Hamilton energy and dynamic behavior. This solution provides a new approach for deducing the Hamilton energy function, especially in high-dimensional systems.
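The decomposition at the heart of step 1 and the resulting energy balance can be summarized in the commonly used form below (a sketch of the standard criterion; the paper's exterior-differentiation machinery is what generalizes the curl term beyond three dimensions):

```latex
\dot{\mathbf{x}} = F(\mathbf{x}) = F_c(\mathbf{x}) + F_d(\mathbf{x}),
\qquad
\nabla H(\mathbf{x})^{\top} F_c(\mathbf{x}) = 0,
\qquad
\dot{H} = \nabla H(\mathbf{x})^{\top} F_d(\mathbf{x}),
```

where $F_c$ is the conservative part, which leaves the Hamilton energy $H$ unchanged, and $F_d$ is the dissipative part, through which the energy varies along trajectories.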
Ensemble prediction is widely used to represent the uncertainty of single deterministic Numerical Weather Prediction (NWP) caused by errors in initial conditions (ICs). The traditional Singular Vector (SV) initial perturbation method tends to capture only synoptic-scale initial uncertainty, rather than mesoscale uncertainty, in global ensemble prediction. To address this issue, a multiscale SV initial perturbation method based on the China Meteorological Administration Global Ensemble Prediction System (CMA-GEPS) is proposed to quantify multiscale initial uncertainty. The approach entails calculating multiscale SVs at different resolutions, with multiple linearized physical processes, to capture fast-growing perturbations from the mesoscale to the synoptic scale in target areas, and combining these SVs using a Gaussian sampling method with amplitude coefficients to generate initial perturbations. Following that, the energy norm, energy spectrum, and structure of the multiscale SVs and their impact on GEPS are analyzed based on a batch experiment in different seasons. The results show that the multiscale SV initial perturbations possess more energy and capture more mesoscale uncertainties than the traditional single-SV method. Meanwhile, multiscale SV initial perturbations reflect the strongest dynamical instability in the target areas. Compared to single-scale SVs, their performance in global ensemble prediction (i) improves the relationship between the ensemble spread and the root-mean-square error and (ii) provides better probabilistic forecast skill for atmospheric circulation during the late forecast period and for short- to medium-range precipitation. This study provides scientific evidence and application foundations for the design and development of a multiscale SV initial perturbation method for the GEPS.
With widespread data collection and processing, privacy-preserving machine learning has become increasingly important in addressing privacy risks to individuals. The support vector machine (SVM) is one of the most elementary models of machine learning, and privacy issues surrounding SVM classifier training have attracted increasing attention. In this paper, we investigate Differential Privacy-compliant Federated Machine Learning with Dimensionality Reduction, called FedDPDR-DPML, which greatly improves data utility while providing strong privacy guarantees. In distributed learning scenarios, multiple participants usually hold unbalanced or small amounts of data. Therefore, FedDPDR-DPML enables multiple participants to collaboratively learn a global model based on weighted model averaging and knowledge aggregation, after which the server distributes the global model to each participant to improve local data utility. For high-dimensional data, we adopt differential privacy in both the principal component analysis (PCA)-based dimensionality reduction phase and the SVM classifier training phase, which improves model accuracy while achieving strict differential privacy protection. In addition, we train differentially private (DP) SVM classifiers by adding noise to the objective function itself, leading to better data utility. Extensive experiments on three high-dimensional datasets demonstrate that FedDPDR-DPML can achieve high accuracy while ensuring strong privacy protection.
The distribution of data has a significant impact on the results of classification. When the distribution of one class is insignificant compared to that of another, data imbalance occurs, resulting in rising outlier values and noise; the speed and performance of classification can therefore be greatly affected. Given the above problems, this paper starts from the motivation and mathematical representation of classification and puts forward a new classification method based on the relationship between different classification formulations. Combining the vector characteristics of the actual problem with the choice of matrix characteristics, we first analyze ordinal regression and introduce slack variables to solve the constraint problem of isolated points. We then introduce fuzzy factors, on the basis of the support vector machine, to handle the gaps between isolated points, and introduce cost control to solve the problem of sample skew. Finally, based on the bi-boundary support vector machine, a two-step weight-setting twin classifier is constructed. This helps to identify multiple tasks with feature-selected patterns without the need for additional optimizers, addressing large-scale classification problems with very low inter-category distribution gaps that previous methods could not deal with effectively.
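For reference, the slack-variable construction that the method builds on is the standard soft-margin SVM shown below (the cost parameter $C$ penalizes the slacks; using class-dependent costs $C_+$ and $C_-$ is one common way to address the sample skew mentioned above):

```latex
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}} \;\;
\frac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad
y_i \left( \mathbf{w}^{\top} \mathbf{x}_i + b \right) \ge 1 - \xi_i,
\qquad \xi_i \ge 0, \;\; i = 1, \dots, n,
```

where each slack $\xi_i \ge 0$ measures how far sample $i$ falls inside (or beyond) the margin, which is what allows isolated points to violate the hard constraints.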
The vector vortex beam (VVB) has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications. However, a VVB is unavoidably affected by atmospheric turbulence (AT) when it propagates through a free-space optical communication environment, which results in detection errors at the receiver. In this paper, we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT, in which a diffractive deep neural network (DDNN) is designed and trained to classify the intensity distribution of the input distorted VVBs, and the horizontal polarization direction of the input distorted beam is adopted as the feature for classification through the DDNN. The numerical simulations and experimental results demonstrate that the proposed scheme achieves high accuracy in classification tasks: the energy distribution percentage remains above 95% from weak to medium AT, and the classification accuracy remains above 95% for various turbulence strengths. The scheme converges faster and is more accurate than one based on a convolutional neural network.
This article delves into the performance and utilization of Support Vector Machines (SVMs) for the critical task of forest fire detection using image datasets. With the increasing threat of forest fires to ecosystems and human settlements, the need for rapid and accurate detection systems is of utmost importance. SVMs, renowned for their strong classification capabilities, exhibit proficiency in recognizing patterns associated with fire within images. By training on labeled data, SVMs acquire the ability to identify distinctive attributes of fire, such as flames, smoke, or alterations in the visual characteristics of the forest area. The article thoroughly examines the use of SVMs, covering crucial elements such as data preprocessing, feature extraction, and model training, and rigorously evaluates parameters such as accuracy, efficiency, and practical applicability. The knowledge gained from this study aids in the development of efficient forest fire detection systems, enabling prompt responses and improving disaster management. Moreover, the correlation between SVM accuracy and the difficulties presented by high-dimensional datasets is carefully investigated and demonstrated through a revealing case study, and the relationship between accuracy scores and the different resolutions used for resizing the training datasets is also discussed. These comprehensive studies yield a definitive overview of the difficulties faced and of the areas requiring further improvement and focus.
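A minimal linear SVM of the kind evaluated above can be sketched with hinge-loss stochastic subgradient descent on toy two-class data. This is a from-scratch stand-in, not the article's image pipeline: the two-dimensional "fire vs. no-fire" feature vectors are invented, whereas a real detector would train on features extracted from images.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.1, epochs=50, seed=0):
    """Linear SVM via Pegasos-style stochastic subgradient descent.
    y must be in {-1, +1}; returns the weight vector and bias (w, b)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y[i] * (X[i] @ w + b)
            w *= (1.0 - eta * lam)         # shrink from the L2 regularizer
            if margin < 1:                 # hinge loss is active
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

# Toy two-class "fire vs. no-fire" feature vectors (hypothetical).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+2.0, 1.0, size=(100, 2)),
               rng.normal(-2.0, 1.0, size=(100, 2))])
y = np.array([1] * 100 + [-1] * 100)
w, b = train_linear_svm(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
```

For non-linearly-separable image features, a kernelized SVM or an explicit feature map would replace this linear decision function.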
Steganography algorithms are methods of hiding data transfers in media files. Several machine learning architectures have been presented recently to improve stego image identification performance by using spatial information, and these methods have made it feasible to handle a wide range of problems associated with image analysis. Information embedding methods traditionally use images with little information or a low payload, but the goal of contemporary research is to employ high-payload images for classification. To address the need for both low- and high-payload images, this work provides a machine learning approach to steganographic image classification that uses the Curvelet transform to efficiently extract characteristics from both types of images. A Support Vector Machine (SVM), a commonplace classification technique, is employed to determine whether an image is a stego or cover image. The Wavelet Obtained Weights (WOW), Spatial Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Steganography (HUGO), and Minimizing the Power of Optimal Detector (MiPOD) steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposed method. Using WOW at several payloads, the proposed approach achieves a classification accuracy of 98.60%, exhibiting its superiority over state-of-the-art methods.
The perfect hybrid vector vortex beam (PHVVB) with a helical phase wavefront structure has aroused significant interest in recent years, as its beam waist does not expand with the topological charge (TC). In this work, we investigate the spatial quantum coherent modulation effect with the PHVVB based on an atomic medium, and we observe the absorption characteristics of the PHVVB with different TCs under varying magnetic fields. We find that the transmission spectrum linewidth of the PHVVB can be effectively maintained regardless of the TC, although the width of the transmission peaks increases slightly as the beam size expands in hot atomic vapor. This distinctive quantum coherence phenomenon, demonstrated by the interaction of an atomic medium with a hybrid vector-structured beam, is anticipated to open up new opportunities for quantum coherence modulation and accurate magnetic field measurement.
Funding: supported by the National Natural Science Foundation of China (62033010) and the Qing Lan Project of Jiangsu Province (R2023Q07).
Abstract: This study examined the performance of the Siegel-Tukey and Savage tests on data sets with heterogeneous variances. The analysis, considering Normal, Platykurtic, and Skewed distributions and a standard deviation ratio of 1, was conducted for both small and large sample sizes. For small sample sizes, two main categories were established: equal and different sample sizes. Analyses were performed using Monte Carlo simulations with 20,000 repetitions for each scenario, and the simulations were evaluated using SAS software. For small sample sizes, the Type I error rate of the Siegel-Tukey test generally ranged from 0.045 to 0.055, while that of the Savage test ranged from 0.016 to 0.041. Similar trends were observed for the Platykurtic and Skewed distributions. In scenarios with different sample sizes, the Savage test generally exhibited lower Type I error rates. The same two categories, equal and different sample sizes, were examined for large sample sizes, where the Type I error rate of the Siegel-Tukey test ranged from 0.047 to 0.052 and that of the Savage test from 0.043 to 0.051. With equal sample sizes, both tests generally had lower error rates, and the Savage test provided more consistent results for large sample sizes. In conclusion, the Savage test yields lower Type I error rates for small sample sizes, while both tests have similar error rates for large sample sizes. These findings suggest that the Savage test may be the more reliable option when analyzing variance differences.
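The Monte Carlo procedure for estimating a Type I error rate can be sketched as follows. This illustrative example substitutes a plain Wilcoxon rank-sum statistic (normal approximation, no tie correction) for the Siegel-Tukey and Savage variants, which differ only in how the ranks are scored; the 2,000 repetitions here are far fewer than the study's 20,000.

```python
import random
from math import erf, sqrt

def rank_sum_p(x, y):
    """Two-sided p-value of the rank-sum test via the normal approximation.

    Stand-in statistic only; Siegel-Tukey re-ranks from the extremes
    inward and Savage uses expected exponential order statistics.
    """
    combined = sorted((v, 0 if i < len(x) else 1) for i, v in enumerate(x + y))
    r = sum(rank + 1 for rank, (_, group) in enumerate(combined) if group == 0)
    n, m = len(x), len(y)
    mu = n * (n + m + 1) / 2.0
    sigma = sqrt(n * m * (n + m + 1) / 12.0)
    z = (r - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

def type1_rate(n, m, reps=2000, alpha=0.05, seed=1):
    """Estimate the Type I error rate under H0 (both samples N(0, 1))."""
    rng = random.Random(seed)
    hits = sum(
        rank_sum_p([rng.gauss(0, 1) for _ in range(n)],
                   [rng.gauss(0, 1) for _ in range(m)]) < alpha
        for _ in range(reps)
    )
    return hits / reps

rate = type1_rate(20, 20)  # should land near the nominal 0.05 level
```

Replacing the rank scores in `rank_sum_p` with Siegel-Tukey or Savage scores reproduces the study's design for any distribution the generator in `type1_rate` is swapped to.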
Funding: Supported by the Natural Science Fund for Colleges and Universities in Jiangsu Province (No. 07KJD170012) and the Natural Science Fund of Huaihai Institute of Technology (No. Z2008009).
Abstract: Coastal sediment type maps are widely used in marine economic and engineering activities, but traditional mapping methods have limitations stemming from their intrinsic assumptions or subjectivity. This paper proposes a non-parametric indicator kriging method for generating coastal sediment maps. The method effectively avoids mapping subjectivity, imposes no special requirement that the sample data satisfy second-order stationarity or normality, and also provides useful information for quantitatively evaluating mapping uncertainty. Application of the method to the sea area south of Lianyungang produced much more convincing mapping results than traditional methods such as IDW, kriging, and Voronoi diagrams under the same conditions, demonstrating that the proposed method is applicable and of great practical value.
Abstract: Short-term traffic flow forecasting is one of the core technologies for realizing traffic flow guidance. In view of the repeatedly changing characteristics of traffic flow, this article proposes a short-term traffic flow forecasting method based on a three-layer K-nearest neighbor non-parametric regression algorithm. Specifically, two screening layers based on shape similarity are introduced into the K-nearest neighbor non-parametric regression method, and the forecasting results are output using a weighted average over the reciprocals of the shape-similarity distances together with a most-similar-point distance adjustment method. The experimental results show that the proposed algorithm improves the predictive ability of the traditional K-nearest neighbor non-parametric regression method and greatly enhances the accuracy and real-time performance of short-term traffic flow forecasting.
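The reciprocal-distance weighting at the core of such K-nearest-neighbor regression can be sketched as below. This is a hypothetical minimal example, not the paper's algorithm: the two shape-similarity screening layers and the most-similar-point distance adjustment are omitted, and `knn_forecast` is an invented name.

```python
def knn_forecast(history, query, k=3):
    """Weighted k-nearest-neighbour regression sketch.

    history: list of (feature_vector, observed_flow) pairs.
    Weights are the reciprocals of the Euclidean distances, echoing
    the weighted averaging on reciprocal similarity distances.
    """
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    neighbours = sorted(history, key=lambda pair: dist(pair[0], query))[:k]
    eps = 1e-9  # guard against a zero distance (exact match in history)
    weights = [1.0 / (dist(f, query) + eps) for f, _ in neighbours]
    return sum(w * y for w, (_, y) in zip(weights, neighbours)) / sum(weights)

history = [([1.0, 2.0], 10.0), ([1.1, 2.1], 11.0),
           ([5.0, 5.0], 40.0), ([0.9, 1.9], 9.0)]
pred = knn_forecast(history, [1.0, 2.0], k=3)
```

An exact match in the history dominates the weighted average, which is the behavior the reciprocal weighting is designed to give.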
Funding: Supported in part by the National Natural Science Foundation of China under Grants 61841103, 61673164, and 61602397; in part by the Natural Science Foundation of Hunan Province under Grants 2016JJ2041 and 2019JJ50106; in part by the Key Project of the Education Department of Hunan Province under Grant 18B385; and in part by the Graduate Research Innovation Projects of Hunan Province under Grants CX2018B805 and CX2018B813.
Abstract: Detecting moving objects against a stationary background is an important problem in visual surveillance systems. However, the traditional background subtraction method fails when the background is not completely stationary and involves certain dynamic changes. In this paper, following the basic steps of the background subtraction method, a novel non-parametric moving object detection method is proposed based on an improved ant colony algorithm using a Markov random field. Concretely, the contributions are as follows: 1) a new non-parametric strategy models the background through an improved kernel density estimation that uses an adaptive bandwidth and fused features combining colours, gradients, and positions; 2) a Markov random field method built on this adaptive background model, constrained by the spatial context, is proposed to extract objects; and 3) the posterior function is maximized efficiently by an improved ant colony system algorithm. Extensive experiments show that the proposed method outperforms many existing state-of-the-art methods.
Funding: Supported by the Science and Technology on Electro-Optic Control Laboratory and the Fund of Aeronautical Science (No. 201951048001).
Abstract: The ability to model an imaging process is crucial to vision measurement. The non-parametric imaging model describes an imaging process as a pixel cluster in which each pixel is related to a spatial ray originating from an object point. However, a non-parametric model requires a sophisticated calculation process or high-cost devices to obtain a massive number of parameters, and these disadvantages limit the application of such camera models. We therefore propose a novel camera model calibration method based on a single-axis rotational target. The rotational vision target offers 3D control points without requiring detailed pose information of the target. A radial basis function (RBF) network is introduced to map 3D coordinates to 2D image coordinates. We subsequently derive the optimization formulation of the imaging model parameters and compute the parameters from the given control points. The model is extended to the stereo cameras widely used in vision measurement. Experiments evaluating the proposed camera calibration method show that it surpasses traditional methods in accuracy and effectiveness.
Funding: Supported by the National Polytechnic Institute (SIP-20221151, SIP-20220916).
Abstract: This paper addresses the design of an exponential function-based learning law for artificial neural networks (ANNs) with continuous dynamics. The ANN structure is used to obtain a non-parametric model of systems with uncertainties, which are described by a set of nonlinear ordinary differential equations. Two novel adaptive algorithms with a predefined exponential convergence rate adjust the weights of the ANN. The first algorithm includes an adaptive gain depending on the identification error, which accelerates the convergence of the weights and promotes faster convergence between the states of the uncertain system and the trajectories of the neural identifier. The second approach uses a time-dependent sigmoidal gain that forces the identification error to converge to an invariant set characterized by an ellipsoid, whose generalized volume depends on the upper bounds of the uncertainties, perturbations, and modeling errors. Applying the invariant ellipsoid method yields an algorithm that reduces the volume of the convergence region for the identification error. Both adaptive algorithms are derived from a non-standard exponential dependent function and an associated controlled Lyapunov function. Numerical examples demonstrate the improvements enforced by the proposed algorithms by comparing their convergence against classical schemes with non-exponential continuous learning methods; the proposed identifiers achieve faster convergence to an invariant set of smaller dimensions.
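The error-dependent gain of the first algorithm can be written schematically as follows. The notation here is generic and assumed for illustration (it is not taken from the paper): $\hat{W}$ is the ANN weight matrix, $\Delta = \hat{x} - x$ the identification error, $\sigma(\cdot)$ the activation vector, and $\Gamma_0,\ \alpha > 0$ design constants.

```latex
\dot{\hat{W}} \;=\; -\,\Gamma(\Delta)\,\Delta\,\sigma(\hat{x})^{\top},
\qquad
\Gamma(\Delta) \;=\; \Gamma_0\, e^{\alpha \lVert \Delta \rVert},
```

so the adaptation gain $\Gamma(\Delta)$ grows exponentially with the identification error, which is the mechanism by which a larger error drives faster weight convergence; the second algorithm instead makes the gain an explicit sigmoidal function of time.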
Abstract: A quantitative approach was used to study the changing tendency of drought indicators in Vietnam through a case study of Ninh Thuan province. The research data are temperature and precipitation records of 11 stations inside and outside Ninh Thuan province from 1986 to 2016. The author uses non-parametric analysis methods, namely the Mann-Kendall (MK) test and the Theil-Sen (Sen's slope) estimator, and, for drought analysis, the Standardized Precipitation Index (SPI) and the Moisture Index (MI). Two software packages were used for the calculations: ProUCL 5.1 from the US Environmental Protection Agency and MAKESENS 1.0 from the Finnish Meteorological Institute. The results show that meteorological drought will decrease in the future, with areas such as Phan Rang, Song Pha, Quan The, and Ba Thap showing a very clear increasing tendency, while the tendencies at Tam My and Nhi Ha are much weaker. For agricultural drought, the average MI increased by 0.013 per year, with Song Pha station showing the highest increase at 0.03 per year and Nhi Ha the lowest at 0.001 per year. The forecasts also show that by the end of the 21st century the SPI tends to decrease, with SPI 1 at −0.68, SPI 3 at −0.40, SPI 6 at −0.25, and SPI 12 at 0.42; correspondingly, the MI is forecast to increase by 0.013 per year, reaching 0.93 by 2035, 1.13 by 2050, 1.46 by 2075, and 1.79 by 2100. The results will be useful to policymakers, environmental resource management agencies, and researchers developing solutions to adapt to and mitigate drought in the context of climate change.
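The two non-parametric trend tools named above are short enough to sketch directly. The example below implements the Mann-Kendall S statistic and the Theil-Sen slope for an annual series at unit time spacing; significance testing of S (the variance and z-score) is omitted for brevity.

```python
import statistics

def mann_kendall_s(series):
    """Mann-Kendall S: concordant minus discordant pairs over all i < j."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

def sens_slope(series):
    """Theil-Sen estimator: the median of all pairwise slopes."""
    slopes = [
        (series[j] - series[i]) / (j - i)
        for i in range(len(series) - 1)
        for j in range(i + 1, len(series))
    ]
    return statistics.median(slopes)

# A short upward-trending series (illustrative MI-like values).
series = [0.90, 0.92, 0.95, 0.93, 0.97, 1.00]
s = mann_kendall_s(series)       # positive S indicates an increasing trend
slope = sens_slope(series)       # robust per-step rate of increase
```

A positive S together with a positive Sen's slope is the pattern the study reports for the stations with clearly increasing indices.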
Abstract: The effect of treatment on a patient's outcome can be determined through the treatment's impact on biological events. Observing patients under treatment for a certain period can help determine whether there is any change in a patient's biomarker. It is important to study how the biomarker changes due to treatment and whether individuals located in separate centers can be clustered together, since they might have different distributions. The study is motivated by a Bayesian non-parametric mixture model, which is more flexible than Bayesian parametric models and is capable of borrowing information across different centers, allowing them to be grouped together. To this end, this research modeled biological markers while taking surrogate markers into consideration. The study employed the nested Dirichlet process prior, which readily accommodates different distributions for several centers, with centers from the same Dirichlet process component clustered together automatically. Samples were drawn from the posterior using a Markov chain Monte Carlo algorithm. The model is illustrated with a simulation study, which showed that the model was capable of clustering the data into different groups.
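A building block of any Dirichlet process prior, nested or not, is the stick-breaking construction of its mixture weights, which can be sketched in a few lines. This toy example shows only that building block; the nesting used in the study (a Dirichlet process whose atoms are themselves distributions over centers) is not reproduced.

```python
import random

def stick_breaking_weights(alpha, n, rng):
    """First n stick-breaking weights of a Dirichlet process DP(alpha).

    Break a unit stick repeatedly: each break fraction is Beta(1, alpha),
    and the piece broken off becomes the next mixture weight.
    """
    weights, remaining = [], 1.0
    for _ in range(n):
        b = rng.betavariate(1.0, alpha)
        weights.append(remaining * b)
        remaining *= (1.0 - b)
    return weights

rng = random.Random(3)
w = stick_breaking_weights(alpha=1.0, n=50, rng=rng)
```

Smaller `alpha` concentrates mass on the first few weights (fewer effective clusters), which is how the prior controls how readily centers share a component.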
Funding: Supported by the Second Tibetan Plateau Scientific Expedition and Research Program (Grant No. 2019QZKK0904), the Natural Science Foundation of Hebei Province (Grant No. D2022403032), and the S&T Program of Hebei (Grant No. E2021403001).
Abstract: The selection of important factors in machine learning-based susceptibility assessments is crucial to obtaining reliable susceptibility results. In this study, metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City in Hebei Province, China, using machine learning algorithms. In total, 133 historical debris flow records and 16 related factors were selected. The support vector machine (SVM) was first used as the base classifier, and then a hybrid model was introduced by a two-step process. First, the particle swarm optimization (PSO) algorithm was employed to select the SVM model hyperparameters. Second, two feature selection algorithms, namely principal component analysis (PCA) and PSO, were integrated into the PSO-based SVM model, generating the PCA-PSO-SVM and FS-PSO-SVM models, respectively. Three statistical metrics (accuracy, recall, and specificity) and the area under the receiver operating characteristic curve (AUC) were employed to evaluate and validate the performance of the models. The results indicated that the feature selection-based models exhibited the best performance, followed by the PSO-based SVM and SVM models. Moreover, the FS-PSO-SVM model outperformed the PCA-PSO-SVM model, showing the highest AUC, accuracy, recall, and specificity values in both the training and testing processes. It was found that the selection of optimal features is crucial to improving the reliability of debris flow susceptibility assessment results, and that the PSO algorithm is not only an effective tool for hyperparameter optimization but also a useful feature selection algorithm for improving the prediction accuracy of debris flow susceptibility with machine learning algorithms. The high and very high debris flow susceptibility zones cover 38.01% of the study area, where debris flows may occur under intensive human activities and heavy rainfall events.
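The hyperparameter-search role that PSO plays in the hybrid models above can be illustrated with a bare-bones swarm minimizing an arbitrary objective. This is a generic sketch, not the paper's configuration: the SVM cross-validation objective is replaced by a placeholder function, and the inertia and acceleration coefficients are common textbook defaults.

```python
import random

def pso_minimise(f, dim, n_particles=20, iters=100, seed=0):
    """Minimise f over [-5, 5]^dim with a basic particle swarm.

    Each particle tracks its personal best; the swarm shares one
    global best that attracts every velocity update.
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Placeholder objective (sphere function) standing in for SVM CV error.
best, val = pso_minimise(lambda p: sum(x * x for x in p), dim=2)
```

In the paper's setting, `dim` would index the SVM hyperparameters (and, for FS-PSO-SVM, binary feature-inclusion flags), and `f` would be the cross-validated classification error.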
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62272478, 62202496, and 61872384).
Abstract: Among steganalysis techniques, detection of MV (motion vector) domain-based video steganography in the HEVC (High Efficiency Video Coding) standard remains a challenging issue. To improve detection performance, this paper proposes a steganalysis method that can perfectly detect MV-based steganography in HEVC. First, we define the local optimality of the MVP (Motion Vector Prediction) based on the AMVP (Advanced Motion Vector Prediction) technology. Second, we show that in HEVC video, message embedding via either the MVP index or the MVD (Motion Vector Difference) may destroy this optimality of the MVP. We then define the optimal rate of the MVP as a steganalysis feature. Finally, we conduct steganalysis detection experiments on two general datasets for three popular steganography methods and compare the performance with four state-of-the-art steganalysis methods. The experimental results demonstrate the effectiveness of the proposed feature set. Furthermore, our method stands out for its practical applicability: it requires no model training and exhibits low computational complexity, making it a viable solution for real-world scenarios.
Abstract: Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially in the event of a noisy or unreliable satellite signal. Toward this end, we present a novel framework for large-range aircraft geo-localization that requires only a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed for the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce the computational expense of a huge number of initial particles, and a credibility indicator is designed to avoid localization mistakes in the particle filter stage. An experimental evaluation of the approach based on flight data is provided. On a flight at a height of 0.2 km over a flight distance of 2 km, the aircraft is geo-localized in a reference map of 11,025 km² using 0.09 km² aerial images without any prior information. The absolute localization error is less than 10 m.
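For context on the resampling stage the framework improves upon, the sketch below shows systematic resampling, a standard low-variance scheme: one uniform draw plus evenly spaced offsets selects particle indices in proportion to their weights. This is a generic baseline, not the paper's improved strategy, which additionally copes with very large initial particle sets.

```python
import random

def systematic_resample(weights, rng):
    """Return particle indices drawn in proportion to `weights`.

    A single uniform offset u is shared by n evenly spaced pointers,
    so the selection has lower variance than independent draws.
    """
    n = len(weights)
    total = sum(weights)
    cumulative, cum = [], 0.0
    for w in weights:
        cum += w / total
        cumulative.append(cum)
    u = rng.random()
    indices, j = [], 0
    for i in range(n):
        p = (u + i) / n  # evenly spaced pointers in [0, 1)
        while cumulative[j] < p:
            j += 1
        indices.append(j)
    return indices

rng = random.Random(0)
idx = systematic_resample([0.1, 0.1, 0.7, 0.1], rng)
```

A particle carrying most of the weight is duplicated several times while low-weight particles are dropped, concentrating the filter on plausible aircraft positions.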
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12305054, 12172340, and 12371506).
Abstract: Hamilton energy, which reflects the energy variation of systems, is one of the crucial instruments for analyzing the characteristics of dynamical systems. Here we propose a method to deduce the Hamilton energy of existing systems. The derivation consists of three steps: step 1, decomposing the vector field; step 2, solving the Hamilton energy function; and step 3, verifying uniqueness. To ease the choice of an appropriate decomposition method, we propose a classification criterion based on the form of the system state variables: type-I vector fields can be decomposed directly, while type-II vector fields are decomposed via exterior differentiation. Moreover, exterior differentiation is used to represent the curl of low-to-high dimensional vector fields in the decomposition process. Finally, we derive the Hamilton energy functions of six classical systems and analyze the relationship between Hamilton energy and dynamical behavior. This approach provides a new way to deduce the Hamilton energy function, especially in high-dimensional systems.
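The three-step derivation follows the Helmholtz-type splitting commonly used for Hamilton energy; the generic notation below is assumed for illustration and is not necessarily the paper's. The vector field is split into a conservative part $F_c$ and a dissipative part $F_d$:

```latex
\dot{\mathbf{x}} \;=\; F(\mathbf{x}) \;=\; F_c(\mathbf{x}) + F_d(\mathbf{x}),
\qquad
\nabla H^{\top} F_c(\mathbf{x}) \;=\; 0,
\qquad
\dot{H} \;=\; \nabla H^{\top} F_d(\mathbf{x}),
```

so step 1 chooses the split, step 2 solves the first-order PDE $\nabla H^{\top} F_c = 0$ for the Hamilton energy $H$, and step 3 checks that the resulting $H$ is unique; the energy then varies only through the dissipative part, as the last relation shows.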
Funding: Supported by the Joint Funds of the Chinese National Natural Science Foundation (NSFC) (Grant No. U2242213), the National Key Research and Development (R&D) Program of the Ministry of Science and Technology of China (Grant No. 2021YFC3000902), and the National Science Foundation for Young Scholars (Grant No. 42205166).
Abstract: Ensemble prediction is widely used to represent the uncertainty of single deterministic Numerical Weather Prediction (NWP) caused by errors in initial conditions (ICs). The traditional Singular Vector (SV) initial perturbation method tends to capture only synoptic-scale initial uncertainty, rather than mesoscale uncertainty, in global ensemble prediction. To address this issue, a multiscale SV initial perturbation method based on the China Meteorological Administration Global Ensemble Prediction System (CMA-GEPS) is proposed to quantify multiscale initial uncertainty. The approach entails calculating multiscale SVs at different resolutions with multiple linearized physical processes to capture fast-growing perturbations from the mesoscale to the synoptic scale in target areas, and combining these SVs by a Gaussian sampling method with amplitude coefficients to generate initial perturbations. Following that, the energy norm, energy spectrum, and structure of the multiscale SVs and their impact on GEPS are analyzed based on a batch experiment in different seasons. The results show that multiscale SV initial perturbations can possess more energy and capture more mesoscale uncertainties than the traditional single-SV method, and can reflect the strongest dynamical instability in target areas. Compared to single-scale SVs, their performance in global ensemble prediction is shown to (i) improve the relationship between the ensemble spread and the root-mean-square error and (ii) provide better probability forecast skill for atmospheric circulation during the late forecast period and for short- to medium-range precipitation. This study provides scientific evidence and application foundations for the design and development of a multiscale SV initial perturbation method for the GEPS.
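The Gaussian-sampling combination step can be sketched as below: each perturbation is a random linear combination of the singular vectors, with standard-normal coefficients. This is a toy illustration on short vectors; the per-scale amplitude tuning, which is part of the paper's contribution, is reduced here to a single assumed `scale` factor.

```python
import random

def combine_perturbations(svs, rng, scale=1.0):
    """Combine singular vectors into one initial perturbation.

    p = scale * sum_k a_k * v_k, with coefficients a_k ~ N(0, 1),
    so each ensemble member receives a different random mixture.
    """
    coeffs = [rng.gauss(0.0, 1.0) for _ in svs]
    dim = len(svs[0])
    return [scale * sum(a * v[d] for a, v in zip(coeffs, svs))
            for d in range(dim)]

rng = random.Random(42)
# Two toy orthogonal "singular vectors" in a 3-component state space.
perturbation = combine_perturbations([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]], rng)
```

Drawing fresh coefficients per ensemble member yields perturbations that span the subspace of the fastest-growing directions rather than repeating a single SV.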
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62102311, 62202377, and 62272385); in part by the Natural Science Basic Research Program of Shaanxi (Nos. 2022JQ-600, 2022JM-353, and 2023-JC-QN-0327); in part by the Shaanxi Distinguished Youth Project (No. 2022JC-47); in part by the Scientific Research Program Funded by the Shaanxi Provincial Education Department (No. 22JK0560); in part by the Distinguished Youth Talents of Shaanxi Universities; and in part by the Youth Innovation Team of Shaanxi Universities.
Abstract: With widespread data collection and processing, privacy-preserving machine learning has become increasingly important for addressing privacy risks to individuals. The support vector machine (SVM) is one of the most elementary learning models in machine learning, and privacy issues surrounding SVM classifier training have attracted increasing attention. In this paper, we investigate Differential Privacy-compliant Federated Machine Learning with Dimensionality Reduction, called FedDPDR-DPML, which greatly improves data utility while providing strong privacy guarantees. Considering that in distributed learning scenarios multiple participants usually hold unbalanced or small amounts of data, FedDPDR-DPML enables multiple participants to collaboratively learn a global model based on weighted model averaging and knowledge aggregation, after which the server distributes the global model to each participant to improve local data utility. For high-dimensional data, we adopt differential privacy in both the principal component analysis (PCA)-based dimensionality reduction phase and the SVM classifier training phase, which improves model accuracy while achieving strict differential privacy protection. In addition, we train differential privacy (DP)-compliant SVM classifiers by adding noise to the objective function itself, leading to better data utility. Extensive experiments on three high-dimensional datasets demonstrate that FedDPDR-DPML can achieve high accuracy while ensuring strong privacy protection.
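As background for the differential privacy mechanisms mentioned above, the sketch below shows the simplest one: releasing a bounded mean under epsilon-DP with the Laplace mechanism. This is output perturbation only; the paper instead perturbs the SVM objective function itself, which it reports gives better utility, and that construction is not reproduced here.

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw Laplace(0, scale) noise via inverse-CDF sampling.

    Assumes rng.random() returns values in [0, 1); u = -0.5 exactly
    (probability ~0) would overflow the log.
    """
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, lo, hi, epsilon, rng):
    """epsilon-DP release of the mean of values clipped to [lo, hi].

    Clipping bounds each record's influence, so changing one record
    moves the mean by at most (hi - lo) / n: the query's sensitivity.
    """
    clipped = [min(max(v, lo), hi) for v in values]
    sensitivity = (hi - lo) / len(clipped)
    return sum(clipped) / len(clipped) + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(7)
m = dp_mean([0.2, 0.4, 0.6, 0.8], 0.0, 1.0, epsilon=1.0, rng=rng)
```

Smaller `epsilon` means stronger privacy and proportionally larger noise; objective perturbation, as in the paper, spends the same privacy budget inside the optimization instead of on the output.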
Funding: Supported by the Hebei Province Key Research and Development Projects (Nos. 20313701D, 19210404D, and 20312701D); the Beijing Key Laboratory Open Project on Mobile Computing and Universal Equipment; the National Social Science Fund of China (17AJL014) and the National Social Science Fund Key Project of China (No. 17AJL014); the Beijing University of Posts and Telecommunications Construction of World-Class Disciplines and Characteristic Development Guidance Special Fund "Cultural Inheritance and Innovation" Project (No. 505019221); the National Natural Science Foundation of China (Nos. U1536112, 81673697, and 61872046); the "Blue Fire Project" (Huizhou) University of Technology Joint Innovation Project (CXZJHZ201729); the Industry-University Cooperation Collaborative Education Projects of the Ministry of Education (Nos. 201902218004, 201902024006, 201901197007, 201901199005, and 201901197001); and the Shijiazhuang Science and Technology Plan Project (236240267A).
Abstract: The distribution of data has a significant impact on classification results. When the distribution of one class is insignificant compared to that of another, data imbalance occurs, which leads to rising outlier values and noise and can greatly affect the speed and performance of classification. Given these problems, this paper starts from the motivation and mathematical representation of classification and puts forward a new classification method based on the relationship between different classification formulations. Combining the vector characteristics of the actual problem with the choice of matrix characteristics, we first analyze ordered regression and introduce slack variables to solve the constraint problem of isolated points. We then introduce fuzzy factors, on the basis of the support vector machine, to address the gaps between isolated points, and introduce cost control to handle sample skew. Finally, based on the bi-boundary support vector machine, a two-step weight-setting twin classifier is constructed. This helps identify multiple tasks with feature-selected patterns without the need for additional optimizers, addressing large-scale classification problems that cannot otherwise deal effectively with very low category distribution gaps.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 62375140 and 62001249) and the Open Research Fund of the National Laboratory of Solid State Microstructures (Grant No. M36055).
Abstract: The vector vortex beam (VVB) has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications. However, a VVB is unavoidably affected by atmospheric turbulence (AT) when it propagates through the free-space optical communication environment, which results in detection errors at the receiver. In this paper, we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT, in which a diffractive deep neural network (DDNN) is designed and trained to classify the intensity distribution of the input distorted VVBs, with the horizontal polarization direction of the input distorted beam adopted as the feature for classification. The numerical simulations and experimental results demonstrate that the proposed scheme achieves high accuracy in classification tasks: the energy distribution percentage remains above 95% from weak to medium AT, and the classification accuracy remains above 95% for various strengths of turbulence. The scheme converges faster and is more accurate than one based on a convolutional neural network.
Abstract: This article analyzes the performance and utilization of Support Vector Machines (SVMs) for the critical task of forest fire detection using image datasets. With the increasing threat of forest fires to ecosystems and human settlements, the need for rapid and accurate detection systems is of utmost importance. SVMs, renowned for their strong classification capabilities, are proficient at recognizing patterns associated with fire within images. By training on labeled data, SVMs acquire the ability to identify distinctive attributes associated with fire, such as flames, smoke, or alterations in the visual characteristics of the forest area. The article thoroughly examines the use of SVMs, covering crucial elements such as data preprocessing, feature extraction, and model training, and rigorously evaluates parameters such as accuracy, efficiency, and practical applicability. The knowledge gained from this study aids in the development of efficient forest fire detection systems, enabling prompt responses and improving disaster management. Moreover, the correlation between SVM accuracy and the difficulties presented by high-dimensional datasets is carefully investigated and demonstrated through a revealing case study, and the relationship between accuracy scores and the different resolutions used for resizing the training datasets is also discussed. These comprehensive studies result in a definitive overview of the difficulties faced and the areas requiring further improvement and focus.
Funding: Financially supported by the Deanship of Scientific Research at King Khalid University under Research Grant Number R.G.P.2/549/44.
Abstract: Steganography algorithms are methods of hiding data transfers in media files. Several machine learning architectures have been presented recently to improve stego image identification performance by using spatial information, making it feasible to handle a wide range of problems associated with image analysis. Embedding methods can use images with little information (low payload), but contemporary research has focused on high-payload images for classification. To address the need for both low- and high-payload images, this work provides a machine learning approach to steganography image classification that uses the Curvelet transform to efficiently extract characteristics from both types of image. A Support Vector Machine (SVM), a commonplace classification technique, is employed to determine whether an image is a stego or cover image. The Wavelet Obtained Weights (WOW), Spatial Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Steganography (HUGO), and Minimizing the Power of Optimal Detector (MiPOD) steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposed method. Using WOW at several payloads, the proposed approach achieves a classification accuracy of 98.60%, exhibiting its superiority over state-of-the-art methods.
Funding: Project supported by the Youth Innovation Promotion Association CAS and the State Key Laboratory of Transient Optics and Photonics Open Topics (Grant No. SKLST202222).
Abstract: The perfect hybrid vector vortex beam (PHVVB) with a helical phase wavefront structure has attracted significant attention in recent years, as its beam waist does not expand with the topological charge (TC). In this work, we investigate the spatial quantum coherent modulation effect of the PHVVB in an atomic medium, and we observe the absorption characteristics of PHVVBs with different TCs under varying magnetic fields. We find that the transmission spectrum linewidth of the PHVVB can be effectively maintained regardless of the TC, although the width of the transmission peaks increases slightly as the beam size expands in hot atomic vapor. This distinctive quantum coherence phenomenon, demonstrated by the interaction of an atomic medium with a hybrid vector-structured beam, is anticipated to open up new opportunities for quantum coherence modulation and accurate magnetic field measurement.