Among steganalysis techniques, detection against MV (motion vector) domain-based video steganography in the HEVC (High Efficiency Video Coding) standard remains a challenging issue. To improve detection performance, this paper proposes a steganalysis method that can reliably detect MV-based steganography in HEVC. Firstly, we define the local optimality of the MVP (Motion Vector Prediction) based on the AMVP (Advanced Motion Vector Prediction) technology. Secondly, we show that in HEVC video, message embedding using either the MVP index or the MVD (Motion Vector Difference) may destroy this optimality of the MVP. We then define the optimal rate of the MVP as a steganalysis feature. Finally, we conduct steganalysis detection experiments on two general datasets for three popular steganography methods and compare the performance with four state-of-the-art steganalysis methods. The experimental results demonstrate the effectiveness of the proposed feature set. Furthermore, our method stands out for its practical applicability, requiring no model training and exhibiting low computational complexity, making it a viable solution for real-world scenarios.
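The "optimal rate of the MVP" feature described above can be sketched as a simple statistic: the fraction of prediction units whose chosen AMVP candidate actually minimizes the MVD coding cost. The cost proxy and data layout below are illustrative assumptions, not the paper's exact formulation:

```python
def mvd_cost(mv, pred):
    # bit-cost proxy: L1 norm of the motion vector difference
    return abs(mv[0] - pred[0]) + abs(mv[1] - pred[1])

def mvp_optimal_rate(units):
    """units: list of (mv, candidates, chosen_idx) per prediction unit.
    A PU's MVP is 'locally optimal' if the chosen AMVP candidate yields
    the minimum MVD cost among all candidates."""
    optimal = 0
    for mv, candidates, chosen in units:
        costs = [mvd_cost(mv, c) for c in candidates]
        if costs[chosen] == min(costs):
            optimal += 1
    return optimal / len(units)

# toy example: two PUs; in the second, the chosen candidate is suboptimal,
# as embedding via the MVP index could cause
units = [
    ((5, 3), [(4, 3), (0, 0)], 0),   # chosen candidate cost 1 vs 8 -> optimal
    ((5, 3), [(4, 3), (0, 0)], 1),   # chosen candidate cost 8 vs 1 -> not optimal
]
print(mvp_optimal_rate(units))  # 0.5
```

In a cover video this rate should stay near 1 for rate-distortion-optimized encoders, so a depressed rate hints at embedding.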
Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially in the event of a noisy or unreliable satellite signal. Toward this end, we present a novel framework for aircraft geo-localization over a large range that requires only a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed for the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce the computational expense of a huge number of initial particles, and a credibility indicator is designed to avoid location mistakes in the particle filter stage. An experimental evaluation of the approach based on flight data is provided. On a flight at a height of 0.2 km over a flight distance of 2 km, the aircraft is geo-localized in a reference map of 11,025 km² using 0.09 km² aerial images without any prior information. The absolute localization error is less than 10 m.
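A shape vector for a building contour and the correlation between two such vectors can be sketched minimally as follows; the turning-angle representation and Pearson correlation here are illustrative choices, not necessarily the paper's exact definitions:

```python
import math

def shape_vector(contour):
    """Edge-direction-angle sequence of a closed polygon [(x, y), ...]."""
    n = len(contour)
    angles = []
    for i in range(n):
        x0, y0 = contour[i]
        x1, y1 = contour[(i + 1) % n]
        angles.append(math.atan2(y1 - y0, x1 - x0))
    return angles

def correlation(a, b):
    """Pearson correlation of two equal-length shape vectors."""
    ma = sum(a) / len(a); mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
bigger = [(0, 0), (2, 0), (2, 2), (0, 2)]  # same shape, different scale
print(correlation(shape_vector(square), shape_vector(bigger)))  # ≈ 1.0
```

Because edge directions are scale-invariant, a contour extracted from imagery and its VMAP counterpart correlate highly even when their absolute sizes differ.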
Hamilton energy, which reflects the energy variation of a system, is one of the crucial instruments used to analyze the characteristics of dynamical systems. Here we propose a method to deduce the Hamilton energy of existing systems. The derivation process consists of three steps: step 1, decomposing the vector field; step 2, solving the Hamilton energy function; and step 3, verifying uniqueness. To make it easy to choose an appropriate decomposition method, we propose a classification criterion based on the form of the system state variables, i.e., type-I vector fields that can be directly decomposed and type-II vector fields decomposed via exterior differentiation. Moreover, exterior differentiation is used to represent the curl of low- and high-dimensional vector fields in the decomposition process. Finally, we exemplify the Hamilton energy functions of six classical systems and analyze the relationship between Hamilton energy and dynamic behavior. This solution provides a new approach for deducing the Hamilton energy function, especially for high-dimensional systems.
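The three-step derivation sketched above follows the usual Helmholtz-type Hamilton energy formalism; in generic notation (not taken from the paper), the vector field is split into a conservative part and a dissipative part:

```latex
\dot{\mathbf{x}} = F(\mathbf{x}) = F_c(\mathbf{x}) + F_d(\mathbf{x}),
\qquad
\nabla H(\mathbf{x})^{\mathsf{T}} F_c(\mathbf{x}) = 0,
\qquad
\frac{\mathrm{d}H}{\mathrm{d}t} = \nabla H(\mathbf{x})^{\mathsf{T}} F_d(\mathbf{x}).
```

The first condition states that the conservative part does no work on H (step 1 fixes the decomposition, step 2 solves the resulting first-order PDE for H), while the second gives the energy rate along trajectories, which is what links Hamilton energy to the dynamic behavior analyzed in the examples.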
Ensemble prediction is widely used to represent the uncertainty of single deterministic Numerical Weather Prediction (NWP) caused by errors in initial conditions (ICs). The traditional Singular Vector (SV) initial perturbation method tends to capture only synoptic-scale initial uncertainty, rather than mesoscale uncertainty, in global ensemble prediction. To address this issue, a multiscale SV initial perturbation method based on the China Meteorological Administration Global Ensemble Prediction System (CMA-GEPS) is proposed to quantify multiscale initial uncertainty. The approach entails calculating multiscale SVs at different resolutions, with multiple linearized physical processes, to capture fast-growing perturbations from the mesoscale to the synoptic scale in target areas, and combining these SVs using a Gaussian sampling method with amplitude coefficients to generate initial perturbations. Following that, the energy norm, energy spectrum, and structure of the multiscale SVs and their impact on the GEPS are analyzed based on batch experiments in different seasons. The results show that the multiscale SV initial perturbations possess more energy and capture more mesoscale uncertainty than the traditional single-SV method. Meanwhile, multiscale SV initial perturbations reflect the strongest dynamical instability in target areas. Compared to single-scale SVs, their performance in global ensemble prediction is shown to (i) improve the relationship between the ensemble spread and the root-mean-square error and (ii) provide better probability forecast skill for atmospheric circulation during the late forecast period and for short- to medium-range precipitation. This study provides scientific evidence and application foundations for the design and development of a multiscale SV initial perturbation method for the GEPS.
The distribution of data has a significant impact on classification results. When the distribution of one class is insignificant compared to that of another, data imbalance occurs, leading to rising outlier values and noise; the speed and performance of classification can therefore be greatly affected. Given these problems, this paper starts from the motivation and mathematical representation of classification and puts forward a new classification method based on the relationship between different classification formulations. Combining the vector characteristics of the actual problem with the choice of matrix characteristics, we first analyze ordinal regression and introduce slack variables to solve the constraint problem of lone points. We then introduce fuzzy factors, on the basis of the support vector machine, to address the gaps between isolated points, and introduce cost control to handle sample skew. Finally, based on the bi-boundary support vector machine, a two-step weight-setting twin classifier is constructed. This helps to identify multiple tasks with feature-selected patterns without the need for additional optimizers, addressing large-scale classification problems that cannot be handled effectively when the category distribution gap is very low.
With widespread data collection and processing, privacy-preserving machine learning has become increasingly important in addressing privacy risks to individuals. The support vector machine (SVM) is one of the most elementary learning models in machine learning, and privacy issues surrounding SVM classifier training have attracted increasing attention. In this paper, we investigate differential-privacy-compliant federated machine learning with dimensionality reduction, called FedDPDR-DPML, which greatly improves data utility while providing strong privacy guarantees. In distributed learning scenarios, multiple participants usually hold unbalanced or small amounts of data. Therefore, FedDPDR-DPML enables multiple participants to collaboratively learn a global model based on weighted model averaging and knowledge aggregation, after which the server distributes the global model to each participant to improve local data utility. For high-dimensional data, we adopt differential privacy in both the principal component analysis (PCA)-based dimensionality reduction phase and the SVM classifier training phase, which improves model accuracy while achieving strict differential privacy protection. In addition, we train differentially private (DP) SVM classifiers by adding noise to the objective function itself, leading to better data utility. Extensive experiments on three high-dimensional datasets demonstrate that FedDPDR-DPML can achieve high accuracy while ensuring strong privacy protection.
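Adding noise to the objective function itself (objective perturbation, in the style of Chaudhuri et al.) can be sketched as below. This is a toy illustration, not the paper's implementation: it uses a smooth logistic loss in place of the hinge loss, and the noise calibration shown is only schematic:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_objective_perturbation(X, y, eps=1.0, lam=0.1, steps=500, lr=0.1):
    """Objective-perturbation sketch for a linear classifier; y in {-1, +1}.
    A random linear term b^T w / n is added to the regularized empirical risk."""
    n, d = X.shape
    # noise vector: direction uniform on the sphere, norm ~ Gamma(d, 2/eps)
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    b = direction * rng.gamma(shape=d, scale=2.0 / eps)
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        # gradient of mean logistic loss + L2 regularizer + noise term
        g = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)
        g += lam * w + b / n
        w -= lr * g
    return w

# toy linearly separable data
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w = dp_objective_perturbation(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

Because the perturbation enters the objective once, rather than the output or gradients repeatedly, the trained weights typically retain more utility for the same privacy budget.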
The vector vortex beam (VVB) has attracted significant attention due to its intrinsic diversity of information and has found wide application in both classical and quantum communications. However, a VVB is unavoidably affected by atmospheric turbulence (AT) when it propagates through a free-space optical communication environment, which results in detection errors at the receiver. In this paper, we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT, in which a diffractive deep neural network (DDNN) is designed and trained to classify the intensity distribution of the input distorted VVBs, and the horizontal polarization direction of the input distorted beam is adopted as the feature for classification through the DDNN. Numerical simulations and experimental results demonstrate that the proposed scheme has high accuracy in classification tasks. The energy distribution percentage remains above 95% from weak to medium AT, and the classification accuracy remains above 95% for various turbulence strengths. The scheme also exhibits faster convergence and better accuracy than one based on a convolutional neural network.
This article delves into the analysis of performance and utilization of Support Vector Machines (SVMs) for the critical task of forest fire detection using image datasets. With the increasing threat of forest fires to ecosystems and human settlements, the need for rapid and accurate detection systems is of utmost importance. SVMs, renowned for their strong classification capabilities, exhibit proficiency in recognizing patterns associated with fire within images. By training on labeled data, SVMs acquire the ability to identify distinctive attributes associated with fire, such as flames, smoke, or alterations in the visual characteristics of the forest area. The article thoroughly examines the use of SVMs, covering crucial elements like data preprocessing, feature extraction, and model training. It rigorously evaluates parameters such as accuracy, efficiency, and practical applicability. The knowledge gained from this study aids in the development of efficient forest fire detection systems, enabling prompt responses and improving disaster management. Moreover, the correlation between SVM accuracy and the difficulties presented by high-dimensional datasets is carefully investigated and demonstrated through a revealing case study. The relationship between accuracy scores and the different resolutions used for resizing the training datasets is also discussed. These comprehensive studies result in a definitive overview of the difficulties faced and the potential areas requiring further improvement and focus.
Algorithms for steganography are methods of hiding data transfers in media files. Several machine learning architectures have been presented recently to improve stego image identification performance by using spatial information, and these methods have made it feasible to handle a wide range of problems associated with image analysis. Information embedding methods use images with little information or low payload, but the goal of all contemporary research is to employ high-payload images for classification. To address the need for both low- and high-payload images, this work provides a machine learning approach to steganography image classification that uses the Curvelet transform to efficiently extract characteristics from both types of images. A support vector machine (SVM), a commonplace classification technique, is employed to determine whether an image is stego or cover. The Wavelet Obtained Weights (WOW), Spatial Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Steganography (HUGO), and Minimizing the Power of Optimal Detector (MiPOD) steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposed method. Using WOW at several payloads, the proposed approach achieves a classification accuracy of 98.60%, exhibiting superiority over state-of-the-art methods.
The turbidite channels of the South China Sea have attracted much attention. Influenced by complex faulting and rapid lithofacies phase changes, predicting the channels through conventional seismic attributes is not sufficiently accurate. In response to this disadvantage, this study used a method combining grey relational analysis (GRA) and the support vector machine (SVM) and established a set of prediction procedures suitable for reservoirs with complex geological conditions. In a case study of the Huangliu Formation in the Qiongdongnan Basin, South China Sea, this study first dimensionalized the conventional seismic attributes of Gas Layer Group I and then used the GRA method to obtain the main relational factors. A higher relational degree indicates a higher probability of responding to the attributes of the turbidite channel. The study then accumulated the optimized attributes with the highest relational factors to obtain a first-order accumulated sequence, which was used as the input training sample of the SVM model, thus successfully constructing the SVM turbidite channel model. Drilling results prove that the GRA-SVM method has a high drilling coincidence rate. Utilizing the core and logging data and making full use of the advantages of seismic inversion in predicting the sand boundaries of water channels, this study divides the sedimentary microfacies of the Huangliu Formation in the Lingshui 17-2 Gas Field. This comprehensive study shows that the GRA-SVM method has high accuracy for predicting turbidite channels and can be used as a superior prediction method under complex geological conditions.
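The GRA step above ranks seismic attributes by how closely each tracks a reference sequence. A minimal sketch of the standard grey relational coefficient and degree (with the usual resolution coefficient ρ = 0.5; the toy attribute values are illustrative):

```python
import numpy as np

def grey_relational_degree(reference, comparisons, rho=0.5):
    """Grey relational degree of each comparison sequence to a reference
    sequence (sequences assumed already normalized to comparable scales)."""
    comparisons = np.asarray(comparisons, float)
    delta = np.abs(comparisons - np.asarray(reference, float))
    dmin, dmax = delta.min(), delta.max()
    # grey relational coefficient at each point, then mean over the sequence
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean(axis=1)

ref = [0.9, 0.8, 1.0, 0.7]        # e.g. a channel-response reference curve
attrs = [
    [0.8, 0.8, 0.9, 0.7],          # close to the reference -> high degree
    [0.1, 0.3, 0.2, 0.4],          # far from the reference -> low degree
]
deg = grey_relational_degree(ref, attrs)
print(deg)  # deg[0] ≈ 0.90 is larger than deg[1]
```

Attributes with the highest degrees are then accumulated and fed to the SVM as training input, as the abstract describes.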
The perfect hybrid vector vortex beam (PHVVB) with a helical phase wavefront structure has aroused significant interest in recent years, as its beam waist does not expand with the topological charge (TC). In this work, we investigate the spatial quantum coherent modulation effect with the PHVVB based on an atomic medium, and we observe the absorption characteristics of the PHVVB with different TCs under varying magnetic fields. We find that the transmission spectrum linewidth of the PHVVB can be effectively maintained regardless of the TC, although the width of the transmission peaks increases slightly as the beam size expands in hot atomic vapor. This distinctive quantum coherence phenomenon, demonstrated by the interaction of an atomic medium with a hybrid vector-structured beam, may open up new opportunities for quantum coherence modulation and accurate magnetic field measurement.
Imbalanced datasets are common in practical applications, and oversampling methods using fuzzy rules have been shown to enhance the classification performance of imbalanced data by taking into account the relationships between data attributes. However, the creation of fuzzy rules typically depends on expert knowledge, which may not fully leverage the label information in the training data and may be subjective. To address this issue, a novel fuzzy rule oversampling approach is developed based on the learning vector quantization (LVQ) algorithm. In this method, the label information of the training data is utilized to determine the antecedent part of If-Then fuzzy rules by dynamically dividing attribute intervals using LVQ. Subsequently, fuzzy rules are generated and adjusted to calculate rule weights. The number of new samples to be synthesized for each rule is then computed, and samples from the minority class are synthesized based on the newly generated fuzzy rules, establishing a fuzzy rule oversampling method based on LVQ. To evaluate the effectiveness of this method, comparative experiments are conducted on 12 publicly available imbalanced datasets against five other sampling techniques, in combination with the support vector machine. The experimental results demonstrate that the proposed method significantly enhances the classification algorithm across seven performance indicators, including a boost of 2.15% to 12.34% in accuracy, 6.11% to 27.06% in G-mean, and 4.69% to 18.78% in AUC. These results show that the proposed method can more efficiently improve the classification performance of imbalanced data.
This paper addresses effective fault diagnosis and fault-tolerant control for aeronautics electromechanical actuators. By borrowing the advantages of model-driven and data-driven methods, a fault-tolerant nonsingular terminal sliding mode control method based on the support vector machine (SVM) is proposed. An SVM is designed to estimate the fault by off-line learning from small-sample data, solved as a convex quadratic program, and is introduced into a high-gain observer so as to improve the state estimation and fault detection accuracy when a fault occurs. The state estimate of the observer is used for state reconfiguration. A novel nonsingular terminal sliding mode surface is designed, and the Lyapunov theorem is used to derive a parameter adaptation law and a control law. The proposed controller is guaranteed to achieve asymptotic stability, which is superior to many advanced fault-tolerant controllers. In addition, the parameter estimation can also help to diagnose system faults, because faults are reflected in parameter variations. Extensive comparative simulation and experimental results illustrate the effectiveness and advancement of the proposed controller compared with several other mainstream controllers.
The ocean plays an important role in maintaining the equilibrium of Earth's ecology and providing humans access to a wealth of resources. To obtain a high-precision underwater image classification model, we propose a classification model that combines an EfficientNetB0 neural network and a two-hidden-layer random vector functional link network (EfficientNetB0-TRVFL). The features of underwater images were extracted using the EfficientNetB0 neural network pretrained on ImageNet, and a new fully connected layer was trained on the underwater image dataset using the transfer learning method. Transfer learning ensures the initial performance of the network and helps in the development of a high-precision classification model. Subsequently, a TRVFL was proposed to improve the classification property of the model. The two-hidden-layer construction exhibited high accuracy when the same number of hidden-layer nodes was used. The parameters of the second hidden layer were obtained using a novel calculation method, which reduced the outcome error and thereby the performance instability caused by the random generation of RVFL parameters. Finally, the TRVFL classifier was used to classify features and obtain classification results. The proposed EfficientNetB0-TRVFL classification model achieved 87.28%, 74.06%, and 99.59% accuracy on the MLC2008, MLC2009, and Fish-gres datasets, respectively. The best convolutional neural networks and existing methods were compared through box plots and Kolmogorov-Smirnov tests, respectively. The improvements indicate better systematization properties in underwater image classification tasks. The image classification model offers important performance advantages and better stability compared with existing methods.
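The RVFL component can be sketched in a few lines: hidden weights are drawn at random and never trained, and only the output weights are solved in closed form by ridge regression over the concatenation of the direct input link and the hidden features. This is a single-hidden-layer sketch under assumed sizes, not the paper's two-hidden-layer TRVFL:

```python
import numpy as np

rng = np.random.default_rng(1)

def rvfl_train(X, Y, hidden=50, reg=1e-3):
    """RVFL sketch: random hidden weights, closed-form output weights."""
    n, d = X.shape
    W = rng.uniform(-1, 1, (d, hidden))
    b = rng.uniform(-1, 1, hidden)
    H = np.tanh(X @ W + b)
    D = np.hstack([X, H])               # direct input link + hidden features
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    D = np.hstack([X, np.tanh(X @ W + b)])
    return D @ beta

# toy 2-class problem with one-hot targets, standing in for extracted features
X = np.vstack([rng.normal(-1, 0.3, (40, 2)), rng.normal(1, 0.3, (40, 2))])
Y = np.vstack([np.tile([1, 0], (40, 1)), np.tile([0, 1], (40, 1))])
W, b, beta = rvfl_train(X, Y)
pred = rvfl_predict(X, W, b, beta).argmax(axis=1)
acc = (pred == np.array([0] * 40 + [1] * 40)).mean()
```

The TRVFL's contribution, per the abstract, is a deterministic rule for the second hidden layer's parameters, which tames the variance that the random draw of `W` and `b` introduces here.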
High-vertical-resolution radiosonde wind data are highly valuable for describing the dynamics of the meso- and microscale atmosphere. However, the current algorithm used in China's L-band radar sounding system for calculating high-vertical-resolution wind vectors excessively smooths the data, resulting in significant underestimation of the calculated kinetic energy of gravity waves compared to similar products from other countries, which greatly limits the effective utilization of the data. To address this issue, this study proposes a novel method to calculate high-vertical-resolution wind vectors that utilizes the elevation angle, azimuth angle, and slant range from the L-band radar. In order to obtain wind data of stable quality, a two-step automatic quality control procedure, including an RMSE-F (root-mean-square error F) test and an elemental consistency test, is first applied to the slant-range data to eliminate continuous erroneous data caused by unstable signals or radar malfunctions. Then, a wind calculation scheme based on a sliding second-order polynomial fit is utilized to derive the high-vertical-resolution radiosonde wind vectors. The evaluation results demonstrate that the wind data obtained through the proposed method show a high level of consistency with the high-resolution wind data observed by the Vaisala Global Positioning System and by the new Beidou Navigation Sounding System. The kinetic energy of gravity waves calculated from the recalculated wind data also reaches a level comparable to the Vaisala observations.
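The two stages described above — balloon position from radar geometry, then wind as the derivative of a sliding second-order polynomial fit — can be sketched as follows. The flat-earth spherical-to-Cartesian mapping and the 5-point window are simplifying assumptions, not the study's exact processing:

```python
import numpy as np

def balloon_positions(elev_deg, azim_deg, slant_range):
    """Horizontal balloon position from elevation, azimuth, and slant range
    (flat-earth approximation, no refraction correction)."""
    el = np.radians(elev_deg); az = np.radians(azim_deg)
    x = slant_range * np.cos(el) * np.sin(az)   # east
    y = slant_range * np.cos(el) * np.cos(az)   # north
    return x, y

def sliding_quadratic_wind(t, pos, half=2):
    """Wind component: derivative of a second-order polynomial fitted in a
    sliding window centered on each sample (edges left as NaN)."""
    w = np.full_like(pos, np.nan, dtype=float)
    for i in range(half, len(t) - half):
        s = slice(i - half, i + half + 1)
        c = np.polyfit(t[s], pos[s], 2)         # c[0] t^2 + c[1] t + c[2]
        w[i] = 2 * c[0] * t[i] + c[1]           # d(pos)/dt at t[i]
    return w

# synthetic track drifting east at a constant 5 m/s
t = np.arange(0.0, 30.0, 1.0)
x = 5.0 * t
u = sliding_quadratic_wind(t, x)  # interior values recover ~5 m/s
```

Fitting a quadratic rather than averaging finite differences keeps real shear while still suppressing point noise, which is why it smooths less aggressively than the operational algorithm criticized in the abstract.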
Accurately estimating blasting vibration during rock blasting is the foundation of blasting vibration management. In this study, Tuna Swarm Optimization (TSO), the Whale Optimization Algorithm (WOA), and Cuckoo Search (CS) were used to optimize two hyperparameters of support vector regression (SVR). Based on these methods, three hybrid models to predict peak particle velocity (PPV) for bench blasting were developed. Eighty-eight samples were collected to establish the PPV database; eight initial blasting parameters were chosen as input parameters for the prediction model, and the PPV was the output parameter. The coefficient of determination (R²), root-mean-square error (RMSE), mean absolute error (MAE), and a10-index were selected as predictive performance evaluation indicators. The normalized mutual information value was then used to evaluate the impact of the various input parameters on the PPV prediction outcomes. According to the research findings, TSO, WOA, and CS can all enhance the predictive performance of the SVR model, with the TSO-SVR model providing the most accurate predictions. The performance of the optimized hybrid SVR models is superior to that of the unoptimized traditional prediction model. The maximum charge per delay impacts the PPV prediction value the most.
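Any of the three metaheuristics above can drive the hyperparameter search; a minimal Cuckoo Search sketch is shown here minimizing a stand-in objective (in the study this would be the SVR's cross-validated error over its two hyperparameters). The step scale, nest count, and abandonment rate are generic defaults, not the paper's settings:

```python
import math, random

random.seed(42)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma); v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, lo, hi, dim=2, nests=15, iters=200, pa=0.25):
    """Minimal Cuckoo Search minimizing f over the box [lo, hi]^dim."""
    clip = lambda x: [min(hi, max(lo, xi)) for xi in x]
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(nests)]
    fit = [f(p) for p in pop]
    best = min(range(nests), key=lambda i: fit[i])
    for _ in range(iters):
        for i in range(nests):
            # Levy flight biased toward the current best nest
            cand = clip([p + 0.01 * levy_step() * (p - pop[best][j])
                         for j, p in enumerate(pop[i])])
            fc = f(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
        # abandon a fraction pa of nests with fresh random candidates
        for i in range(nests):
            if random.random() < pa:
                cand = clip([random.uniform(lo, hi) for _ in range(dim)])
                fc = f(cand)
                if fc < fit[i]:
                    pop[i], fit[i] = cand, fc
        best = min(range(nests), key=lambda i: fit[i])
    return pop[best], fit[best]

# stand-in objective; replace with SVR cross-validation error over (C, gamma)
sphere = lambda x: sum(xi * xi for xi in x)
sol, val = cuckoo_search(sphere, -5, 5)
```

TSO and WOA slot into the same loop shape; only the candidate-generation rules differ.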
Medical imaging plays a key role within modern hospital management systems for diagnostic purposes. Compression methodologies are extensively employed to mitigate storage demands and enhance transmission speed, all while upholding image quality. Moreover, an increasing number of hospitals are embracing cloud computing for patient data storage, necessitating meticulous scrutiny of server security and privacy protocols. Nevertheless, considering the widespread availability of multimedia tools, preserving the integrity of digital data surpasses the significance of compression alone. In response to this concern, we propose a secure storage and transmission solution for compressed medical image sequences, such as ultrasound images, utilizing a motion vector watermarking scheme. The watermark is generated employing an error-correcting code known as Bose-Chaudhuri-Hocquenghem (BCH) and is subsequently embedded into the compressed sequence via block-based motion vectors. In the embedding process, motion vectors are selected based on their magnitude and phase angle; no specific spatial area, such as a region of interest (ROI), is used in the images, and the embedding of watermark bits depends only on the motion vectors. Although reversible watermarking allows restoration of the original image sequences, we use an irreversible watermarking method, because the restoration of original data or images made possible by reversible watermarks may call ownership or other legal claims into question. The peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) serve as metrics for evaluating watermarked image quality: across all images, the PSNR value exceeds 46 dB and the SSIM value exceeds 0.92. Experimental results substantiate the efficacy of the proposed technique in preserving data integrity.
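The magnitude-and-phase selection of motion vectors can be sketched as below. The thresholds, angle band, and LSB-style bit embedding are illustrative assumptions; the paper's actual selection criteria and embedding rule are not reproduced here:

```python
import math

def select_embedding_mvs(mvs, mag_thresh=2.0, angle_band=(30.0, 60.0)):
    """Pick candidate motion vectors by magnitude and phase angle
    (thresholds here are illustrative, not the paper's values)."""
    chosen = []
    for idx, (dx, dy) in enumerate(mvs):
        mag = math.hypot(dx, dy)
        ang = abs(math.degrees(math.atan2(dy, dx)))
        if mag >= mag_thresh and angle_band[0] <= ang <= angle_band[1]:
            chosen.append(idx)
    return chosen

def embed_bits(mvs, indices, bits):
    """Embed one watermark bit per selected MV in the LSB of its x component."""
    out = list(mvs)
    for i, bit in zip(indices, bits):
        dx, dy = out[i]
        out[i] = ((dx & ~1) | bit, dy)
    return out

mvs = [(3, 3), (1, 0), (-4, 4), (0, 5)]
idx = select_embedding_mvs(mvs)      # only (3, 3) passes both criteria
stego = embed_bits(mvs, idx, [0])    # stego[0] becomes (2, 3)
```

Selecting only large, mid-angle vectors concentrates the perturbation where motion is already strong, which helps keep the PSNR/SSIM impact small.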
The selection of important factors in machine learning-based susceptibility assessments is crucial for obtaining reliable susceptibility results. In this study, metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City, Hebei Province, China, using machine learning algorithms. In total, 133 historical debris flow records and 16 related factors were selected. The support vector machine (SVM) was first used as the base classifier, and then a hybrid model was introduced via a two-step process. First, the particle swarm optimization (PSO) algorithm was employed to select the SVM model hyperparameters. Second, two feature selection algorithms, namely principal component analysis (PCA) and PSO, were integrated into the PSO-based SVM model, generating the PCA-PSO-SVM and FS-PSO-SVM models, respectively. Three statistical metrics (accuracy, recall, and specificity) and the area under the receiver operating characteristic curve (AUC) were employed to evaluate and validate the performance of the models. The results indicated that the feature selection-based models exhibited the best performance, followed by the PSO-based SVM and SVM models. Moreover, the FS-PSO-SVM model outperformed the PCA-PSO-SVM model, showing the highest AUC, accuracy, recall, and specificity values in both the training and testing processes. The selection of optimal features is thus crucial to improving the reliability of debris flow susceptibility assessment results. Moreover, the PSO algorithm proved to be not only an effective tool for hyperparameter optimization, but also a useful feature selection algorithm for improving the prediction accuracy of debris flow susceptibility with machine learning algorithms. The high and very high debris flow susceptibility zones cover approximately 38.01% of the study area, where debris flow may occur under intensive human activities and heavy rainfall events.
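Using PSO as a feature selector, as in the FS-PSO-SVM model, means searching over binary feature masks. A minimal binary-PSO sketch with a toy fitness function (standing in for the SVM's cross-validated accuracy; particle counts and coefficients are generic defaults, not the study's):

```python
import math, random

random.seed(7)

def bpso_feature_select(score, n_feats, particles=10, iters=30):
    """Binary PSO sketch: each particle is a feature mask; `score` returns a
    fitness to maximize (e.g. cross-validated classifier accuracy)."""
    sig = lambda v: 1 / (1 + math.exp(-v))
    pos = [[random.randint(0, 1) for _ in range(n_feats)] for _ in range(particles)]
    vel = [[0.0] * n_feats for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pfit = [score(p) for p in pos]
    g = max(range(particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):
        for i in range(particles):
            for j in range(n_feats):
                vel[i][j] += (2 * random.random() * (pbest[i][j] - pos[i][j])
                              + 2 * random.random() * (gbest[j] - pos[i][j]))
                # sigmoid of velocity gives the probability of the bit being 1
                pos[i][j] = 1 if random.random() < sig(vel[i][j]) else 0
            f = score(pos[i])
            if f > pfit[i]:
                pbest[i], pfit[i] = pos[i][:], f
                if f > gfit:
                    gbest, gfit = pos[i][:], f
    return gbest, gfit

# toy fitness: features 0 and 2 are informative, every extra feature costs 0.1
toy = lambda mask: mask[0] + mask[2] - 0.1 * sum(mask)
mask, fit = bpso_feature_select(toy, n_feats=6)
```

The same swarm machinery used for hyperparameter tuning thus doubles as a wrapper-style feature selector, which is the dual role the abstract highlights.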
The application of vector magnetometry based on nitrogen-vacancy (NV) ensembles has been widely investigated in multiple areas. It has the advantages of high sensitivity and high stability under ambient conditions with microscale spatial resolution. However, a bias magnetic field is necessary to fully separate the resonance lines of the optically detected magnetic resonance (ODMR) spectrum of NV ensembles; this disturbs the samples being detected and limits the range of application. Here, we demonstrate a method of vector magnetometry in zero bias magnetic field using NV ensembles. By utilizing the anisotropy of the fluorescence excited from NV centers, we analyzed the ODMR spectrum of NV ensembles under various polarization angles of the excitation laser in zero bias magnetic field with a quantitative numerical model and reconstructed the magnetic field vector. The minimum magnetic field modulus that can be resolved accurately is down to ~0.64 G theoretically, depending on the ODMR spectral linewidth (1.8 MHz), and ~2 G experimentally, due to noise in the fluorescence signals and errors in calibration. By using 13C-purified, low-nitrogen-concentration diamond combined with improved calibration of unknown parameters, the ODMR spectral linewidth can be further decreased below 0.5 MHz, corresponding to a ~0.18 G minimum resolvable magnetic field modulus.
A naïve discussion of Fermat’s last theorem conundrum is described. The present theorem’s proof is grounded on the well-known properties of sums of powers of the sine and cosine functions, the Minkowski norm definition, and some vector-specific structures.
Funding: the National Natural Science Foundation of China (Grant Nos. 62272478, 62202496, and 61872384).
文摘 Abstract: Among steganalysis techniques, detection of motion vector (MV) domain-based video steganography in the HEVC (High Efficiency Video Coding) standard remains a challenging issue. To improve detection performance, this paper proposes a steganalysis method that can effectively detect MV-based steganography in HEVC. Firstly, we define the local optimality of the motion vector prediction (MVP) based on the Advanced Motion Vector Prediction (AMVP) technology. Secondly, we show that in HEVC video, message embedding using either the MVP index or the motion vector difference (MVD) may destroy this optimality of the MVP. We then define the optimal rate of the MVP as a steganalysis feature. Finally, we conduct detection experiments on two general datasets against three popular steganography methods and compare the performance with four state-of-the-art steganalysis methods. The experimental results demonstrate the effectiveness of the proposed feature set. Furthermore, our method stands out for its practical applicability: it requires no model training and exhibits low computational complexity, making it a viable solution for real-world scenarios.
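The "optimal rate of MVP" feature described above can be pictured as the fraction of prediction units whose signalled predictor minimises a coding-cost measure over the AMVP candidate list. A minimal sketch under assumed data structures — the unit tuples and the L1 rate proxy here are illustrative, not the paper's exact definitions:

```python
def mvd_cost(mv, pred):
    """Rate proxy: L1 magnitude of the motion vector difference."""
    return abs(mv[0] - pred[0]) + abs(mv[1] - pred[1])

def mvp_optimal_rate(units):
    """Fraction of prediction units whose chosen MVP attains the minimum
    MVD coding cost over the AMVP candidate list.  Each unit is a tuple
    (mv, chosen_mvp, candidate_list) -- a hypothetical layout."""
    optimal = sum(
        1 for mv, chosen, cands in units
        if mvd_cost(mv, chosen) <= min(mvd_cost(mv, c) for c in cands)
    )
    return optimal / len(units)
```

Embedding that flips MVP indices or perturbs MVDs tends to lower this rate in stego video, which is what makes it usable as a training-free detection feature.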
文摘 Abstract: Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially in the event of a noisy or unreliable satellite signal. Toward this end, we present a novel framework for aircraft geo-localization over a large range that only requires a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed for the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce the computing expense of a huge number of initial particles, and a credibility indicator is designed to avoid location mistakes in the particle filter stage. An experimental evaluation of the approach based on flight data is provided. On a flight at a height of 0.2 km over a flight distance of 2 km, the aircraft is geo-localized in a reference map of 11,025 km² using 0.09 km² aerial images without any prior information. The absolute localization error is less than 10 m.
Funding: the National Natural Science Foundation of China (Grant Nos. 12305054, 12172340, and 12371506).
文摘 Abstract: Hamilton energy, which reflects the energy variation of a system, is one of the crucial instruments used to analyze the characteristics of dynamical systems. Here we propose a method to deduce the Hamilton energy of existing systems. The derivation process consists of three steps: step 1, decomposing the vector field; step 2, solving the Hamilton energy function; and step 3, verifying uniqueness. To make it easy to choose an appropriate decomposition method, we propose a classification criterion based on the form of the system state variables, i.e., type-I vector fields that can be directly decomposed and type-II vector fields decomposed via exterior differentiation. Moreover, exterior differentiation is used to represent the curl of low- and high-dimensional vector fields in the decomposition process. Finally, we derive the Hamilton energy functions of six classical systems and analyze the relationship between Hamilton energy and dynamic behavior. This solution provides a new approach for deducing the Hamilton energy function, especially for high-dimensional systems.
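The three-step derivation can be summarised with the generalized-Hamiltonian form commonly used in this literature; the following is a sketch consistent with the abstract's step 1 and step 2, with notation assumed rather than quoted from the paper:

```latex
% Step 1: decompose the vector field into conservative and dissipative parts
\dot{\mathbf{x}} = F(\mathbf{x}) = F_c(\mathbf{x}) + F_d(\mathbf{x}),
\qquad F_c = J(\mathbf{x})\,\nabla H, \quad F_d = R(\mathbf{x})\,\nabla H,
% with J(x) skew-symmetric and R(x) symmetric.
% Step 2: solve for H from the orthogonality condition on the conservative part
\nabla H^{\top} F_c(\mathbf{x}) = 0,
% so that the energy evolves only through the dissipative part:
\dot{H} = \nabla H^{\top} F_d(\mathbf{x}).
```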
Funding: supported by the Joint Funds of the National Natural Science Foundation of China (NSFC) (Grant No. U2242213), the National Key Research and Development (R&D) Program of the Ministry of Science and Technology of China (Grant No. 2021YFC3000902), and the National Science Foundation for Young Scholars (Grant No. 42205166).
文摘 Abstract: Ensemble prediction is widely used to represent the uncertainty of single deterministic Numerical Weather Prediction (NWP) caused by errors in initial conditions (ICs). The traditional Singular Vector (SV) initial perturbation method tends only to capture synoptic-scale initial uncertainty rather than mesoscale uncertainty in global ensemble prediction. To address this issue, a multiscale SV initial perturbation method based on the China Meteorological Administration Global Ensemble Prediction System (CMA-GEPS) is proposed to quantify multiscale initial uncertainty. The multiscale SV initial perturbation approach entails calculating multiscale SVs at different resolutions with multiple linearized physical processes to capture fast-growing perturbations from the mesoscale to the synoptic scale in target areas, and combining these SVs by using a Gaussian sampling method with amplitude coefficients to generate initial perturbations. Following that, the energy norm, energy spectrum, and structure of multiscale SVs and their impact on GEPS are analyzed based on a batch experiment in different seasons. The results show that the multiscale SV initial perturbations can possess more energy and capture more mesoscale uncertainties than the traditional single-SV method. Meanwhile, multiscale SV initial perturbations can reflect the strongest dynamical instability in target areas. Their performances in global ensemble prediction, when compared to single-scale SVs, are shown to (i) improve the relationship between the ensemble spread and the root-mean-square error and (ii) provide a better probability forecast skill for atmospheric circulation during the late forecast period and for short- to medium-range precipitation. This study provides scientific evidence and application foundations for the design and development of a multiscale SV initial perturbation method for the GEPS.
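The SV-combination step (Gaussian sampling with amplitude coefficients) amounts to a random linear mixture of the singular vectors. A minimal sketch, assuming each SV is given as a flat list of grid values and `amplitude` is the scaling coefficient:

```python
import random

def combine_svs(svs, amplitude, seed=0):
    """Form one initial perturbation as a Gaussian-weighted sum of
    singular vectors -- a sketch of the Gaussian-sampling combination
    step, not the operational CMA-GEPS implementation."""
    rng = random.Random(seed)
    coeffs = [rng.gauss(0.0, 1.0) for _ in svs]  # c_i ~ N(0, 1)
    n = len(svs[0])
    return [amplitude * sum(c * sv[i] for c, sv in zip(coeffs, svs))
            for i in range(n)]
```

In an ensemble system, each member would draw its own coefficient set (its own seed) so that members sample different directions of the fast-growing subspace.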
Funding: Hebei Province Key Research and Development Project (No. 20313701D); Hebei Province Key Research and Development Project (No. 19210404D); Mobile Computing and Universal Equipment Beijing Key Laboratory Open Project; The National Social Science Fund of China (17AJL014); Beijing University of Posts and Telecommunications Construction of World-Class Disciplines and Characteristic Development Guidance Special Fund "Cultural Inheritance and Innovation" Project (No. 505019221); National Natural Science Foundation of China (No. U1536112); National Natural Science Foundation of China (No. 81673697); National Natural Science Foundation of China (61872046); The National Social Science Fund Key Project of China (No. 17AJL014); "Blue Fire Project" (Huizhou) University of Technology Joint Innovation Project (CXZJHZ201729); Industry-University Cooperation Cooperative Education Project of the Ministry of Education (Nos. 201902218004, 201902024006, and 201901197007); Industry-University Cooperation Collaborative Education Project of the Ministry of Education (Nos. 201901199005 and 201901197001); Shijiazhuang Science and Technology Plan Project (236240267A); Hebei Province Key Research and Development Plan Project (20312701D).
文摘 Abstract: The distribution of data has a significant impact on the results of classification. When the distribution of one class is insignificant compared to that of another, data imbalance occurs, which gives rise to outliers and noise and can thus greatly affect the speed and performance of classification. Given these problems, this paper starts from the motivation and mathematical representation of classification and puts forward a new classification method based on the relationships between different classification formulations. Combining the vector characteristics of the actual problem with the choice of matrix characteristics, we first analyze ordinal regression and introduce slack variables to handle the constraint problem of lone points. We then introduce fuzzy factors, on the basis of the support vector machine, to address the gaps between isolated points, and introduce cost control to address sample skew. Finally, based on the bi-boundary support vector machine, a two-step weight-setting twin classifier is constructed. This helps to identify multiple tasks with feature-selected patterns without the need for additional optimizers, addressing large-scale classification problems with a very low category distribution gap.
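The slack-variable formulation mentioned above can be illustrated with a plain soft-margin SVM trained by sub-gradient descent. This is a generic sketch of that standard model, not the authors' twin classifier:

```python
import random

def train_soft_margin_svm(data, C=1.0, lr=0.01, epochs=200, seed=0):
    """Sub-gradient descent on the soft-margin SVM objective
    0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.x_i + b)); the hinge terms
    play the role of the slack variables xi_i in the constrained form."""
    rng = random.Random(seed)
    data = list(data)
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            active = margin < 1  # slack positive: point violates the margin
            for i in range(dim):
                grad = w[i] - (C * y * x[i] if active else 0.0)
                w[i] -= lr * grad
            if active:
                b += lr * C * y
    return w, b

def svm_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

The cost-control idea from the abstract corresponds to using class-dependent values of `C`, so that errors on the minority class are penalised more heavily.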
Funding: supported in part by the National Natural Science Foundation of China (Nos. 62102311, 62202377, and 62272385); in part by the Natural Science Basic Research Program of Shaanxi (Nos. 2022JQ-600, 2022JM-353, and 2023-JC-QN-0327); in part by the Shaanxi Distinguished Youth Project (No. 2022JC-47); in part by the Scientific Research Program Funded by Shaanxi Provincial Education Department (No. 22JK0560); in part by Distinguished Youth Talents of Shaanxi Universities; and in part by the Youth Innovation Team of Shaanxi Universities.
文摘 Abstract: With widespread data collection and processing, privacy-preserving machine learning has become increasingly important in addressing privacy risks to individuals. The support vector machine (SVM) is one of the most elementary learning models in machine learning, and privacy issues surrounding SVM classifier training have attracted increasing attention. In this paper, we investigate Differential Privacy-compliant Federated Machine Learning with Dimensionality Reduction, called FedDPDR-DPML, which greatly improves data utility while providing strong privacy guarantees. In distributed learning scenarios, multiple participants usually hold unbalanced or small amounts of data. Therefore, FedDPDR-DPML enables multiple participants to collaboratively learn a global model based on weighted model averaging and knowledge aggregation, and the server then distributes the global model to each participant to improve local data utility. For high-dimensional data, we adopt differential privacy in both the principal component analysis (PCA)-based dimensionality reduction phase and the SVM classifier training phase, which improves model accuracy while achieving strict differential privacy protection. Besides, we train differential privacy (DP)-compliant SVM classifiers by adding noise to the objective function itself, leading to better data utility. Extensive experiments on three high-dimensional datasets demonstrate that FedDPDR-DPML can achieve high accuracy while ensuring strong privacy protection.
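As background on the privacy building block: the simplest ε-DP primitive is the Laplace mechanism. The paper perturbs the PCA step and the SVM objective itself, which are more involved variants of the same idea; the following is only a minimal sketch of the basic mechanism:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) from a single uniform draw via the
    inverse-CDF construction."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_release(value, sensitivity, epsilon, rng):
    """Release value + Laplace(sensitivity/epsilon) noise: the basic
    epsilon-differentially-private Laplace mechanism."""
    return value + laplace_noise(sensitivity / epsilon, rng)
```

Larger `epsilon` (a weaker privacy guarantee) shrinks the noise scale, which is the utility/privacy trade-off the paper's experiments measure.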
基金Project supported by the National Natural Science Foundation of China(Grant Nos.62375140 and 62001249)the Open Research Fund of National Laboratory of Solid State Microstructures(Grant No.M36055).
文摘The vector vortex beam(VVB)has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications.However,a VVB is unavoidably affected by atmospheric turbulence(AT)when it propagates through the free-space optical communication environment,which results in detection errors at the receiver.In this paper,we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT,where a diffractive deep neural network(DDNN)is designed and trained to classify the intensity distribution of the input distorted VVBs,and the horizontal direction of polarization of the input distorted beam is adopted as the feature for the classification through the DDNN.The numerical simulations and experimental results demonstrate that the proposed scheme has high accuracy in classification tasks.The energy distribution percentage remains above 95%from weak to medium AT,and the classification accuracy can remain above 95%for various strengths of turbulence.It has a faster convergence and better accuracy than that based on a convolutional neural network.
文摘This article delves into the analysis of performance and utilization of Support Vector Machines (SVMs) for the critical task of forest fire detection using image datasets. With the increasing threat of forest fires to ecosystems and human settlements, the need for rapid and accurate detection systems is of utmost importance. SVMs, renowned for their strong classification capabilities, exhibit proficiency in recognizing patterns associated with fire within images. By training on labeled data, SVMs acquire the ability to identify distinctive attributes associated with fire, such as flames, smoke, or alterations in the visual characteristics of the forest area. The document thoroughly examines the use of SVMs, covering crucial elements like data preprocessing, feature extraction, and model training. It rigorously evaluates parameters such as accuracy, efficiency, and practical applicability. The knowledge gained from this study aids in the development of efficient forest fire detection systems, enabling prompt responses and improving disaster management. Moreover, the correlation between SVM accuracy and the difficulties presented by high-dimensional datasets is carefully investigated, demonstrated through a revealing case study. The relationship between accuracy scores and the different resolutions used for resizing the training datasets has also been discussed in this article. These comprehensive studies result in a definitive overview of the difficulties faced and the potential sectors requiring further improvement and focus.
基金financially supported by the Deanship of Scientific Research at King Khalid University under Research Grant Number(R.G.P.2/549/44).
文摘Algorithms for steganography are methods of hiding data transfers in media files.Several machine learning architectures have been presented recently to improve stego image identification performance by using spatial information,and these methods have made it feasible to handle a wide range of problems associated with image analysis.Images with little information or low payload are used by information embedding methods,but the goal of all contemporary research is to employ high-payload images for classification.To address the need for both low-and high-payload images,this work provides a machine-learning approach to steganography image classification that uses Curvelet transformation to efficiently extract characteristics from both type of images.Support Vector Machine(SVM),a commonplace classification technique,has been employed to determine whether the image is a stego or cover.The Wavelet Obtained Weights(WOW),Spatial Universal Wavelet Relative Distortion(S-UNIWARD),Highly Undetectable Steganography(HUGO),and Minimizing the Power of Optimal Detector(MiPOD)steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposedmethod.Using WOW at several payloads,the proposed approach proves its classification accuracy of 98.60%.It exhibits its superiority over SOTA methods.
基金grateful for Science and Technology Innovation Ability Cultivation Project of Hebei Provincial Planning for College and Middle School Students(22E50590D)Priority Research Project of Langfang Education Sciences Planning(JCJY202130).
文摘The turbidite channel of South China Sea has been highly concerned.Influenced by the complex fault and the rapid phase change of lithofacies,predicting the channel through conventional seismic attributes is not accurate enough.In response to this disadvantage,this study used a method combining grey relational analysis(GRA)and support vectormachine(SVM)and established a set of prediction technical procedures suitable for reservoirs with complex geological conditions.In the case study of the Huangliu Formation in Qiongdongnan Basin,South China Sea,this study first dimensionalized the conventional seismic attributes of Gas Layer Group I and then used the GRA method to obtain the main relational factors.A higher relational degree indicates a higher probability of responding to the attributes of the turbidite channel.This study then accumulated the optimized attributes with the highest relational factors to obtain a first-order accumulated sequence,which was used as the input training sample of the SVM model,thus successfully constructing the SVM turbidite channel model.Drilling results prove that the GRA-SVMmethod has a high drilling coincidence rate.Utilizing the core and logging data and taking full use of the advantages of seismic inversion in predicting the sand boundary of water channels,this study divides the sedimentary microfacies of the Huangliu Formation in the Lingshui 17-2 Gas Field.This comprehensive study has shown that the GRA-SVM method has high accuracy for predicting turbidite channels and can be used as a superior turbidite channel prediction method under complex geological conditions.
基金Project supported by the Youth Innovation Promotion Association CASState Key Laboratory of Transient Optics and Photonics Open Topics (Grant No. SKLST202222)
文摘The perfect hybrid vector vortex beam(PHVVB)with helical phase wavefront structure has aroused significant concern in recent years,as its beam waist does not expand with the topological charge(TC).In this work,we investigate the spatial quantum coherent modulation effect with PHVVB based on the atomic medium,and we observe the absorption characteristic of the PHVVB with different TCs under variant magnetic fields.We find that the transmission spectrum linewidth of PHVVB can be effectively maintained regardless of the TC.Still,the width of transmission peaks increases slightly as the beam size expands in hot atomic vapor.This distinctive quantum coherence phenomenon,demonstrated by the interaction of an atomic medium with a hybrid vector-structured beam,might be anticipated to open up new opportunities for quantum coherence modulation and accurate magnetic field measurement.
基金funded by the National Science Foundation of China(62006068)Hebei Natural Science Foundation(A2021402008),Natural Science Foundation of Scientific Research Project of Higher Education in Hebei Province(ZD2020185,QN2020188)333 Talent Supported Project of Hebei Province(C20221026).
文摘Imbalanced datasets are common in practical applications,and oversampling methods using fuzzy rules have been shown to enhance the classification performance of imbalanced data by taking into account the relationship between data attributes.However,the creation of fuzzy rules typically depends on expert knowledge,which may not fully leverage the label information in training data and may be subjective.To address this issue,a novel fuzzy rule oversampling approach is developed based on the learning vector quantization(LVQ)algorithm.In this method,the label information of the training data is utilized to determine the antecedent part of If-Then fuzzy rules by dynamically dividing attribute intervals using LVQ.Subsequently,fuzzy rules are generated and adjusted to calculate rule weights.The number of new samples to be synthesized for each rule is then computed,and samples from the minority class are synthesized based on the newly generated fuzzy rules.This results in the establishment of a fuzzy rule oversampling method based on LVQ.To evaluate the effectiveness of this method,comparative experiments are conducted on 12 publicly available imbalance datasets with five other sampling techniques in combination with the support function machine.The experimental results demonstrate that the proposed method can significantly enhance the classification algorithm across seven performance indicators,including a boost of 2.15%to 12.34%in Accuracy,6.11%to 27.06%in G-mean,and 4.69%to 18.78%in AUC.These show that the proposed method is capable of more efficiently improving the classification performance of imbalanced data.
基金Supported by National Natural Science Foundation of China (Grant No.51975294)Fundamental Research Funds for the Central Universities of China (Grant No.30922010706)。
文摘Effective fault diagnosis and fault-tolerant control method for aeronautics electromechanical actuator is concerned in this paper.By borrowing the advantages of model-driven and data-driven methods,a fault tolerant nonsingular terminal sliding mode control method based on support vector machine(SVM)is proposed.A SVM is designed to estimate the fault by off-line learning from small sample data with solving convex quadratic programming method and is introduced into a high-gain observer,so as to improve the state estimation and fault detection accuracy when the fault occurs.The state estimation value of the observer is used for state reconfiguration.A novel nonsingular terminal sliding mode surface is designed,and Lyapunov theorem is used to derive a parameter adaptation law and a control law.It is guaranteed that the proposed controller can achieve asymptotical stability which is superior to many advanced fault-tolerant controllers.In addition,the parameter estimation also can help to diagnose the system faults because the faults can be reflected by the parameters variation.Extensive comparative simulation and experimental results illustrate the effectiveness and advancement of the proposed controller compared with several other main-stream controllers.
基金support of the National Key R&D Program of China(No.2022YFC2803903)the Key R&D Program of Zhejiang Province(No.2021C03013)the Zhejiang Provincial Natural Science Foundation of China(No.LZ20F020003).
文摘The ocean plays an important role in maintaining the equilibrium of Earth’s ecology and providing humans access to a wealth of resources.To obtain a high-precision underwater image classification model,we propose a classification model that combines an EfficientnetB0 neural network and a two-hidden-layer random vector functional link network(EfficientnetB0-TRVFL).The features of underwater images were extracted using the EfficientnetB0 neural network pretrained via ImageNet,and a new fully connected layer was trained on the underwater image dataset using the transfer learning method.Transfer learning ensures the initial performance of the network and helps in the development of a high-precision classification model.Subsequently,a TRVFL was proposed to improve the classification property of the model.Net construction of the two hidden layers exhibited a high accuracy when the same hidden layer nodes were used.The parameters of the second hidden layer were obtained using a novel calculation method,which reduced the outcome error to improve the performance instability caused by the random generation of parameters of RVFL.Finally,the TRVFL classifier was used to classify features and obtain classification results.The proposed EfficientnetB0-TRVFL classification model achieved 87.28%,74.06%,and 99.59%accuracy on the MLC2008,MLC2009,and Fish-gres datasets,respectively.The best convolutional neural networks and existing methods were stacked up through box plots and Kolmogorov-Smirnov tests,respectively.The increases imply improved systematization properties in underwater image classification tasks.The image classification model offers important performance advantages and better stability compared with existing methods.
基金funded by an NSFC Major Project (Grant No. 42090033)the China Meteorological Administration Youth Innovation Team “High-Value Climate Change Data Product Development and Application Services”(Grant No. CMA2023QN08)the National Meteorological Information Centre Surplus Funds Program (Grant NMICJY202310)。
文摘High-vertical-resolution radiosonde wind data are highly valuable for describing the dynamics of the meso-and microscale atmosphere. However, the current algorithm used in China's L-band radar sounding system for calculating highvertical-resolution wind vectors excessively smooths the data, resulting in significant underestimation of the calculated kinetic energy of gravity waves compared to similar products from other countries, which greatly limits the effective utilization of the data. To address this issue, this study proposes a novel method to calculate high-vertical-resolution wind vectors that utilizes the elevation angle, azimuth angle, and slant range from L-band radar. In order to obtain wind data with a stable quality, a two-step automatic quality control procedure, including the RMSE-F(root-mean-square error F) test and elemental consistency test are first applied to the slant range data, to eliminate continuous erroneous data caused by unstable signals or radar malfunctions. Then, a wind calculation scheme based on a sliding second-order polynomial fitting is utilized to derive the high-vertical-resolution radiosonde wind vectors. The evaluation results demonstrate that the wind data obtained through the proposed method show a high level of consistency with the high-resolution wind data observed using the Vaisala Global Positioning System and the data observed by the new Beidou Navigation Sounding System. The calculation of the kinetic energy of gravity waves in the recalculated wind data also reaches a level comparable to the Vaisala observations.
Funding: financially supported by the National Natural Science Foundation of China (Grant No. 42072309); the Fundamental Research Funds for National Universities, China University of Geosciences (Wuhan) (Grant No. CUGDCJJ202217); the Knowledge Innovation Program of Wuhan-Basic Research (Grant No. 2022020801010199); and the Hubei Key Laboratory of Blasting Engineering Foundation (HKLBEF202002).
文摘 Abstract: Accurately estimating blasting vibration during rock blasting is the foundation of blasting vibration management. In this study, Tuna Swarm Optimization (TSO), the Whale Optimization Algorithm (WOA), and Cuckoo Search (CS) were used to optimize two hyperparameters in support vector regression (SVR). Based on these methods, three hybrid models to predict peak particle velocity (PPV) for bench blasting were developed. Eighty-eight samples were collected to establish the PPV database; eight initial blasting parameters were chosen as input parameters for the prediction model, and the PPV was the output parameter. The coefficient of determination (R²), root-mean-square error (RMSE), mean absolute error (MAE), and a10-index were selected as predictive performance evaluation indicators. The normalized mutual information value was then used to evaluate the impact of the various input parameters on the PPV prediction outcomes. According to the research findings, TSO, WOA, and CS can all enhance the predictive performance of the SVR model, with the TSO-SVR model providing the most accurate predictions. The performance of the optimized hybrid SVR models is superior to that of the unoptimized traditional prediction model. The maximum charge per delay affects the PPV prediction value the most.
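TSO, WOA, and CS are population metaheuristics; the role they play here, searching a 2-D hyperparameter box for the best SVR score, can be sketched with plain random search as the simplest stand-in:

```python
import random

def random_search(objective, bounds, iters=2000, seed=0):
    """Minimise a black-box objective over box bounds by uniform random
    sampling -- a stand-in for TSO/WOA/CS tuning the two SVR
    hyperparameters (e.g. C and gamma; the pair is an assumption)."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f
```

In the paper the objective would be a cross-validated SVR error such as RMSE over the 88-sample PPV database; here any callable works.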
Funding: supported by the Yayasan Universiti Teknologi PETRONAS Grants, YUTP-PRG (015PBC-027) and YUTP-FRG (015LC0-311); Hilmi Hasan, www.utp.edu.my.
文摘 Abstract: Medical imaging plays a key role within modern hospital management systems for diagnostic purposes. Compression methodologies are extensively employed to mitigate storage demands and enhance transmission speed, all while upholding image quality. Moreover, an increasing number of hospitals are embracing cloud computing for patient data storage, necessitating meticulous scrutiny of server security and privacy protocols. Nevertheless, considering the widespread availability of multimedia tools, the preservation of digital data integrity surpasses the significance of compression alone. In response to this concern, we propose a secure storage and transmission solution for compressed medical image sequences, such as ultrasound images, utilizing a motion vector watermarking scheme. The watermark is generated employing an error-correcting code known as Bose-Chaudhuri-Hocquenghem (BCH) and is subsequently embedded into the compressed sequence via block-based motion vectors. In the watermark embedding process, motion vectors are selected based on their magnitude and phase angle. No specific spatial area, such as a region of interest (ROI), is used in the images; the embedding of watermark bits depends only on the motion vectors. Although reversible watermarking allows the restoration of the original image sequences, we use an irreversible watermarking method, because reversible watermarks may impede claims of ownership and legal rights: the restoration of original data or images may call such claims into question. The peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) serve as metrics for evaluating watermarked image quality. Across all images, the PSNR value exceeds 46 dB and the SSIM value exceeds 0.92. Experimental results substantiate the efficacy of the proposed technique in preserving data integrity.
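The magnitude-based selection and embedding can be pictured with a toy scheme: pick motion vectors above a magnitude threshold and hide one bit in the LSB of the horizontal component. This is a simplified stand-in (no BCH coding, no phase-angle test, illustrative threshold), not the paper's exact scheme:

```python
def embed_bits(mvs, bits, mag_threshold=4):
    """Embed watermark bits into the LSB of vx for motion vectors whose
    magnitude reaches the threshold; other vectors pass through untouched."""
    out, k = [], 0
    for vx, vy in mvs:
        if k < len(bits) and vx * vx + vy * vy >= mag_threshold ** 2:
            vx = (vx & ~1) | bits[k]  # overwrite the LSB with the bit
            k += 1
        out.append((vx, vy))
    return out

def extract_bits(mvs, n, mag_threshold=4):
    """Recover the first n embedded bits by revisiting qualifying vectors."""
    bits = []
    for vx, vy in mvs:
        if len(bits) < n and vx * vx + vy * vy >= mag_threshold ** 2:
            bits.append(vx & 1)
    return bits
```

Note that flipping an LSB can itself move a borderline vector across the threshold, so a practical scheme must keep the selection rule invariant under embedding.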
Funding: supported by the Second Tibetan Plateau Scientific Expedition and Research Program (Grant No. 2019QZKK0904); the Natural Science Foundation of Hebei Province (Grant No. D2022403032); and the S&T Program of Hebei (Grant No. E2021403001).
文摘 Abstract: The selection of important factors in machine learning-based susceptibility assessments is crucial to obtaining reliable susceptibility results. In this study, metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City in Hebei Province, China, using machine learning algorithms. In total, 133 historical debris flow records and 16 related factors were selected. The support vector machine (SVM) was first used as the base classifier, and then a hybrid model was introduced in a two-step process. First, the particle swarm optimization (PSO) algorithm was employed to select the SVM model hyperparameters. Second, two feature selection algorithms, namely principal component analysis (PCA) and PSO, were integrated into the PSO-based SVM model, generating the PCA-PSO-SVM and FS-PSO-SVM models, respectively. Three statistical metrics (accuracy, recall, and specificity) and the area under the receiver operating characteristic curve (AUC) were employed to evaluate and validate the performance of the models. The results indicated that the feature selection-based models exhibited the best performance, followed by the PSO-based SVM and SVM models. Moreover, the performance of the FS-PSO-SVM model was better than that of the PCA-PSO-SVM model, showing the highest AUC, accuracy, recall, and specificity values in both the training and testing processes. It was found that the selection of optimal features is crucial to improving the reliability of debris flow susceptibility assessment results. Moreover, the PSO algorithm was found to be not only an effective tool for hyperparameter optimization, but also a useful feature selection algorithm for improving the prediction accuracy of debris flow susceptibility with machine learning algorithms. The high and very high debris flow susceptibility zones cover approximately 38.01% of the study area, where debris flow may occur under intensive human activities and heavy rainfall events.
Funding: supported by the National Key R&D Program of China (Grant Nos. 2021YFB3202800 and 2023YF0718400); the Chinese Academy of Sciences (Grant No. ZDZBGCH2021002); the Chinese Academy of Sciences (Grant No. GJJSTD20200001); the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0303204); the Anhui Initiative in Quantum Information Technologies; the USTC Tang Scholar program; and the Fundamental Research Funds for the Central Universities.
文摘 Abstract: The application of vector magnetometry based on nitrogen-vacancy (NV) ensembles has been widely investigated in multiple areas. It offers high sensitivity and high stability under ambient conditions with microscale spatial resolution. However, a bias magnetic field is necessary to fully separate the resonance lines of the optically detected magnetic resonance (ODMR) spectrum of NV ensembles, which disturbs the samples being detected and limits the range of application. Here, we demonstrate a method of vector magnetometry in zero bias magnetic field using NV ensembles. By utilizing the anisotropy of the fluorescence excited from NV centers, we analyzed the ODMR spectrum of NV ensembles under various polarization angles of the excitation laser in zero bias magnetic field with a quantitative numerical model and reconstructed the magnetic field vector. The minimum magnetic field modulus that can be resolved accurately is down to ~0.64 G theoretically, depending on the ODMR spectral line width (1.8 MHz), and ~2 G experimentally due to noise in the fluorescence signals and errors in calibration. By using ¹³C-purified, low-nitrogen-concentration diamond combined with improved calibration of unknown parameters, the ODMR spectral line width can be further decreased below 0.5 MHz, corresponding to a ~0.18 G minimum resolvable magnetic field modulus.
文摘 Abstract: A naïve discussion of Fermat's last theorem conundrum is described. The present theorem's proof is grounded on the well-known properties of sums of powers of the sine and cosine functions, the Minkowski norm definition, and some vector-specific structures.