The quantum singular value thresholding (QSVT) algorithm, a core module of many mathematical models, seeks the singular values of a sparse, low-rank matrix that exceed a threshold, together with their associated singular vectors. The existing all-qubit QSVT algorithm demands many ancillary qubits, which remains a major obstacle to realization on near-term intermediate-scale quantum computers. In this paper, we propose a hybrid QSVT (HQSVT) algorithm utilizing both discrete variables (DVs) and continuous variables (CVs). In our algorithm, raw data vectors are encoded into a qubit system and the subsequent data processing is fulfilled by hybrid quantum operations. Our algorithm requires O[log(MN)] qubits with O(1) qumodes and performs O(1) operations in total, which significantly reduces both space and runtime consumption.
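The classical operation that the QSVT algorithm quantizes can be sketched in a few lines: compute an SVD and keep only the singular triples whose singular values exceed the threshold. This NumPy sketch shows the linear-algebra target only, not the quantum circuit; the matrix and threshold are illustrative.

```python
import numpy as np

def svt(A, tau):
    # Classical target of the QSVT algorithm: return the singular values
    # of A exceeding tau together with their singular vectors.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tau
    return U[:, keep], s[keep], Vt[keep, :]

# a sparse, low-rank toy matrix whose spectrum we know exactly
A = np.diag([5.0, 2.0, 0.3, 0.1])
U, s, Vt = svt(A, tau=1.0)   # keeps the two singular values above 1
```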
To segment defects from the quad flat non-lead (QFN) package surface, a multilevel Otsu thresholding method based on the firefly algorithm with opposition-learning is proposed. First, the Otsu thresholding algorithm is expanded to a multilevel Otsu thresholding algorithm. Secondly, a firefly algorithm with opposition-learning (OFA) is proposed. In the OFA, opposite fireflies are generated to increase the diversity of the fireflies and improve the global search ability. Thirdly, the OFA is applied to searching multilevel thresholds for image segmentation. Finally, the proposed method is implemented to segment the QFN images with defects, and the results are compared with three methods, i.e., the exhaustive search method, the multilevel Otsu thresholding method based on particle swarm optimization, and the multilevel Otsu thresholding method based on the firefly algorithm. Experimental results show that the proposed method can segment QFN surface defect images more efficiently and at a greater speed than the other three methods.
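For reference, the single-level Otsu criterion that the multilevel method generalizes picks the cut that maximizes the between-class variance of the gray-level histogram. A minimal NumPy sketch (the image and its two modes are made up):

```python
import numpy as np

def otsu_threshold(img):
    # Single-level Otsu: pick the cut that maximizes between-class variance.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability per cut
    mu = np.cumsum(p * np.arange(256))    # cumulative mean per cut
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0  # guard empty classes
    return int(np.argmax(sigma_b))

# bimodal toy image: dark background with one bright square "defect"
rng = np.random.default_rng(1)
img = np.clip(rng.normal(40, 10, (64, 64)), 0, 255).astype(np.uint8)
img[16:32, 16:32] = np.clip(rng.normal(200, 10, (16, 16)), 0, 255).astype(np.uint8)
t = otsu_threshold(img)
```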
Matrix completion is an extension of compressed sensing. In compressed sensing, we solve underdetermined equations using a sparsity prior on the unknown signals. In matrix completion, we solve underdetermined equations based on a sparsity prior on the set of singular values of the unknown matrix, which is also called the low-rank prior. This paper first introduces the basic concept of matrix completion, analyses which matrices are suitable for matrix completion, and shows that such a matrix should satisfy two conditions: low rank and the incoherence property. The paper then reviews three reconstruction algorithms commonly used in matrix completion, the singular value thresholding algorithm, singular value projection, and atomic decomposition for minimum rank approximation, and points out their common shortcoming: they need to know the rank of the original matrix. The Projected Gradient Descent based on Soft Thresholding (STPGD) algorithm proposed in this paper predicts the rank of the unknown matrix using soft thresholding and iterates based on projected gradient descent, so it can estimate the rank of the unknown matrix accurately with low computational complexity; this is verified by numerical experiments. We also analyze the convergence and computational complexity of the STPGD algorithm, show that the algorithm is guaranteed to converge, and analyse the number of iterations needed to reach a given reconstruction error. Comparing the computational complexity of the STPGD algorithm with that of other algorithms, we conclude that the STPGD algorithm not only reduces the computational complexity but also improves the precision of the reconstructed solution.
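The singular value soft-thresholding operator at the heart of SVT-style methods shrinks every singular value by a constant and floors at zero; it is the proximal operator of the nuclear norm. A sketch (matrix and shrinkage amount are illustrative; this is the operator only, not the paper's full STPGD loop):

```python
import numpy as np

def svt_operator(Y, tau):
    # Proximal operator of the nuclear norm: shrink each singular value
    # by tau and floor at zero (the core step of SVT-style iterations).
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)
L = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 8))  # exact rank 3
X = svt_operator(L, tau=0.5)
sL = np.linalg.svd(L, compute_uv=False)
sX = np.linalg.svd(X, compute_uv=False)
```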
An improved artificial immune algorithm with a dynamic threshold is presented. The calculation of the affinity function in the real-valued coding artificial immune algorithm is modified by considering the antibody's fitness and setting a dynamic threshold value. Numerical experiments show that, compared with the genetic algorithm and the original real-valued coding artificial immune algorithm, the improved algorithm converges faster and performs better at preventing premature convergence.
The following mini-review attempts to guide researchers in the quantification of fluorescently-labelled proteins within thick cultured sections, or chromogenically-stained proteins within thin sections, of brain tissue. It follows from our examination of the utility of Fiji ImageJ thresholding and binarization algorithms. Describing how we identified the maximum intensity projection as the best of six methods tested for two-dimensional (2D) rendering of three-dimensional (3D) images derived from a series of z-stacked micrographs, the review summarises our comparison of 16 global and 9 local algorithms for their ability to accurately quantify the expression of astrocytic glial fibrillary acidic protein (GFAP), microglial ionized calcium binding adapter molecule 1 (IBA1) and oligodendrocyte lineage Olig2 within fixed cultured rat hippocampal brain slices. The application of these algorithms to chromogenically-stained GFAP and IBA1 within thin tissue sections is also described. Fiji's BioVoxxel plugin allowed categorisation of algorithms according to their sensitivity, specificity, accuracy, and relative quality. The Percentile algorithm was deemed best for quantifying levels of GFAP, the Li algorithm was best when quantifying IBA1 expression, while the Otsu algorithm was optimal for Olig2 staining, albeit with over-quantification of oligodendrocyte number when compared to a stereological approach. GFAP and IBA1 expression in 3,3′-diaminobenzidine (DAB)/haematoxylin-stained cerebellar tissue was best quantified with the Default, Isodata and Moments algorithms. The workflow presented in Figure 1 could help to improve the quality of research outcomes based on the quantification of protein within brain tissue.
Aim To fuse the fluorescence image and transmission image of a cell into a single image containing more information than either individual image. Methods Image fusion technology was applied to biological cell image processing. It can register the images and improve their confidence and spatial resolution. Using two algorithms, a double-threshold algorithm and a denoising algorithm based on the wavelet transform, the fluorescence image and transmission image of a cell were merged into a composite image. Results and Conclusion The position of fluorescence and the structure of the cell can be displayed in the composite image. The signal-to-noise ratio of the resultant image is improved to a large extent. The algorithms are not only useful for investigating fluorescence and transmission images, but are also suitable for observing two or more fluorescent label probes in a single cell.
Grain production prediction is one of the most important links in precision agriculture. In the process of grain production prediction, mechanical noise caused by differences in field topography and by mechanical vibration is mixed into the original signal, which inevitably affects the prediction accuracy. Therefore, in order to reduce the influence of vibration noise on the prediction accuracy, an adaptive Ensemble Empirical Mode Decomposition (EEMD) threshold filtering algorithm was applied to the original signal in this paper: the output signal was decomposed into a finite number of Intrinsic Mode Functions (IMFs), from high frequency to low frequency, using the ensemble version of Empirical Mode Decomposition (EMD), which effectively restrains the mode mixing phenomenon; the demarcation point between the high- and low-frequency IMF components was then determined by the Continuous Mean Square Error (CMSE) criterion, the high-frequency IMF components were denoised by a wavelet threshold algorithm, and finally the signal was reconstructed. The algorithm is an improvement on the commonly used wavelet threshold method. When the two algorithms were used to denoise the original production signal, the adaptive EEMD threshold filtering algorithm showed significant advantages in three denoising performance indices: signal-to-noise ratio, root mean square error, and smoothness. Five field verification tests showed that the average error of the field experiments was 1.994% and the maximum relative error was less than 3%. According to the test results, the relative error of the predicted yield per hectare was 2.97% with respect to the actual yield. The test results showed that the algorithm can effectively resist noise and improve the accuracy of prediction.
The China Infectious Disease Automated-alert and Response System (CIDARS) was successfully implemented and became operational nationwide in 2008. The CIDARS plays an important role in, and has been integrated into, the routine outbreak monitoring efforts of the Center for Disease Control (CDC) at all levels in China. In the CIDARS, thresholds were determined in the early stage using the "Mean+2SD" method, which has limitations. This study compared the performance of optimized thresholds defined using the "Mean+2SD" method with the performance of 5 novel algorithms, in order to select the optimal "Outbreak Gold Standard" (OGS) and corresponding thresholds for outbreak detection. Data for infectious diseases were organized by calendar week and year. The "Mean+2SD", C1, C2, moving average (MA), seasonal model (SM), and cumulative sum (CUSUM) algorithms were applied. Outbreak signals for the predicted value (Px) were calculated using a percentile-based moving window. When the outbreak signals generated by an algorithm were in line with a Px-generated outbreak signal for each week, this Px was defined as the optimized threshold for that algorithm. In this study, six infectious diseases were selected and classified into TYPE A (chickenpox and mumps), TYPE B (influenza and rubella) and TYPE C [hand foot and mouth disease (HFMD) and scarlet fever]. Optimized thresholds for chickenpox (P_(55)), mumps (P_(50)), influenza (P_(40), P_(55), and P_(75)), rubella (P_(45) and P_(75)), HFMD (P_(65) and P_(70)), and scarlet fever (P_(75) and P_(80)) were identified. The C1, C2, CUSUM, SM, and MA algorithms were appropriate for TYPE A. All 6 algorithms were appropriate for TYPE B. The C1 and CUSUM algorithms were appropriate for TYPE C. It is critical to incorporate more flexible algorithms as OGS into the CIDARS and to identify the proper OGS and corresponding recommended optimized threshold for different infectious disease types.
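The early-stage "Mean+2SD" rule can be sketched in a few lines: a week's count triggers an alert when it exceeds the mean plus two standard deviations of the corresponding historical weeks. This is a simplified sketch; the counts below are made up.

```python
import statistics

def mean_plus_2sd_threshold(history):
    # Early-stage CIDARS-style rule (simplified): the alert threshold for
    # the current week is mean + 2*SD of the corresponding historical weeks.
    return statistics.mean(history) + 2.0 * statistics.stdev(history)

history = [12, 15, 11, 14, 13]   # same calendar week over five prior years
threshold = mean_plus_2sd_threshold(history)
signal = 22 > threshold          # a count of 22 this week raises a signal
```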
Electric signals are acquired and analyzed in order to monitor the underwater arc welding process. Voltage break points and magnitudes are extracted by detecting arc voltage singularities through the modulus maximum wavelet (MMW) method. A novel threshold algorithm, which is a compromise between the hard-threshold wavelet (HTW) and soft-threshold wavelet (STW) methods, is investigated to eliminate welding current noise. Finally, its advantages over traditional wavelet methods are verified by both simulation and experimental results.
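The hard and soft thresholding rules being compromised between are one-liners; the blend shown here (shrinking survivors by a fraction of the threshold) is purely illustrative and not necessarily the paper's exact rule.

```python
import numpy as np

def hard_threshold(w, t):
    # HTW: zero coefficients with magnitude <= t, keep the rest unchanged.
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    # STW: zero small coefficients and shrink survivors toward zero by t.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def compromise_threshold(w, t, alpha=0.5):
    # Illustrative hard/soft blend (not necessarily the paper's rule):
    # survivors are shrunk by alpha*t instead of the full t.
    return np.sign(w) * np.where(np.abs(w) > t, np.abs(w) - alpha * t, 0.0)

w = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
h = hard_threshold(w, 0.8)
s = soft_threshold(w, 0.8)
c = compromise_threshold(w, 0.8)
```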
The semidefinite matrix completion (SMC) problem is to recover a low-rank positive semidefinite matrix from a small subset of its entries. It is well known to be NP-hard in general. We first show that, in some cases, the SMC problem and the S_(1/2) relaxation model share a unique solution. We then prove that the global optimal solutions of the S_(1/2) regularization model are fixed points of a symmetric matrix half-thresholding operator. We give an iterative scheme for solving the S_(1/2) regularization model and state a convergence analysis of the iterative sequence. Through an optimal regularization parameter setting together with truncation techniques, we develop an HTE algorithm for the S_(1/2) regularization model, and numerical experiments confirm the efficiency and robustness of the proposed algorithm.
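The fixed-point structure described above, alternately refitting the observed entries and thresholding the eigenvalues of a symmetric iterate, can be sketched as follows. Soft thresholding with a clamp at zero stands in for the paper's half-thresholding operator, whose closed form is not reproduced here; problem sizes and parameters are illustrative.

```python
import numpy as np

def eigen_threshold(M, tau):
    # Apply a scalar thresholding rule to the eigenvalues of a symmetric
    # matrix. Soft thresholding clamped at zero stands in for the paper's
    # half-thresholding operator; the clamp also enforces PSD-ness.
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(np.maximum(vals - tau, 0.0)) @ vecs.T

def complete_psd(M_obs, mask, tau, iters):
    # Fixed-point iteration: refit the observed entries, then threshold
    # the eigenvalues of the symmetric iterate.
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        X = eigen_threshold(np.where(mask, M_obs, X), tau)
    return X

rng = np.random.default_rng(3)
B = rng.standard_normal((6, 2))
M = B @ B.T                       # rank-2 PSD ground truth
mask = rng.random((6, 6)) < 0.8
mask = mask & mask.T              # symmetric observation pattern
np.fill_diagonal(mask, True)
X = complete_psd(M, mask, tau=0.05, iters=300)
rel_err = np.linalg.norm((X - M)[mask]) / np.linalg.norm(M[mask])
```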
This paper studies the thermal physical properties of foundation materials in the molten salt tank of a thermal energy storage system after molten salt leakage, using transient plane source (TPS) experiments and X-ray computed microtomography simulation methods. The microstructure, thermal properties, and pressure resistance for different particle diameters were addressed. The heat conductivities measured by the TPS experiment for three cases are 0.49 W/(m·K), 0.48 W/(m·K), and 0.51 W/(m·K), and the porosities are 30.1%, 30.7%, and 31.2%, respectively. The simulated heat conductivities of the three cases are 0.471 W/(m·K), 0.482 W/(m·K), and 0.513 W/(m·K). The relative difference between the simulation results and the TPS measurements is as low as 1.2%, verifying the reliability of the experimental and simulation results to a certain degree. Compared with the heat conductivity of 0.097-0.129 W/(m·K) and porosity of 71.6%-78.9% without leaked salt, the porosity is reduced by more than 50% while the heat conductivity increases by 4 to 5 times after molten salt leakage. This significant increase in heat conductivity has a great impact on the secure operation, structural design, and modeling of the tank foundation for solar power plants.
We investigate the problem of maximizing the sum of submodular and supermodular functions under a fairness constraint. This sum function is non-submodular in general. For an offline model, we introduce two approximation algorithms: a greedy algorithm and a threshold greedy algorithm. For a streaming model, we propose a one-pass streaming algorithm. We also analyze the approximation ratios of these algorithms, which all depend on the total curvature of the supermodular function. The total curvature is computable in polynomial time and widely used in the literature.
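A decreasing-threshold greedy mechanism of the kind referred to above can be sketched for the simplest case, monotone set-function maximization under a cardinality constraint. The paper's fairness constraint and submodular-plus-supermodular objective are richer than this; ε, the coverage objective, and the data are illustrative.

```python
def threshold_greedy(ground, f, k, eps=0.1):
    # Decreasing-threshold greedy for monotone set-function maximization
    # under |S| <= k (sketch of the mechanism only; the paper's fairness
    # constraint and non-submodular objective are not modeled here).
    S = set()
    d = max(f({e}) for e in ground)        # largest singleton value
    tau = d
    while tau > (eps / len(ground)) * d and len(S) < k:
        for e in ground:
            if e in S or len(S) >= k:
                continue
            if f(S | {e}) - f(S) >= tau:   # marginal gain clears the bar
                S.add(e)
        tau *= 1.0 - eps                   # lower the bar and sweep again
    return S

# illustrative coverage objective (submodular): elements covered by chosen sets
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}, 3: {1, 2}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
S = threshold_greedy(list(sets), f, k=2)
```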
In view of the fact that the current ground wheel velocimetry of the peanut precision fertilizer control system cannot handle ground wheel slippage, while signal interference and delay loss cannot be excluded from BeiDou positioning velocimetry, a peanut precision fertilizer control system was designed based on a threshold speed algorithm. The system used an STM32F103ZET6 microcontroller as the main controller, with a touch screen for setting operating parameters such as operating width, fertilizer type, and fertilizer application amount. The threshold speed algorithm, combining BeiDou and ground wheel velocimetry, was adopted to obtain the forward speed of the tractor and adjust the speed of the DC drive motor of the fertilizer applicator in real time, so as to achieve precise fertilizer application. First, through tests of the threshold speed algorithm, the optimal length N of the ground wheel speed measurement queue was determined to be 3, and the threshold of the speed variation coefficient was set to 4.6%. Then, the response performance of the threshold speed algorithm was verified by comparative tests with different fertilization amounts (40 kg/hm^(2), 50 kg/hm^(2), 60 kg/hm^(2), 70 kg/hm^(2)) under two speed acquisition methods, ground wheel velocimetry alone and the threshold speed algorithm (a combination of BeiDou single-point speed and ground wheel velocimetry), at different operating speeds (3 km/h, 4 km/h, 5 km/h). The response performance tests showed that the average velocimetry delay distance of the BeiDou single-point positioning velocimetry method was 0.58 m, while that of the threshold speed algorithm was 0.27 m, a decrease of 0.31 m, indicating that the threshold speed algorithm is more accurate. The field comparison test of fertilizer application performance showed an accuracy rate of fertilizer discharge above 96.08% with the threshold speed algorithm, which effectively avoided the inaccurate fertilizer application caused by wheel slippage and raised the accuracy of fertilizer discharge by at least 1.2% compared with using ground wheel velocimetry alone. The results showed that the threshold speed algorithm can meet the requirements of precise fertilizer application.
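The gating idea of the threshold speed algorithm, trusting the ground wheel unless the coefficient of variation of its recent samples exceeds the 4.6% threshold (in which case the BeiDou speed is used), can be sketched as follows; the sample values are made up.

```python
import statistics

def fused_speed(wheel_queue, beidou_speed, cv_threshold=0.046):
    # Threshold speed rule (sketch): if the coefficient of variation of the
    # last N ground-wheel samples exceeds the threshold, the wheel is taken
    # to be slipping and the BeiDou single-point speed is used instead.
    mean = statistics.mean(wheel_queue)
    if mean <= 0:
        return beidou_speed
    cv = statistics.stdev(wheel_queue) / mean
    return beidou_speed if cv > cv_threshold else mean

steady = [4.00, 4.02, 3.98]    # km/h; CV well under the 4.6% threshold
slipping = [4.0, 3.1, 4.9]     # wheel slip inflates the variation
v1 = fused_speed(steady, beidou_speed=4.05)
v2 = fused_speed(slipping, beidou_speed=4.05)
```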
We investigate the efficiency of weak greedy algorithms for m-term expansional approximation with respect to quasi-greedy bases in general Banach spaces. We estimate the corresponding Lebesgue constants for the weak thresholding greedy algorithm (WTGA) and the weak Chebyshev thresholding greedy algorithm (WCGA). Then we discuss greedy approximation on some function classes. For some sparse classes induced by uniformly bounded quasi-greedy bases of L_p, 1 < p < 2, the WCGA is better than the TGA.
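The (non-weak) thresholding greedy algorithm that these Lebesgue constants benchmark simply retains the m largest-magnitude coefficients of the expansion. A minimal sketch on a finite coefficient sequence:

```python
def tga(coeffs, m):
    # Thresholding greedy algorithm: keep the m largest-magnitude
    # expansion coefficients, zero the rest (ties broken by index).
    order = sorted(range(len(coeffs)), key=lambda i: (-abs(coeffs[i]), i))
    keep = set(order[:m])
    return [c if i in keep else 0 for i, c in enumerate(coeffs)]

x = [0.1, -2.0, 0.5, 1.5, -0.2]
g2 = tga(x, 2)   # the 2-term greedy approximant keeps -2.0 and 1.5
```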
A uniform experimental design (UED) is a widely used, powerful, and efficient methodology for designing experiments with high-dimensional inputs, limited resources, and unknown underlying models. A UED enjoys two significant advantages: (i) it is a robust design, since it does not require a model to be specified before experimenters conduct their experiments; and (ii) it provides uniformly scattered design points in the experimental domain, thus giving a good representation of this domain with fewer experimental trials (runs). Many real-life experiments involve hundreds or thousands of active factors, and thus large UEDs are needed. Constructing large UEDs using existing techniques is an NP-hard problem and an extremely time-consuming heuristic search process, and a satisfactory result is not guaranteed. This paper presents a new, effective, and easy technique, the adjusted Gray map technique (AGMT), for constructing (nearly) UEDs with large numbers of four-level factors and runs, by converting designs with s two-level factors and n runs to (nearly) UEDs with 2^(t−1)s four-level factors and 2^(t)n runs for any t ≥ 0 using two simple transformation functions. Theoretical justifications for the uniformity of the resulting four-level designs are given, providing some necessary and/or sufficient conditions for obtaining (nearly) uniform four-level designs. The results show that the AGMT is much easier and better than the existing widely used techniques and can be effectively used to generate new recommended large (nearly) UEDs with four-level factors.
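The unadjusted Gray map that the AGMT builds on sends a pair of two-level entries to one four-level entry. A sketch (the AGMT's adjustment and its transformation functions are not reproduced here):

```python
def gray_map(a, b):
    # Standard Gray map from a pair of two-level entries (0/1) to one
    # four-level entry; the AGMT adjusts this map, which is not shown here.
    return {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}[(a, b)]

def pair_columns(design, i, j):
    # Combine two-level columns i and j of a design (a list of rows)
    # into a single four-level column.
    return [gray_map(row[i], row[j]) for row in design]

# a 4-run full factorial in two two-level factors
D2 = [(0, 0), (0, 1), (1, 0), (1, 1)]
col4 = pair_columns(D2, 0, 1)   # one balanced four-level column
```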
In this paper, we derive an iteration formula for the computation of the central composite discrepancy. By using the iteration formula, the computational complexity of uniform design construction in a flexible region can be greatly reduced. We also refine the threshold accepting algorithm to accelerate its convergence rate. Examples show that the refined algorithm converges to lower-discrepancy designs more stably.
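Threshold accepting itself, the algorithm being refined, accepts any move that worsens the objective by less than a deterministic, gradually lowered threshold. A generic sketch on a toy objective (the schedule and problem are illustrative, not the paper's design-construction setup):

```python
import random

def threshold_accepting(x0, neighbor, cost, thresholds, moves_per_level=50, seed=0):
    # Threshold accepting: accept any move that worsens the cost by less
    # than the current threshold; lower the threshold on a fixed schedule.
    rng = random.Random(seed)
    x = best = x0
    for t in thresholds:
        for _ in range(moves_per_level):
            y = neighbor(x, rng)
            if cost(y) - cost(x) < t:      # deterministic acceptance rule
                x = y
                if cost(x) < cost(best):
                    best = x
    return best

# toy problem: minimize (x - 3)^2 over the integers with +/-1 moves
best = threshold_accepting(
    x0=20,
    neighbor=lambda x, rng: x + rng.choice([-1, 1]),
    cost=lambda x: (x - 3) ** 2,
    thresholds=[8.0, 4.0, 2.0, 1.0, 0.5],
)
```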
Urban greenery has positive impacts on the well-being of residents and provides vital ecosystem services. A quantitative evaluation of full-view green coverage at the human scale can guide green space planning and management. We developed a still camera to collect hemisphere-view panoramas (HVPs) to obtain in situ heterogeneous scenes, and established a panoramic green cover index (PGCI) model to measure human-scale green coverage. A case study was conducted in Xicheng District, Beijing, to analyze the quantitative relationships of PGCI with the normalized difference vegetation index (NDVI) and land surface temperature (LST) in different land use scenarios. The results show that the HVP is a useful quantification tool: (1) the method adaptively distinguishes the green cover characteristics of the four functional areas, and the PGCI values are ranked as follows: recreational area (29.6) > residential area (19.0) > traffic area (15.9) > commercial area (12.5); (2) PGCI strongly explains NDVI and LST; and (3) for each unit (1%) increase in PGCI, NDVI tends to increase by 0.007 and LST tends to decrease by 0.21 degrees Celsius. This research provides government managers and urban planners with tools to evaluate green coverage in complex urban environments and assistance in optimizing human-scale greenery and microclimate.
To improve the analysis methods for measuring sediment particle sizes with a wide distribution and irregular shapes, a sediment particle image measurement and analysis system is proposed, together with an algorithm for extracting the optimal threshold based on the gray histogram peak values. Recording the pixels of the sediment particles by labeling them, the algorithm can effectively separate the sediment particle images from the background, using equivalent pixel circles with the same diameters to represent the sediment particles. Compared with a laser analyzer for the case of blue plastic sands, the measurement results of the system are reasonably similar. The errors are mainly due to the small size of the particles and the limitations of the apparatus. The measurement accuracy can be improved by increasing the resolution of the Charge-Coupled Device (CCD) camera. This analysis method of sediment particle images can provide technical support for the rapid measurement of sediment particle size and its distribution.
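A histogram-peak-based threshold extraction of the kind described can be sketched as: find the two dominant, well-separated peaks of the smoothed gray histogram and threshold at the valley between them. The exact rule of the proposed algorithm may differ; the image and parameters are made up.

```python
import numpy as np

def valley_threshold(img):
    # Smooth the gray histogram, find its two dominant well-separated
    # peaks, and threshold at the deepest valley between them.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    hist = np.convolve(hist, np.ones(9) / 9, mode="same")
    peaks = [i for i in range(1, 255)
             if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]
    peaks.sort(key=lambda i: -hist[i])
    p1 = peaks[0]
    p2 = next(i for i in peaks[1:] if abs(i - p1) > 20)
    lo, hi = min(p1, p2), max(p1, p2)
    return lo + int(np.argmin(hist[lo:hi + 1]))

# synthetic bimodal image: dark left half, bright right half
rng = np.random.default_rng(4)
img = np.clip(rng.normal(60, 8, (64, 64)), 0, 255).astype(np.uint8)
img[:, 32:] = np.clip(rng.normal(180, 8, (64, 32)), 0, 255).astype(np.uint8)
t = valley_threshold(img)
```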
Funding: Project supported by the Key Research and Development Program of Guangdong Province, China (Grant No. 2018B030326001) and the National Natural Science Foundation of China (Grant Nos. 61521001, 12074179, and 11890704).
Funding: The National Natural Science Foundation of China (No. 50805023), the Science and Technology Support Program of Jiangsu Province (No. BE2008081), the Transformation Program of Science and Technology Achievements of Jiangsu Province (No. BA2010093), and the Program for Special Talent in Six Fields of Jiangsu Province (No. 2008144).
Funding: Supported by the National Natural Science Foundation of China (No. 61271240), the Jiangsu Province Natural Science Fund Project (No. BK2010077), and a subject of the Twelfth Five-Year Plan at Jiangsu Second Normal University (No. 417103).
Funding: Supported by a grant from the Thomas Crawford Hayes Research Fund, an NUI Galway College of Science scholarship to SH, and a grant from the NUI Galway Foundation Office to JM.
Funding: Supported by the National Science and Technology Support Program (2014BAD06B04-1-09), the China Postdoctoral Fund (2016M601406), and the Heilongjiang Postdoctoral Fund (LBHZ15024).
Abstract: Grain production prediction is one of the most important links in precision agriculture. In the process of grain production prediction, mechanical noise caused by differences in field topography and by mechanical vibration is mixed into the original signal, which inevitably affects the prediction accuracy. Therefore, to reduce the influence of vibration noise on the prediction accuracy, an adaptive Ensemble Empirical Mode Decomposition (EEMD) threshold filtering algorithm was applied to the original signal in this paper: the output signal was decomposed into a finite number of Intrinsic Mode Functions (IMFs), from high frequency to low frequency, by the EEMD algorithm, which effectively restrains the mode mixing phenomenon; the demarcation point between the high- and low-frequency IMF components was then determined by the Continuous Mean Square Error (CMSE) criterion, the high-frequency IMF components were denoised by a wavelet threshold algorithm, and finally the signal was reconstructed. The algorithm is an improvement on the commonly used wavelet threshold method. When the two algorithms were used to denoise the original production signal, the adaptive EEMD threshold filtering algorithm had significant advantages in three denoising performance indexes: signal-to-noise ratio, root mean square error, and smoothness. Five field verification tests showed that the average error was 1.994% and the maximum relative error was less than 3%. According to the test results, the relative error of the predicted yield per hectare, relative to the actual yield, was 2.97%. The test results showed that the algorithm can effectively resist noise and improve the prediction accuracy.
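The wavelet-threshold step applied to the high-frequency components can be sketched with a one-level Haar transform and the universal soft threshold. This is a minimal stand-in for the paper's adaptive EEMD pipeline (the noise level sigma is assumed known here, and the signal length is assumed even):

```python
import numpy as np

def haar_denoise(x, sigma):
    # One-level Haar transform: pairwise averages (low-pass) and
    # differences (high-pass), both scaled to preserve energy.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    # Universal threshold, then soft-shrink the detail coefficients.
    t = sigma * np.sqrt(2.0 * np.log(len(x)))
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
    # Inverse transform.
    y = np.empty_like(x, dtype=float)
    y[0::2] = (a + d) / np.sqrt(2.0)
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y
```

On a piecewise-constant yield-like signal this removes most of the noise carried in the detail band while leaving the level changes intact.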
Funding: Supported by the Key Laboratory of Public Health Safety of the Ministry of Education, Fudan University, China (No. GW2015-1).
Abstract: The China Infectious Disease Automated-alert and Response System (CIDARS) was successfully implemented and became operational nationwide in 2008. The CIDARS plays an important role in, and has been integrated into, the routine outbreak monitoring efforts of the Center for Disease Control (CDC) at all levels in China. In the CIDARS, thresholds were determined using the "Mean+2SD" method in the early stage, which has limitations. This study compared the performance of optimized thresholds defined using the "Mean+2SD" method with the performance of 5 novel algorithms, to select the optimal "Outbreak Gold Standard" (OGS) and corresponding thresholds for outbreak detection. Data for infectious diseases were organized by calendar week and year. The "Mean+2SD", C1, C2, moving average (MA), seasonal model (SM), and cumulative sum (CUSUM) algorithms were applied. Outbreak signals for the predicted value (Px) were calculated using a percentile-based moving window. When the outbreak signals generated by an algorithm were in line with a Px-generated outbreak signal for each week, this Px was defined as the optimized threshold for that algorithm. In this study, six infectious diseases were selected and classified into TYPE A (chickenpox and mumps), TYPE B (influenza and rubella) and TYPE C [hand, foot and mouth disease (HFMD) and scarlet fever]. Optimized thresholds for chickenpox (P55), mumps (P50), influenza (P40, P55, and P75), rubella (P45 and P75), HFMD (P65 and P70), and scarlet fever (P75 and P80) were identified. The C1, C2, CUSUM, SM, and MA algorithms were appropriate for TYPE A. All 6 algorithms were appropriate for TYPE B. The C1 and CUSUM algorithms were appropriate for TYPE C. It is critical to incorporate more flexible algorithms as the OGS into the CIDARS and to identify the proper OGS and corresponding recommended optimized threshold for different infectious disease types.
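Two of the compared detectors are compact enough to state directly. Below is a sketch of the baseline Mean+2SD threshold and a one-sided CUSUM detector on weekly counts (the parameter values k and h are illustrative, not CIDARS's operational settings):

```python
def mean_2sd_threshold(history):
    # Baseline rule used in the early CIDARS: mean + 2 standard deviations
    # of the historical weekly counts.
    n = len(history)
    mean = sum(history) / n
    sd = (sum((x - mean) ** 2 for x in history) / n) ** 0.5
    return mean + 2 * sd

def cusum_alarms(counts, baseline_mean, k, h):
    # One-sided CUSUM: accumulate excesses over (baseline_mean + k) and
    # alarm when the running sum crosses the decision interval h, then reset.
    s, alarms = 0.0, []
    for week, x in enumerate(counts):
        s = max(0.0, s + x - baseline_mean - k)
        if s > h:
            alarms.append(week)
            s = 0.0
    return alarms
```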
Abstract: Electric signals are acquired and analyzed in order to monitor the underwater arc welding process. The voltage break point and magnitude are extracted by detecting arc voltage singularities through the modulus maximum wavelet (MMW) method. A novel threshold algorithm, which compromises between the hard-threshold wavelet (HTW) and soft-threshold wavelet (STW) methods, is investigated to eliminate welding current noise. Finally, its advantages over traditional wavelet methods are verified by both simulation and experimental results.
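The hard rule keeps or kills a coefficient outright, while the soft rule also shrinks the survivors; a compromise kills small coefficients but shrinks large ones less than the soft rule. The sketch below uses the non-negative garrote as one well-known compromise (the paper's own rule may differ):

```python
import numpy as np

def hard(w, t):
    # Hard thresholding: keep coefficients above t unchanged.
    return np.where(np.abs(w) > t, w, 0.0)

def soft(w, t):
    # Soft thresholding: shrink every survivor toward zero by t.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def garrote(w, t):
    # Non-negative garrote: zero below t like the hard rule, but shrink
    # large coefficients smoothly (by t**2/w) like the soft rule.
    return np.where(np.abs(w) > t, w - t ** 2 / np.where(w == 0, 1.0, w), 0.0)
```

For any coefficient above the threshold, the garrote output lies strictly between the soft and hard outputs.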
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11431002, 71271021 and 11301022) and the Fundamental Research Funds for the Central Universities of China (Grant No. 2012YJS118).
Abstract: The semidefinite matrix completion (SMC) problem is to recover a low-rank positive semidefinite matrix from a small subset of its entries. It is well studied but NP-hard in general. We first show that, in some cases, the SMC problem and the S_1/2 relaxation model share a unique solution. We then prove that the global optimal solutions of the S_1/2 regularization model are fixed points of a symmetric matrix half-thresholding operator. We give an iterative scheme for solving the S_1/2 regularization model and state a convergence analysis of the iterative sequence. Through an optimal regularization parameter setting together with truncation techniques, we develop an HTE algorithm for the S_1/2 regularization model, and numerical experiments confirm the efficiency and robustness of the proposed algorithm.
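For intuition about the fixed-point scheme, here is its convex cousin: soft singular value thresholding inside a proximal-gradient completion loop. This sketches the general shrink-after-gradient-step structure only; the paper's half-thresholding operator for the non-convex S_1/2 penalty applies a different shrinkage formula to the singular values:

```python
import numpy as np

def svt(X, tau):
    # Soft singular value thresholding: shrink each singular value by tau
    # (the proximal operator of the nuclear norm).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M, mask, tau=1.0, step=1.0, iters=200):
    # Fixed-point iteration X <- T_tau(X - step * mask*(X - M)):
    # a gradient step on the observed entries, then the shrinkage.
    X = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        X = svt(X - step * mask * (X - M), tau)
    return X
```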
Funding: Supported by the National Natural Science Foundation of China (52036008).
Abstract: This paper studies the thermophysical properties of the foundation materials in the molten salt tank of a thermal energy storage system after molten salt leakage, using transient plane source (TPS) experiments and X-ray computed microtomography simulation. The microstructure, thermal properties and pressure resistance for different particle diameters are addressed. The heat conductivities measured by the TPS experiment for the three cases are 0.49 W/(m·K), 0.48 W/(m·K), and 0.51 W/(m·K), and the porosities are 30.1%, 30.7%, and 31.2%, respectively. The simulated heat conductivities for the three cases are 0.471 W/(m·K), 0.482 W/(m·K), and 0.513 W/(m·K). The relative difference between the simulation results and the TPS measurements is as low as 1.2%, verifying the reliability of the experimental and simulation results to a certain degree. Compared with the heat conductivity of 0.097-0.129 W/(m·K) and porosity of 71.6%-78.9% without leaked salt, the porosity is reduced by more than 50% while the heat conductivity increases by 4 to 5 times after molten salt leakage. This significant increase in heat conductivity has a great impact on the safe operation, structural design, and modeling of the tank foundation for solar power plants.
Funding: The first author was supported by the National Natural Science Foundation of China (Nos. 12001025 and 12131003); the second author was supported by the Spark Fund of Beijing University of Technology (No. XH-2021-06-03); the third author was supported by the Natural Sciences and Engineering Research Council of Canada (No. 283106) and the Natural Science Foundation of China (Nos. 11771386 and 11728104); the fourth author was supported by the National Natural Science Foundation of China (No. 12001335).
Abstract: We investigate the problem of maximizing the sum of a submodular and a supermodular function under a fairness constraint. This sum is non-submodular in general. For the offline model, we introduce two approximation algorithms: a greedy algorithm and a threshold greedy algorithm. For the streaming model, we propose a one-pass streaming algorithm. We also analyze the approximation ratios of these algorithms, which all depend on the total curvature of the supermodular function. The total curvature is computable in polynomial time and is widely utilized in the literature.
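The threshold greedy idea, sweeping a geometrically decreasing acceptance bar and taking any element whose marginal gain clears it, can be sketched on a plain coverage objective under a cardinality constraint. This is only the standard building block; the paper's setting (a fairness constraint and a submodular-plus-supermodular objective) is more general:

```python
def threshold_greedy(sets, k, eps=0.1):
    # Threshold greedy for monotone submodular maximization (set coverage
    # here) under a cardinality constraint: sweep a geometrically decreasing
    # threshold tau and add any element whose marginal gain clears it.
    top = max(len(s) for s in sets.values())     # best singleton value
    chosen, covered = [], set()
    tau = float(top)
    while tau > eps * top / k and len(chosen) < k:
        for name, s in sets.items():
            if name not in chosen and len(chosen) < k:
                gain = len(s - covered)          # marginal coverage gain
                if gain >= tau:
                    chosen.append(name)
                    covered |= s
        tau *= 1 - eps
    return chosen, covered
```

Compared with the classical greedy, each sweep avoids re-ranking all elements, which is what makes the thresholding variant attractive for large ground sets and streaming-style analyses.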
Funding: Financially supported by the Key Research and Development Program of Shandong Province (Grant No. 2018YF008-02) and the Introduction and Education Program for Young Talents in Shandong Colleges and Universities.
Abstract: Given that the ground-wheel velocimetry of current peanut precision fertilization control systems cannot cope with ground wheel slippage, and that signal interference and delay loss cannot be excluded from BeiDou positioning velocimetry, a peanut precision fertilization control system was designed based on a threshold speed algorithm. The system uses an STM32F103ZET6 microcontroller as the main controller and a touch screen for setting operating parameters such as operating width, fertilizer type, and fertilizer application rate. The threshold speed algorithm, which combines BeiDou and ground-wheel velocimetry, obtains the forward speed of the tractor and adjusts the speed of the DC drive motor of the fertilizer applicator in real time to achieve precise fertilization. First, through threshold speed algorithm tests, the optimal length N of the ground-wheel speed measurement queue was determined to be 3, and the threshold of the speed variation coefficient was set to 4.6%. Then, the response performance of the threshold speed algorithm was verified by comparative tests with different fertilization rates (40 kg/hm^2, 50 kg/hm^2, 60 kg/hm^2, 70 kg/hm^2) under two speed acquisition methods, ground-wheel velocimetry alone and the threshold speed algorithm (a combination of BeiDou single-point speed and ground-wheel velocimetry), at different operating speeds (3 km/h, 4 km/h, 5 km/h). The response tests showed that the average velocimetry delay distance of the BeiDou single-point positioning method was 0.58 m, while that of the threshold speed algorithm was 0.27 m, a decrease of 0.31 m, indicating that the threshold speed algorithm is more accurate. Field comparison tests of fertilization performance showed a fertilizer discharge accuracy above 96.08% with the threshold speed algorithm, which effectively avoided the inaccurate fertilization caused by wheel slippage and raised the accuracy of fertilizer discharge by at least 1.2% compared with using ground-wheel velocimetry alone. The results show that the threshold speed algorithm can meet the requirements of precise fertilization.
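The queue-plus-coefficient-of-variation logic described above can be sketched as follows (illustrative Python, not the STM32 firmware; N=3 and the 4.6% threshold come from the abstract, the fallback rule is our reading of the method):

```python
class ThresholdSpeed:
    # Keep the last N ground-wheel speeds; when their coefficient of
    # variation exceeds the threshold (suggesting wheel slip), fall back
    # to the BeiDou single-point speed, otherwise trust the wheel.
    def __init__(self, n=3, cv_threshold=0.046):
        self.n, self.cv_threshold, self.queue = n, cv_threshold, []

    def update(self, wheel_speed, beidou_speed):
        self.queue.append(wheel_speed)
        if len(self.queue) > self.n:
            self.queue.pop(0)
        mean = sum(self.queue) / len(self.queue)
        var = sum((v - mean) ** 2 for v in self.queue) / len(self.queue)
        cv = (var ** 0.5) / mean if mean > 0 else float("inf")
        return beidou_speed if cv > self.cv_threshold else wheel_speed
```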
Abstract: We investigate the efficiency of weak greedy algorithms for m-term expansional approximation with respect to quasi-greedy bases in general Banach spaces. We estimate the corresponding Lebesgue constants for the weak thresholding greedy algorithm (WTGA) and the weak Chebyshev thresholding greedy algorithm (WCGA). We then discuss greedy approximation on some function classes. For some sparse classes induced by uniformly bounded quasi-greedy bases of L_p, 1 < p < 2, the WCGA is better than the TGA.
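The TGA keeps the m largest coefficients of an expansion; its weak variant may take any coefficient within a factor t of the largest remaining one (t = 1 recovers the TGA). A small finite-dimensional sketch (the weakness parameter and coefficient vector are illustrative):

```python
import numpy as np

def wtga(coeffs, m, t=0.5):
    # Weak thresholding greedy: repeatedly accept any coefficient whose
    # magnitude is at least t times the largest remaining magnitude,
    # until m terms are kept; return the resulting m-term approximant.
    remaining = list(range(len(coeffs)))
    picked = []
    while len(picked) < m and remaining:
        largest = max(abs(coeffs[i]) for i in remaining)
        for i in list(remaining):
            if len(picked) < m and abs(coeffs[i]) >= t * largest:
                picked.append(i)
                remaining.remove(i)
    approx = np.zeros_like(coeffs, dtype=float)
    approx[picked] = coeffs[picked]
    return approx
```

The Lebesgue constants studied in the paper bound how far such an m-term output can be from the best possible m-term approximation in the given basis.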
Funding: Supported by the UIC Research Grants (Nos. R201912 and R202010); the Curriculum Development and Teaching Enhancement (No. UICR0400046-21CTL); the Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, BNU-HKBU United International College (No. 2022B1212010006); and the Guangdong Higher Education Upgrading Plan (2021-2025) (No. UICR0400001-22).
Abstract: A uniform experimental design (UED) is a widely used, powerful and efficient methodology for designing experiments with high-dimensional inputs, limited resources and unknown underlying models. A UED enjoys two significant advantages: (i) it is a robust design, since it does not require a model to be specified before experimenters conduct their experiments; and (ii) it provides uniformly scattered design points in the experimental domain, thus giving a good representation of this domain with fewer experimental trials (runs). Many real-life experiments involve hundreds or thousands of active factors, and thus large UEDs are needed. Constructing large UEDs with the existing techniques is an NP-hard problem and an extremely time-consuming heuristic search, and a satisfactory result is not guaranteed. This paper presents a new, effective and easy technique, the adjusted Gray map technique (AGMT), for constructing (nearly) UEDs with large numbers of four-level factors and runs by converting designs with s two-level factors and n runs into (nearly) UEDs with 2^(t-1)s four-level factors and 2^t n runs for any t≥0, using two simple transformation functions. Theoretical justifications for the uniformity of the resulting four-level designs are given, providing some necessary and/or sufficient conditions for obtaining (nearly) uniform four-level designs. The results show that the AGMT is much easier and better than the existing widely used techniques and can effectively be used to generate new recommended large (nearly) UEDs with four-level factors.
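The classical Gray map, which turns pairs of two-level columns into one four-level column, is the natural building block for this kind of level conversion. A sketch of that base transformation (the paper's adjusted variant adds further transformation functions not shown here):

```python
import numpy as np

# Gray map: 00 -> 0, 01 -> 1, 11 -> 2, 10 -> 3. Adjacent images differ in
# one bit, which is what preserves distance structure under the map.
GRAY = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def gray_map_pairs(design2):
    # Combine consecutive pairs of two-level (0/1) columns of an n-run
    # design into four-level columns via the Gray map.
    n, s = design2.shape
    assert s % 2 == 0, "need an even number of two-level columns"
    out = np.empty((n, s // 2), dtype=int)
    for j in range(0, s, 2):
        out[:, j // 2] = [GRAY[(a, b)] for a, b in design2[:, j:j + 2]]
    return out
```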
Funding: Supported by the National Natural Science Foundation of China (Nos. 11571133 and 11101173).
Abstract: In this paper, we deduce an iteration formula for the computation of the central composite discrepancy. By using the iteration formula, the computational complexity of uniform design construction in flexible regions can be greatly reduced. We also refine the threshold accepting algorithm to accelerate its convergence rate. Examples show that the refined algorithm converges more stably to designs with lower discrepancy.
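Threshold accepting itself is short enough to state: it is a deterministic-acceptance cousin of simulated annealing that takes any move whose deterioration stays below the current threshold. A generic sketch on a toy one-dimensional objective (the threshold schedule, move counts, and objective are illustrative, not the paper's refinement or the discrepancy criterion):

```python
import random

def threshold_accepting(f, x0, neighbor, thresholds, seed=0):
    # Threshold accepting: accept a candidate whenever f(y) - f(x) is
    # below the current threshold; thresholds shrink toward zero so the
    # search gradually becomes pure descent.
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for t in thresholds:
        for _ in range(50):
            y = neighbor(x, rng)
            fy = f(y)
            if fy - fx < t:          # improvements and mild deteriorations
                x, fx = y, fy
                if fy < fbest:
                    best, fbest = y, fy
    return best, fbest
```

In the design-construction setting, x would be a candidate design, neighbor a point-swap move, and f the (centrally composite) discrepancy evaluated via the iteration formula.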
Funding: The National Key Research and Development Programme of China (2016YFC0503605).
Abstract: Urban greenery has positive impacts on the well-being of residents and provides vital ecosystem services. A quantitative evaluation of full-view green coverage at the human scale can guide green space planning and management. We developed a still camera to collect hemisphere-view panoramas (HVPs) capturing in situ heterogeneous scenes, and established a panoramic green cover index (PGCI) model to measure human-scale green coverage. A case study was conducted in Xicheng District, Beijing, to analyze the quantitative relationships of PGCI with the normalized difference vegetation index (NDVI) and land surface temperature (LST) in different land use scenarios. The results show that the HVP is a useful quantification tool: (1) the method adaptively distinguishes the green cover characteristics of the four functional areas, with PGCI values ranked as follows: recreational area (29.6) > residential area (19.0) > traffic area (15.9) > commercial area (12.5); (2) PGCI strongly explains NDVI and LST, and for each unit (1%) increase in PGCI, NDVI tends to increase by 0.007; and (3) LST tends to decrease by 0.21 degrees Celsius per unit increase in PGCI. This research provides government managers and urban planners with tools to evaluate green coverage in complex urban environments and assistance in optimizing human-scale greenery and microclimate.
Funding: Supported by the National Key Basic Research and Development Program of China (973 Program, Grant No. 2011CB403303), the China National Funds for Distinguished Young Scientists (Grant No. 51125034), and the National Natural Science Foundation of China (Grant Nos. 50909036 and 50879019).
Abstract: To improve the analysis methods for measuring sediment particles with a wide size distribution and irregular shapes, a sediment particle image measurement and analysis system is proposed, together with an algorithm that extracts the optimal threshold from the gray histogram peak values. Recording the pixels of the sediment particles by labeling them, the algorithm can effectively separate the sediment particle images from the background, representing each sediment particle by an equivalent pixel circle of the same diameter. Compared with a laser analyzer for the case of blue plastic sands, the measurement results of the system are reasonably similar. The errors are mainly due to the small size of the particles and the limitations of the apparatus. The measurement accuracy can be improved by increasing the resolution of the charge-coupled device (CCD) camera. This analysis method for sediment particle images can provide technical support for the rapid measurement of sediment particle size and its distribution.
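The two core steps, picking a threshold between the histogram peaks and converting each labeled blob to an equivalent circle, can be sketched as follows. This is a hypothetical reimplementation from the description, not the authors' code; the valley search between the two main peaks is one simple reading of "optimal threshold based on the gray histogram peak values":

```python
import numpy as np

def valley_threshold(gray, nbins=64):
    # Find the two dominant histogram peaks, then threshold at the
    # deepest valley between them.
    counts, edges = np.histogram(gray.ravel(), bins=nbins)
    p1 = int(np.argmax(counts))
    masked = counts.copy()
    masked[max(0, p1 - 4):min(nbins, p1 + 5)] = 0   # suppress the first peak
    p2 = int(np.argmax(masked))                     # second-highest peak
    a, b = sorted((p1, p2))
    valley = a + int(np.argmin(counts[a:b + 1]))
    return (edges[valley] + edges[valley + 1]) / 2

def equivalent_diameters(binary):
    # Label 4-connected foreground blobs by flood fill, then report the
    # diameter of the circle with the same pixel area for each blob.
    binary = np.asarray(binary, dtype=bool)
    seen = np.zeros_like(binary)
    diams = []
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            if binary[i, j] and not seen[i, j]:
                stack, area = [(i, j)], 0
                seen[i, j] = True
                while stack:
                    r, c = stack.pop()
                    area += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < binary.shape[0] and 0 <= cc < binary.shape[1]
                                and binary[rr, cc] and not seen[rr, cc]):
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                diams.append(2.0 * np.sqrt(area / np.pi))
    return diams
```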