This paper aims to develop an automatic miscalibration detection and correction framework to maintain accurate calibration of LiDAR and camera for autonomous vehicles after sensor drift. First, a monitoring algorithm that can continuously detect miscalibration in each frame is designed, leveraging the rotational motion each individual sensor observes. Then, as sensor drift occurs, the projection constraints between visual feature points and LiDAR 3-D points are used to compute the scaled camera motion, which is further utilized to align the drifted LiDAR scan with the camera image. Finally, the proposed method is thoroughly compared with two representative approaches in online experiments with varying levels of random drift; the method is then further extended to an offline calibration experiment and demonstrated by comparison with two existing benchmark methods.
Thanks to their light weight, low power consumption, and low price, inertial measurement units (IMUs) have been widely used in civil and military applications such as autopilot, robotics, and tactical weapons. Calibration is an essential procedure before an IMU is put into use, and is generally used to estimate error parameters such as the bias, installation error, and scale factor of the IMU. Currently, manual one-by-one calibration is still the most commonly used approach, which is inefficient, time-consuming, and prone to mis-operation. To address this issue, this paper designs an automatic batch calibration method for a set of IMUs. The designed automatic calibration master controller can control the turntable and the data acquisition system at the same time, and each data acquisition front-end can acquire data from eight IMUs at once. Various experimental scenarios have been carried out to validate the proposed design, including multi-position tests, rate tests, and swaying tests. The results illustrate the reliability of each function module and the feasibility of automatic batch calibration. Compared with the traditional calibration method, the proposed design can reduce errors caused by manual calibration and greatly improve the efficiency of IMU calibration.
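The bias and scale-factor parameters mentioned above can be illustrated with the classic two-position (axis up/axis down) accelerometer test, a minimal sketch of the principle behind multi-position calibration; the function name and simulated readings are hypothetical and this is not the paper's batch procedure:

```python
import numpy as np

G = 9.80665  # standard gravity, m/s^2

def two_position_calibration(face_up, face_down):
    """Estimate accelerometer bias and scale factor for one axis from
    static readings taken with the axis pointing up (+g) and down (-g).
    Readings are in m/s^2; the gravity terms cancel in the bias and
    the biases cancel in the scale factor."""
    bias = (face_up + face_down) / 2.0
    scale = (face_up - face_down) / (2.0 * G)
    return bias, scale

# Simulated sensor with bias 0.05 m/s^2 and scale factor 1.02
true_bias, true_scale = 0.05, 1.02
up_reading = true_scale * G + true_bias
down_reading = true_scale * (-G) + true_bias

bias_est, scale_est = two_position_calibration(up_reading, down_reading)
```

A batch system like the one described would repeat this per axis and per IMU, with the turntable providing the reference orientations.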
As the requirement for non-radioactive measurement has increased in recent years, various energy calibration methods for portable X-ray fluorescence (XRF) spectrometers have been developed. In this paper, a sampling-based correction energy calibration is discussed. In this method, both the history information and the current state of the instrument are considered, and relatively high precision and reliability can be obtained.
Myocardial perfusion imaging (MPI), which uses single-photon emission computed tomography (SPECT), is a well-known estimating tool for medical diagnosis, employing the classification of images to show conditions in coronary artery disease (CAD). The automatic classification of SPECT images for different techniques has achieved near-optimal accuracy when using convolutional neural networks (CNNs). This paper uses a SPECT classification framework with three steps: 1) image denoising, 2) attenuation correction, and 3) image classification. Image denoising is done by a U-Net architecture that ensures effective image denoising. Attenuation correction is implemented by a convolutional neural network model that can remove the attenuation that affects the feature extraction process of classification. Finally, a novel multi-scale diluted convolution (MSDC) network is proposed. It merges the features extracted at different scales and makes the model learn the features more efficiently. Three scales of filters with size 3×3 are used to extract features. All three steps are compared with state-of-the-art methods. The proposed denoising architecture ensures a high-quality image with the highest peak signal-to-noise ratio (PSNR) value of 39.7. The proposed classification method is compared with five different CNN models, and it ensures better classification with an accuracy of 96%, precision of 87%, sensitivity of 87%, specificity of 89%, and F1-score of 87%. To demonstrate the importance of preprocessing, the classification model was also analyzed without denoising and attenuation correction.
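The PSNR metric reported above follows a standard formula; this is a generic sketch with synthetic images, not the paper's pipeline:

```python
import numpy as np

def psnr(reference, denoised, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    its estimate: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic example: an image corrupted with mild Gaussian noise
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
value = psnr(img, noisy)
```

Higher values indicate less residual noise; the 39.7 figure in the abstract would correspond to a very small mean squared error relative to the 255 intensity range.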
In visual measurement, high-precision camera calibration often employs circular targets. To address issues in mainstream methods, such as the eccentricity error introduced by using the circle's center for calibration, overfitting or local minima from full-parameter optimization, and calibration errors due to neglecting the center of distortion, a stepwise camera calibration method incorporating compensation for eccentricity error was proposed to enhance monocular camera calibration precision. Initially, a multi-image distortion correction method calculated the common center of distortion and the distortion coefficients, improving precision, stability, and efficiency compared with single-image distortion correction methods. Subsequently, the projection point of the circle's center was compared with the center of the contour's projection to iteratively correct the eccentricity error, leading to more precise and stable calibration. Finally, nonlinear optimization refined the calibration parameters to minimize reprojection error and boost precision. These processes achieved stepwise camera calibration with enhanced robustness. In addition, a module comparison experiment showed that both the eccentricity error compensation and the camera parameter optimization could improve calibration precision, with the latter having a greater impact; their combined use further improved precision and stability. Simulations and experiments confirmed that the proposed method achieved high precision, stability, and robustness, making it suitable for high-precision visual measurements.
In order to obtain more accurate precipitation data and better simulate precipitation on the Tibetan Plateau, the capability of 14 Coupled Model Intercomparison Project Phase 6 (CMIP6) models to simulate historical precipitation (1982-2014) on the Qinghai-Tibetan Plateau was evaluated in this study. Results indicate that all models overestimate precipitation, as shown by analysis of the Taylor index and temporal and spatial statistical parameters. To correct the overestimation, a fusion correction method combining Backpropagation Neural Network (BP) correction and Quantile Mapping (QM) correction, named the BQ method, was proposed. With this method, the historical precipitation of each model was corrected in space and time, respectively. The correction results were then compared in time, in space, and by analysis of variance (ANOVA) with those corrected by the BP and QM methods alone. Finally, the fusion correction results for each model were compared with the Climatic Research Unit (CRU) data for significance analysis to obtain the precipitation trends of each model. The results show that the IPSL-CM6A-LR model is relatively good at simulating historical precipitation on the Qinghai-Tibetan Plateau (R=0.7, RMSE=0.15) among the uncorrected data. In terms of time, the total precipitation corrected by the fusion method has the same interannual trend as, and the closest precipitation values to, the CRU data. In terms of space, the annual average precipitation corrected by the fusion method has the smallest difference from the CRU data, and the total historical annual average precipitation is not significantly different from the CRU data, which is better than BP and QM. Therefore, the correction effect of the fusion method on the historical precipitation of each model is better than that of the QM and BP methods. The precipitation in the central and northeastern parts of the plateau shows a significant increasing trend. The correlation coefficients between monthly precipitation and site-detected precipitation for all models after BQ correction exceed 0.8.
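The QM component of the BQ method is, in essence, empirical quantile mapping; a minimal sketch under the assumption of a simple multiplicative model bias (the function name and synthetic data are illustrative only, not the study's implementation):

```python
import numpy as np

def quantile_mapping(model_hist, obs_hist, model_new):
    """Empirical quantile mapping: map each new model value to the
    observed value at the same empirical quantile of the historical
    distributions."""
    quantiles = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, quantiles)   # model climatology
    obs_q = np.quantile(obs_hist, quantiles)       # observed climatology
    # Locate each new value's quantile in the model climatology,
    # then read off the observed value at that quantile.
    ranks = np.interp(model_new, model_q, quantiles)
    return np.interp(ranks, quantiles, obs_q)

# Synthetic precipitation: the model overestimates by 50%
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 2.0, size=5000)
model = obs * 1.5
corrected = quantile_mapping(model, obs, model)
```

After correction, the model distribution is pulled back onto the observed climatology, which is why QM removes systematic overestimation.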
Objective: This study aims to evaluate the efficacy and safety of a strip-shaped cymba conchae orthosis for the nonsurgical correction of complex auricular deformities. Methods: Clinical data were collected from 2020 to 2021 for 6 patients who underwent correction using a strip-shaped cymba conchae orthosis. The indications, corrective effects, and complications associated with use of the orthosis were analyzed. Results: There were four indications for treatment: cryptotia with helix adhesion; cryptotia with grade I microtia; cryptotia with excessive helix thickness; and auricular deformity beyond the treatment time window (≥6 months). Excellent corrective effects were observed in all 6 patients. Complications occurred in one patient, who recovered after symptomatic treatment. Conclusion: The use of a strip-shaped cymba conchae orthosis, alone or combined with a U-shaped helix orthosis, presents a feasible approach for correcting complex auricular deformities or deformities beyond the treatment time window in pediatric patients.
Despite the maturity of ensemble numerical weather prediction (NWP), the resulting forecasts are still, more often than not, under-dispersed. As such, forecast calibration tools have become popular. Among those tools, quantile regression (QR) is highly competitive in terms of both flexibility and predictive performance. Nevertheless, a long-standing problem of QR is quantile crossing, which greatly limits the interpretability of QR-calibrated forecasts. On this point, this study proposes a non-crossing quantile regression neural network (NCQRNN) for calibrating ensemble NWP forecasts into a set of reliable quantile forecasts without crossing. The overarching design principle of NCQRNN is to add, on top of the conventional QRNN structure, another hidden layer that imposes a non-decreasing mapping between the combined output from nodes of the last hidden layer and the nodes of the output layer, through a triangular weight matrix with positive entries. The empirical part of the work considers a solar irradiance case study, in which four years of ensemble irradiance forecasts at seven locations, issued by the European Centre for Medium-Range Weather Forecasts, are calibrated via NCQRNN, as well as via an eclectic mix of benchmarking models, ranging from the naïve climatology to state-of-the-art deep-learning and other non-crossing models. Formal and stringent forecast verification suggests that the forecasts post-processed via NCQRNN attain the maximum sharpness subject to calibration among all competitors. Furthermore, the proposed conception to resolve quantile crossing is remarkably simple yet general, and thus has broad applicability, as it can be integrated with many shallow- and deep-learning-based neural networks.
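The non-decreasing mapping idea can be sketched in a few lines: the lowest quantile is unconstrained and each subsequent quantile adds a softplus-positive increment. This is one simple way to guarantee non-crossing outputs, stated as an assumption here rather than the authors' exact triangular-matrix construction:

```python
import numpy as np

def non_crossing_output(hidden, raw_weights, raw_bias):
    """Map a hidden representation to K quantile outputs that cannot
    cross: the first column is the lowest quantile, and every later
    quantile adds a strictly positive softplus increment."""
    # hidden: (n, h); raw_weights: (h, K); raw_bias: (K,)
    z = hidden @ raw_weights + raw_bias          # unconstrained scores
    base = z[:, :1]                              # lowest quantile
    increments = np.log1p(np.exp(z[:, 1:]))      # softplus -> positive
    return np.concatenate([base, base + np.cumsum(increments, axis=1)], axis=1)

rng = np.random.default_rng(2)
h = rng.normal(size=(8, 5))    # batch of 8 hidden vectors
W = rng.normal(size=(5, 4))    # 4 target quantile levels
b = rng.normal(size=4)
q = non_crossing_output(h, W, b)
```

Because the increments are positive by construction, monotonicity across quantile levels holds for any input, so no post-hoc sorting of the forecasts is needed.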
The high-intensity heavy-ion accelerator facility (HIAF) is a scientific research facility complex composed of multiple cascaded accelerators of different types, which poses a scheduling problem for over a hundred devices distributed over a range of about 2 km. White Rabbit, a technology enhancing Gigabit Ethernet, has shown the capability of scheduling distributed timing devices but still faces the challenge of obtaining real-time synchronization calibration parameters with high precision. This study presents a calibration system based on a time-to-digital converter implemented on an ARM-based System-on-Chip (SoC). The system consists of four multi-sample delay lines, a bubble-proof encoder, an edge controller for managing data from different channels, and a highly effective calibration module that benefits from the SoC architecture. The performance was evaluated by measuring time intervals from 0 to 24,000 ps, with 120,000 data points per test, yielding an average RMS precision of 5.51 ps. The design presented in this study refines the calibration precision of the HIAF timing system. It eliminates the errors caused by manual calibration without loss of efficiency and provides data support for fault diagnosis. It can also be easily tailored or ported to other devices for specific applications, and provides more room for developing timing systems for particle accelerators, such as the White Rabbit network on HIAF.
Raman spectroscopy has found extensive use in monitoring and controlling cell culture processes. In this context, the prediction accuracy of Raman-based models is of paramount importance. However, models established with data from manually fed-batch cultures often exhibit poor performance in Raman-controlled cultures. Thus, there is a need for effective methods to rectify these models. The objective of this paper is to investigate the efficacy of the Kalman filter (KF) algorithm in correcting Raman-based models during cell culture. Initially, partial least squares (PLS) models for different components were constructed using data from manually fed-batch cultures, and the predictive performance of these models was compared. Subsequently, various correction methods, including the PLS-KF-KF method proposed in this study, were employed to refine the PLS models. Finally, a case study involving the automatic control of glucose concentration demonstrated the application of the optimal model correction method. The results indicated that the original PLS models performed differently in manually fed-batch cultures and Raman-controlled cultures. For glucose, the root mean square error of prediction (RMSEP) was 0.23 g·L^(-1) in the manually fed-batch culture and 0.40 g·L^(-1) in the Raman-controlled culture. With the implementation of model correction methods, there was a significant improvement in model performance within Raman-controlled cultures: the RMSEP for glucose from updating-PLS, KF-PLS, and PLS-KF-KF was 0.38, 0.36, and 0.17 g·L^(-1), respectively. Notably, the proposed PLS-KF-KF model correction method was found to be more effective and stable, playing a vital role in the automated nutrient feeding of cell cultures.
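The KF building block used by these correction methods can be illustrated with a scalar random-walk filter smoothing noisy concentration predictions; this is a hedged sketch of the generic Kalman update, not the paper's PLS-KF-KF scheme, and all parameters are hypothetical:

```python
import numpy as np

def kalman_correct(predictions, process_var=1e-3, meas_var=0.05):
    """Scalar Kalman filter treating successive model predictions as
    noisy measurements of a slowly varying concentration under a
    random-walk state model."""
    x, p = predictions[0], 1.0       # state estimate and its variance
    corrected = [x]
    for z in predictions[1:]:
        p += process_var             # predict step (random walk)
        k = p / (p + meas_var)       # Kalman gain
        x += k * (z - x)             # update with the new measurement
        p *= (1.0 - k)
        corrected.append(x)
    return np.array(corrected)

# Synthetic run: glucose consumed over time, raw predictions noisy
rng = np.random.default_rng(3)
true_conc = np.linspace(5.0, 1.0, 200)
noisy_pred = true_conc + rng.normal(0, 0.3, 200)
smoothed = kalman_correct(noisy_pred)
```

The filtered trace tracks the declining trend while suppressing measurement noise, which is the behavior a KF-based model correction relies on.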
A vacuum ultraviolet (VUV) spectrometer with a focal length of 1 m has been engineered specifically for observing edge impurity emissions in the Experimental Advanced Superconducting Tokamak (EAST). In this study, wavelength calibration of the VUV spectrometer is achieved using a zinc lamp. The grating angle and charge-coupled device (CCD) position are carefully calibrated for different wavelength positions. The wavelength calibration is crucial for improving the accuracy of impurity spectral data, and is required to identify more impurity spectral lines for impurity transport research. Impurity spectra of EAST plasmas have also been obtained in the wavelength range of 50-300 nm with relatively high spectral resolution. It is found that the impurity emissions in the edge region are still dominated by low-Z impurities such as carbon, oxygen, and nitrogen, despite the application of full-tungsten divertors on the EAST tokamak.
Scanning focused light with corrected aberrations holds great importance in high-precision optical systems. However, conventional optical systems, relying on additional dynamical correctors to eliminate scanning aberrations, inevitably result in undesired bulkiness and complexity. In this paper, we propose achieving adaptive aberration corrections coordinated with focus scanning by rotating only two cascaded transmissive metasurfaces. Each metasurface is carefully designed by searching for optimal phase-profile parameters of three coherently working phase functions, allowing flexible control of both the longitudinal and lateral focal position to scan on any custom-designed curved surface. As a proof of concept, we engineer and fabricate two all-silicon terahertz meta-devices capable of scanning the focal spot with adaptively corrected aberrations. Experimental results demonstrate that the first device dynamically scans the focal spot on a planar surface, achieving an average scanning aberration of 1.18% within the scanning range of ±30°. Meanwhile, the second meta-device scans two focal points on a planar surface and a conical surface with 2.5% and 4.6% scanning aberrations, respectively. Our work pioneers a pathway toward high-precision yet compact optical devices across various practical domains.
In a crowd density estimation dataset, the annotation of crowd locations is an extremely laborious task, and these annotations are not taken into account by the evaluation metrics. In this paper, we aim to reduce the annotation cost of crowd datasets and propose a crowd density estimation method based on weakly-supervised learning which, in the absence of crowd position supervision, directly estimates crowd counts by using the number of pedestrians in the image as the supervision information. For this purpose, we design a new training method that exploits the correlation between global and local image features through incremental learning. Specifically, we design a parent-child network (PC-Net) focusing on the global and local image respectively, and propose a linear feature calibration structure to train the PC-Net simultaneously: the child network learns feature transfer factors and feature bias weights, and uses them to linearly calibrate the features extracted by the parent network, improving the convergence of the network by exploiting local features hidden in the crowd images. In addition, we use the pyramid vision transformer as the backbone of the PC-Net to extract crowd features at different levels, and design a global-local feature loss function (L2), combined with a crowd counting loss (LC), to enhance the sensitivity of the network to crowd features during training, which effectively improves the accuracy of crowd density estimation. The experimental results show that the PC-Net significantly reduces the gap between fully-supervised and weakly-supervised crowd density estimation, and outperforms the comparison methods on five datasets: ShanghaiTech Part A, ShanghaiTech Part B, UCF_CC_50, UCF_QNRF, and JHU-CROWD++.
We present a class of preconditioners for the linear systems resulting from finite element or discontinuous Galerkin discretizations of advection-dominated problems. These preconditioners are designed to treat the case of geometrically localized stiffness, where the convergence rates of iterative methods are degraded in a localized subregion of the mesh. Slower convergence may be caused by a number of factors, including the mesh size, anisotropy, highly variable coefficients, and more challenging physics. The approach taken in this work is to correct well-known preconditioners such as block Jacobi and block incomplete LU (ILU) with an adaptive inner subregion iteration. The goal of these preconditioners is to reduce the number of costly global iterations by accelerating convergence in the stiff region through iterations on the less expensive reduced problem. The tolerance for the inner iteration is chosen adaptively to minimize subregion-local work while guaranteeing global convergence rates. We present analysis showing that the convergence of these preconditioners, even when combined with an adaptively selected tolerance, is independent of discretization parameters (e.g., the mesh size and diffusion coefficient) in the subregion. We demonstrate significant performance improvements over black-box preconditioners when applied to several model convection-diffusion problems. Finally, we present performance results for several variations of iterative subregion correction preconditioners applied to flow at Reynolds number 2.25×10^(6) over the NACA 0012 airfoil, as well as massively separated flow at 30° angle of attack.
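The block Jacobi baseline that the subregion iteration corrects can be sketched as follows; this is a generic illustration of applying the block-diagonal inverse to a residual, with the matrix, index blocks, and function name chosen for the example rather than taken from the paper:

```python
import numpy as np

def block_jacobi_apply(A, r, blocks):
    """Apply a block-Jacobi preconditioner z = M^{-1} r, where M is
    the block-diagonal part of A defined by the given index blocks."""
    z = np.zeros_like(r)
    for idx in blocks:
        ix = np.ix_(idx, idx)
        # Solve each diagonal block against its slice of the residual
        z[idx] = np.linalg.solve(A[ix], r[idx])
    return z

# Small 4x4 system split into two 2x2 blocks
A = np.array([[4.0, 1.0, 0.2, 0.0],
              [1.0, 3.0, 0.0, 0.1],
              [0.2, 0.0, 5.0, 1.0],
              [0.0, 0.1, 1.0, 4.0]])
r = np.array([1.0, 2.0, 3.0, 4.0])
blocks = [np.array([0, 1]), np.array([2, 3])]
z = block_jacobi_apply(A, r, blocks)
```

In the paper's setting, the correction would additionally iterate on the stiff-subregion blocks to a chosen inner tolerance before returning the preconditioned residual.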
Radon observation is an important measurement item in seismic precursor network observation, and radon detector calibration is a key technical link in ensuring radon observation accuracy. At present, radon detector calibration in seismic systems in China faces a series of bottleneck problems, such as the aging and scrapping of radon sources, acquisition difficulties, high supervision costs, and transportation limitations. As a result, a large number of radon detectors cannot be accurately calibrated regularly, seriously affecting the accuracy and reliability of radon observation data in China. To solve this problem, a new calibration method for radon detectors was established. The advantage of this method is that the dangerous radioactive substance, i.e., the radon source, can be avoided: only "standard instruments" and water samples with certain dissolved radon concentrations are needed to calibrate a radon detector. This method avoids the risk of radioactive leakage and resolves the current widespread difficulties and bottleneck of radon detector calibration in seismic systems in China. A comparison experiment with the traditional calibration method shows that the calibration coefficient obtained by the new method differs from that of the traditional method by less than 5%, which meets the requirements of seismic observation systems and confirms the reliability of the new method. This new method can completely replace the traditional calibration method using a radon source in seismic systems.
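The acceptance criterion above (coefficient deviation below 5%) amounts to a simple relative-deviation check; a hedged sketch with hypothetical concentration values, not the study's actual readings:

```python
def calibration_coefficient(reference_concentration, instrument_reading):
    """Calibration coefficient: ratio of the known radon concentration
    of the sample (from the standard instrument) to the reading of the
    detector under calibration."""
    return reference_concentration / instrument_reading

def within_tolerance(coeff_new, coeff_traditional, tol=0.05):
    """Acceptance check: relative deviation between coefficients from
    the new and traditional methods must stay below tol (5%)."""
    return abs(coeff_new - coeff_traditional) / coeff_traditional < tol

# Hypothetical Bq/m^3 values for the two calibration routes
k_new = calibration_coefficient(1200.0, 1150.0)
k_old = calibration_coefficient(1200.0, 1180.0)
ok = within_tolerance(k_new, k_old)
```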
This study presents a kinematic calibration method for an exoskeletal inertial motion capture (EI-MoCap) system that considers random colored noise such as gyroscopic drift. In this method, the geometric parameters are first calibrated by the traditional calibration method. Then, to calibrate the parameters affected by random colored noise, the expectation maximization (EM) algorithm is introduced. By using the geometric parameters calibrated by the traditional method, the number of iterations under the EM framework is decreased and the efficiency of the proposed method on an embedded system is improved. The performance of the proposed kinematic calibration method is compared with that of the traditional calibration method, and the feasibility of the proposed method is verified on the EI-MoCap system. Simulation and experiment demonstrate that the motion capture precision is significantly improved, by 16.79% and 7.16% respectively, in comparison with the traditional calibration method.
Dispersion fuels, known for their excellent safety performance, are widely used in advanced reactors such as high-temperature gas-cooled reactors. Compared with deterministic methods, the Monte Carlo method has more advantages in the geometric modeling of stochastic media. Explicit modeling has high computational accuracy but high computational cost. The chord length sampling (CLS) method can improve computational efficiency by sampling the chord length during neutron transport using the matrix chord length's probability density function. This study shows that the excluded-volume effect in realistic stochastic media can introduce certain deviations into CLS. A chord length correction approach is proposed to obtain the chord length correction factor by developing the Particle code based on equivalent transmission probability. Numerical analysis against reference solutions from explicit modeling in the RMC code demonstrates that CLS with the proposed correction method provides good accuracy in addressing the excluded-volume effect in realistic infinite stochastic media.
The global diabetes surge poses a critical public health challenge, emphasizing the need for effective glycemic control. However, rapid correction of chronic hyperglycemia can unexpectedly trigger microvascular complications, necessitating a reevaluation of the speed and intensity of glycemic correction. Theories suggest that swift blood sugar reductions may cause inflammation, oxidative stress, and neurovascular changes, resulting in complications. Healthcare providers should approach aggressive glycemic control cautiously, especially in long-standing, poorly controlled diabetes. Preventing and managing these complications requires a personalized, comprehensive approach with education, monitoring, and interdisciplinary care. Diabetes management must balance short- and long-term goals, prioritizing overall well-being. This editorial underscores the need for a personalized, nuanced approach, focusing on equilibrium between glycemic control and avoiding overcorrection.
Global efforts for environmental cleanliness through the control of gaseous emissions from vehicles are gaining momentum and attracting increasing attention. Calibration plays a crucial role in these efforts by ensuring the quantitative assessment of emissions for informed decisions on environmental treatments. This paper describes a method for the calibration of CO/CO₂ monitors used for periodic inspections of vehicles in cities. The calibration was performed in the selected ranges of 900-12,000 µmol/mol for CO and 2000-20,000 µmol/mol for CO₂. Traceability of the measurement results to the SI units was ensured by using certified reference materials from CO/N₂ and CO₂/N₂ primary gas mixtures. The method's performance was evaluated by assessing the linearity, accuracy, precision, bias, and uncertainty of the calibration results. The calibration data exhibited a strong linear trend with R² values close to 1, indicating an excellent fit between the measured values and the calibration lines. Precision, expressed as relative standard deviation (%RSD), ranged from 0.48% to 4.56% for CO and from 0.97% to 3.53% for CO₂, staying well below the 5% threshold for reporting results at a 95% confidence level. Accuracy, measured as percent recovery, was consistently high (≥99.1%) for CO and ranged from 84.90% to 101.54% across the calibration range for CO₂. In addition, the method exhibited minimal bias for both CO and CO₂ calibrations, providing a reliable and accurate approach for calibrating CO/CO₂ monitors used in vehicle inspections and ensuring the effectiveness of exhaust emission control for a better environment.
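The linearity, precision, and recovery figures quoted above follow from standard formulas; a sketch with hypothetical replicate data, not the paper's measurements:

```python
import numpy as np

def calibration_metrics(assigned, measured):
    """Linearity (R^2 of the least-squares calibration line), precision
    (%RSD at each level), and accuracy (% recovery) from assigned
    reference values and replicate measurements (one row per level)."""
    means = measured.mean(axis=1)
    rsd = 100.0 * measured.std(axis=1, ddof=1) / means     # %RSD
    recovery = 100.0 * means / assigned                    # % recovery
    slope, intercept = np.polyfit(assigned, means, 1)      # calibration line
    fitted = slope * assigned + intercept
    ss_res = np.sum((means - fitted) ** 2)
    ss_tot = np.sum((means - means.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    return r_squared, rsd, recovery

# Hypothetical CO calibration: 4 levels (µmol/mol), 3 replicates each
assigned = np.array([900.0, 3000.0, 6000.0, 12000.0])
measured = np.array([[905, 898, 902],
                     [2990, 3010, 3005],
                     [5980, 6020, 6010],
                     [11950, 12060, 12010]], dtype=float)
r2, rsd, rec = calibration_metrics(assigned, measured)
```

An R² close to 1, %RSD below 5%, and recovery near 100% would correspond to the acceptance behavior the paper reports.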
This study explores the application of single-photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing. Furthermore, it compares the error correction performance of low-density parity check (LDPC) and Reed-Solomon (RS) codes across different error correction coding types. The effects of the unscattered photon ratio and depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization-multiplexed OOK modulation and 4.37 Mbps with polarization-multiplexed 2-PPM modulation using LDPC error correction.
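The OOK/BER relationship for a photon-counting receiver can be illustrated with a Monte-Carlo simulation under Poisson statistics; all rates and the threshold rule here are assumptions for illustration, not the system's actual parameters:

```python
import numpy as np

def simulate_ook_ber(n_bits=100_000, mean_signal=30.0, mean_dark=2.0, seed=0):
    """Monte-Carlo BER of on-off keying with a photon-counting
    receiver: '1' bits yield Poisson(signal + dark) counts, '0' bits
    Poisson(dark); decide '1' when the count exceeds a threshold."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    rates = np.where(bits == 1, mean_signal + mean_dark, mean_dark)
    counts = rng.poisson(rates)
    threshold = (mean_signal + 2 * mean_dark) / 2   # midpoint decision rule
    decided = (counts > threshold).astype(int)
    return np.mean(decided != bits)

ber = simulate_ook_ber()
```

Lower signal photon rates (e.g., from scattering or depolarization losses) widen the overlap between the two count distributions and raise the BER, which is the effect the study quantifies.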
Funding (LiDAR-camera miscalibration correction paper): Supported by the National Natural Science Foundation of China (Grant Nos. 52025121, 52394263) and the National Key R&D Plan of China (Grant No. 2023YFD2000301).
Funding: This work was supported by the National Natural Science Foundation of China (No. 61803203).
Abstract: Thanks to their light weight, low power consumption, and low price, inertial measurement units (IMUs) have been widely used in civil and military applications such as autopilot, robotics, and tactical weapons. Calibration is an essential procedure before an IMU is put into use; it is generally used to estimate the IMU's error parameters, such as the bias, installation error, and scale factor. Currently, manual one-by-one calibration is still the most common practice, which is inefficient, time-consuming, and prone to mis-operation. To address this issue, this paper designs an automatic batch calibration method for a set of IMUs. The designed automatic calibration master controller can control the turntable and the data acquisition system at the same time, and each data acquisition front-end can acquire data from eight IMUs at a time. Various experimental tests have been carried out to validate the proposed design, such as multi-position tests, rate tests, and swaying tests. The results illustrate the reliability of each function module and the feasibility of automatic batch calibration. Compared with the traditional calibration method, the proposed design can reduce errors caused by manual calibration and greatly improve the efficiency of IMU calibration.
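The multi-position tests mentioned above typically expose each sensitive axis to known ±g references. A toy sketch of how bias and scale factor fall out of a two-position (axis up / axis down) accelerometer test; the measurement model and gravity constant are standard textbook assumptions, not details taken from the paper:

```python
G = 9.80665  # assumed local gravity, m/s^2

def calibrate_axis(f_up, f_down):
    """Estimate one axis's accelerometer bias and scale factor from a
    two-position static test, using the measurement model
        f_up   = scale * (+G) + bias
        f_down = scale * (-G) + bias
    Adding the equations isolates the bias; subtracting isolates the scale.
    """
    bias = (f_up + f_down) / 2.0
    scale = (f_up - f_down) / (2.0 * G)
    return bias, scale
```

A full multi-position calibration extends the same idea to all axes and to the installation (misalignment) errors via least squares.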
Abstract: As the requirement for non-radioactive measurement has increased in recent years, various energy calibration methods for portable X-ray fluorescence (XRF) spectrometers have been developed. In this paper, a sampling-based correction energy calibration is discussed. This method considers both the history information and the current state of the instrument, and relatively high precision and reliability can be obtained.
Funding: Supported by the Research Grant of Kwangwoon University in 2024.
Abstract: Myocardial perfusion imaging (MPI), which uses single-photon emission computed tomography (SPECT), is a well-known tool for medical diagnosis, employing image classification to reveal conditions in coronary artery disease (CAD). Automatic classification of SPECT images has achieved near-optimal accuracy with convolutional neural networks (CNNs). This paper uses a SPECT classification framework with three steps: 1) image denoising, 2) attenuation correction, and 3) image classification. Image denoising is done by a U-Net architecture that ensures effective denoising. Attenuation correction is implemented by a convolutional neural network model that can remove the attenuation affecting the feature extraction process of classification. Finally, a novel multi-scale dilated convolution (MSDC) network is proposed. It merges features extracted at different scales and makes the model learn the features more efficiently; three scales of 3×3 filters are used to extract features. All three steps are compared with state-of-the-art methods. The proposed denoising architecture ensures a high-quality image with the highest peak signal-to-noise ratio (PSNR) value of 39.7. The proposed classification method is compared with five different CNN models, and it ensures better classification with an accuracy of 96%, precision of 87%, sensitivity of 87%, specificity of 89%, and F1-score of 87%. To demonstrate the importance of preprocessing, the classification model was also analyzed without denoising and attenuation correction.
Abstract: In visual measurement, high-precision camera calibration often employs circular targets. To address issues in mainstream methods, such as the eccentricity error introduced by using the circle's center for calibration, overfitting or local minima from full-parameter optimization, and calibration errors due to neglecting the center of distortion, a stepwise camera calibration method incorporating compensation for eccentricity error was proposed to enhance monocular camera calibration precision. Initially, a multi-image distortion correction method calculated the common center of distortion and the distortion coefficients, improving precision, stability, and efficiency compared with single-image distortion correction methods. Subsequently, the projection point of the circle's center was compared with the center of the contour's projection to iteratively correct the eccentricity error, leading to more precise and stable calibration. Finally, nonlinear optimization refined the calibration parameters to minimize reprojection error and boost precision. These processes achieved stepwise camera calibration, which enhanced robustness. In addition, a module comparison experiment showed that both the eccentricity error compensation and the camera parameter optimization could improve calibration precision, with the latter having the greater impact; the combined use of the two methods further improved precision and stability. Simulations and experiments confirmed that the proposed method achieves high precision, stability, and robustness, making it suitable for high-precision visual measurements.
Abstract: In order to obtain more accurate precipitation data and better simulate precipitation on the Tibetan Plateau, the ability of 14 Coupled Model Intercomparison Project Phase 6 (CMIP6) models to simulate historical precipitation (1982-2014) on the Qinghai-Tibetan Plateau was evaluated in this study. Analysis of the Taylor index and of temporal and spatial statistical parameters indicates that all models overestimate precipitation. To correct the overestimation, a fusion correction method combining Backpropagation Neural Network (BP) correction and Quantile Mapping (QM) correction, named the BQ method, was proposed. With this method, the historical precipitation of each model was corrected in space and time, respectively. The correction results were then compared in time, in space, and by analysis of variance (ANOVA) with those corrected by the BP and QM methods individually. Finally, the fusion-corrected results for each model were compared with Climatic Research Unit (CRU) data for significance analysis to obtain the precipitation trends of each model. The results show that, among the uncorrected data, the IPSL-CM6A-LR model simulates historical precipitation on the Qinghai-Tibetan Plateau relatively well (R=0.7, RMSE=0.15). In time, the total precipitation corrected by the fusion method has the same interannual trend as, and the closest precipitation values to, the CRU data; in space, the annual average precipitation corrected by the fusion method differs least from the CRU data, and the total historical annual average precipitation is not significantly different from the CRU data, outperforming BP and QM. Therefore, the fusion method corrects the historical precipitation of each model better than the QM and BP methods do. Precipitation in the central and northeastern parts of the plateau shows a significant increasing trend. The correlation coefficients between monthly precipitation and site-detected precipitation for all models after BQ correction exceed 0.8.
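The QM half of the BQ method maps each model value onto the observed climatology at the same empirical quantile. A minimal sketch of empirical quantile mapping; the nearest-order-statistic lookup below is an illustrative simplification of the interpolation a production implementation would use:

```python
import bisect

def quantile_map(model_hist, obs_hist, x):
    """Empirical quantile mapping: find the quantile of x within the
    model's historical distribution, then return the observed value at
    that same quantile. Corrects systematic model bias distribution-wide."""
    m, o = sorted(model_hist), sorted(obs_hist)
    q = bisect.bisect_left(m, x) / len(m)    # empirical CDF of x under the model
    idx = min(int(q * len(o)), len(o) - 1)   # matching observed order statistic
    return o[idx]
```

With a model climatology uniformly biased +2 against observations, the mapping removes the offset exactly.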
Abstract: Objective: This study aims to evaluate the efficacy and safety of using a strip-shaped cymba conchae orthosis for the nonsurgical correction of complex auricular deformities. Methods: Clinical data were collected from 2020 to 2021 for 6 patients who underwent correction using a strip-shaped cymba conchae orthosis. The indications, corrective effects, and complications associated with use of the orthosis were analyzed. Results: There were four indications for treatment: cryptotia with helix adhesion; cryptotia with grade I microtia; cryptotia with excessive helix thickness; and auricular deformity beyond the treatment time window (≥6 months). Excellent corrective effects were observed in all 6 patients. Complications occurred in one patient, who recovered after symptomatic treatment. Conclusion: The use of a strip-shaped cymba conchae orthosis, alone or combined with a U-shaped helix orthosis, presents a feasible approach for correcting complex auricular deformities or deformities beyond the treatment time window in pediatric patients.
Funding: Supported by the National Natural Science Foundation of China (Project No. 42375192), the China Meteorological Administration Climate Change Special Program (CMA-CCSP, Project No. QBZ202315), and the Vector Stiftung through the Young Investigator Group "Artificial Intelligence for Probabilistic Weather Forecasting."
Abstract: Despite the maturity of ensemble numerical weather prediction (NWP), the resulting forecasts are still, more often than not, under-dispersed. As such, forecast calibration tools have become popular. Among those tools, quantile regression (QR) is highly competitive in terms of both flexibility and predictive performance. Nevertheless, a long-standing problem of QR is quantile crossing, which greatly limits the interpretability of QR-calibrated forecasts. On this point, this study proposes a non-crossing quantile regression neural network (NCQRNN) for calibrating ensemble NWP forecasts into a set of reliable quantile forecasts without crossing. The overarching design principle of NCQRNN is to add, on top of the conventional QRNN structure, another hidden layer that imposes a non-decreasing mapping from the combined output of the last hidden layer's nodes to the nodes of the output layer, through a triangular weight matrix with positive entries. The empirical part of the work considers a solar irradiance case study, in which four years of ensemble irradiance forecasts at seven locations, issued by the European Centre for Medium-Range Weather Forecasts, are calibrated via NCQRNN as well as via an eclectic mix of benchmarking models, ranging from naïve climatology to state-of-the-art deep-learning and other non-crossing models. Formal and stringent forecast verification suggests that the forecasts post-processed via NCQRNN attain the maximum sharpness subject to calibration among all competitors. Furthermore, the proposed conception for resolving quantile crossing is remarkably simple yet general, and thus has broad applicability, as it can be integrated with many shallow- and deep-learning-based neural networks.
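The non-decreasing output mapping described above, i.e., multiplying positive pre-activations by a lower-triangular matrix of ones, is equivalent to a cumulative sum of strictly positive increments. A minimal sketch of that mechanism; the softplus positivity transform is an assumption for illustration, not necessarily the paper's exact activation:

```python
import math

def softplus(x):
    # Smooth, strictly positive transform: log(1 + e^x) > 0 for all x.
    return math.log1p(math.exp(x))

def non_crossing_quantiles(z):
    """Map unconstrained head outputs z (one per quantile level) to a
    non-decreasing quantile vector: the first entry is free, and each
    subsequent entry adds a strictly positive increment -- the same
    effect as a lower-triangular ones matrix applied to positive entries."""
    q = [z[0]]
    for zi in z[1:]:
        q.append(q[-1] + softplus(zi))
    return q
```

Because the increments are positive by construction, the output quantiles can never cross, regardless of what the network's earlier layers produce.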
Funding: Supported by the high-intensity heavy-ion accelerator facility (HIAF) approved by the National Development and Reform Commission of China (2017-000052-73-01-002107).
Abstract: The high-intensity heavy-ion accelerator facility (HIAF) is a scientific research complex composed of multiple cascaded accelerators of different types, which poses a scheduling problem for over a hundred devices distributed across a range of 2 km. White Rabbit, a technology enhancing Gigabit Ethernet, has shown the capability of scheduling distributed timing devices but still faces the challenge of obtaining real-time synchronization calibration parameters with high precision. This study presents a calibration system based on a time-to-digital converter implemented on an ARM-based System-on-Chip (SoC). The system consists of four multi-sample delay lines, a bubble-proof encoder, an edge controller for managing data from different channels, and a highly effective calibration module that benefits from the SoC architecture. The performance was evaluated with an average RMS precision of 5.51 ps, measured over time intervals from 0 to 24,000 ps with 120,000 data points per test. The design presented in this study refines the calibration precision of the HIAF timing system. It eliminates the errors caused by manual calibration without loss of efficiency and provides data support for fault diagnosis. It can also be easily tailored or ported to other devices for specific applications, and it leaves more room for developing timing systems for particle accelerators, such as White Rabbit on HIAF.
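Delay-line TDCs like the one above are commonly calibrated with a code-density (statistical) test: under uniformly random input events, each tap's hit count is proportional to its bin width. A minimal sketch of that standard procedure, not the paper's specific calibration module:

```python
def code_density_calibration(hist, full_range_ps):
    """Code-density calibration of a delay-line TDC.

    hist[i] is the number of uniformly random events that landed in
    code i. Each code's bin width is proportional to its hit count;
    the returned list gives each code's bin center in picoseconds."""
    total = sum(hist)
    widths = [full_range_ps * h / total for h in hist]
    centers, edge = [], 0.0
    for w in widths:
        centers.append(edge + w / 2.0)  # midpoint of this code's bin
        edge += w                        # advance to the next bin edge
    return centers
```

The resulting bin centers form a lookup table that linearizes the raw codes without any manual adjustment.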
Funding: Supported by the Key Research and Development Program of Zhejiang Province, China (2023C03116).
Abstract: Raman spectroscopy has found extensive use in monitoring and controlling cell culture processes. In this context, the prediction accuracy of Raman-based models is of paramount importance. However, models established with data from manually fed-batch cultures often exhibit poor performance in Raman-controlled cultures, so effective methods to correct these models are needed. The objective of this paper is to investigate the efficacy of the Kalman filter (KF) algorithm in correcting Raman-based models during cell culture. Initially, partial least squares (PLS) models for different components were constructed using data from manually fed-batch cultures, and the predictive performance of these models was compared. Subsequently, various correction methods, including the PLS-KF-KF method proposed in this study, were employed to refine the PLS models. Finally, a case study involving the automatic control of glucose concentration demonstrated the application of the optimal model correction method. The results indicated that the original PLS models performed differently in manually fed-batch and Raman-controlled cultures: for glucose, the root mean square error of prediction (RMSEP) was 0.23 g·L^(-1) in the manually fed-batch culture and 0.40 g·L^(-1) in the Raman-controlled culture. With the implementation of model correction methods, model performance within Raman-controlled cultures improved significantly; the RMSEP for glucose from updating-PLS, KF-PLS, and PLS-KF-KF was 0.38, 0.36, and 0.17 g·L^(-1), respectively. Notably, the proposed PLS-KF-KF model correction method was found to be more effective and stable, playing a vital role in the automated nutrient feeding of cell cultures.
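The KF correction idea, fusing a drifting online prediction stream with occasional reference measurements, can be sketched with a one-dimensional Kalman filter; the noise variances and initial state below are illustrative assumptions, not the paper's tuning:

```python
class ScalarKF:
    """One-dimensional Kalman filter, sketched as a way to correct a
    drifting online concentration estimate with occasional offline
    reference measurements (e.g., from a biochemical analyzer)."""

    def __init__(self, x0, p0, q, r):
        # x: state estimate, p: its variance,
        # q: process-noise variance, r: measurement-noise variance
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def predict(self):
        # Random-walk process model: the estimate keeps its value,
        # but uncertainty grows between measurements.
        self.p += self.q
        return self.x

    def update(self, z):
        # Blend the prediction with measurement z via the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Each offline reference pulls the online estimate toward the true value in proportion to the relative confidence in model versus measurement.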
Funding: Partially supported by the National Natural Science Foundation of China (Nos. U23A2077, 12175278, 12205072), the National Magnetic Confinement Fusion Science Program of China (Nos. 2019YFE0304002, 2018YFE0303103), the Comprehensive Research Facility for Fusion Technology Program of China (No. 2018-000052-73-01-001228), the Major Science and Technology Infrastructure Maintenance and Reconstruction Projects of the Chinese Academy of Sciences (2021), and the University Synergy Innovation Program of Anhui Province (No. GXXT2021-029).
Abstract: A vacuum ultraviolet (VUV) spectrometer with a focal length of 1 m has been engineered specifically for observing edge impurity emissions in the Experimental Advanced Superconducting Tokamak (EAST). In this study, wavelength calibration of the VUV spectrometer is achieved using a zinc lamp. The grating angle and charge-coupled device (CCD) position are carefully calibrated for different wavelength positions. This wavelength calibration is crucial for improving the accuracy of impurity spectral data and is required to identify more impurity spectral lines for impurity transport research. Impurity spectra of EAST plasmas have also been obtained in the wavelength range of 50-300 nm with relatively high spectral resolution. It is found that the impurity emissions in the edge region are still dominated by low-Z impurities such as carbon, oxygen, and nitrogen, albeit with the application of full-tungsten divertors on the EAST tokamak.
Funding: Supported by the National Natural Science Foundation of China (62175141), the Ministry of Science and Technology (2022YFA1404704), the China Scholarship Council (202306890039), the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2022R1A6A1A03052954), and the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-01906, Artificial Intelligence Graduate School Program (POSTECH)).
Abstract: Scanning focused light with corrected aberrations holds great importance in high-precision optical systems. However, conventional optical systems rely on additional dynamic correctors to eliminate scanning aberrations, inevitably resulting in undesired bulkiness and complexity. In this paper, we propose achieving adaptive aberration corrections coordinated with focus scanning by rotating only two cascaded transmissive metasurfaces. Each metasurface is carefully designed by searching for optimal phase-profile parameters of three coherently working phase functions, allowing flexible control of both the longitudinal and lateral focal position to scan any custom-designed curved surface. As proof of concept, we engineer and fabricate two all-silicon terahertz meta-devices capable of scanning the focal spot with adaptively corrected aberrations. Experimental results demonstrate that the first one dynamically scans the focal spot on a planar surface, achieving an average scanning aberration of 1.18% within the scanning range of ±30°. Meanwhile, the second meta-device scans two focal points on a planar surface and a conical surface with 2.5% and 4.6% scanning aberrations, respectively. Our work pioneers a pathway toward high-precision yet compact optical devices across various practical domains.
Funding: Supported by the Humanities and Social Science Fund of the Ministry of Education of China (21YJAZH077).
Abstract: In crowd density estimation datasets, annotating crowd locations is an extremely laborious task, and the location annotations are not taken into account by the evaluation metrics. In this paper, we aim to reduce the annotation cost of crowd datasets and propose a crowd density estimation method based on weakly-supervised learning that, in the absence of crowd position supervision, directly regresses the crowd count by using the number of pedestrians in the image as the supervision signal. For this purpose, we design a new training method that exploits the correlation between global and local image features via incremental learning. Specifically, we design a parent-child network (PC-Net) focusing on the global and local image respectively, and propose a linear feature calibration structure to train the PC-Net jointly: the child network learns feature transfer factors and feature bias weights and uses them to linearly calibrate the features extracted by the parent network, improving the convergence of the network by exploiting local features hidden in the crowd images. In addition, we use the pyramid vision transformer as the backbone of the PC-Net to extract crowd features at different levels, and design a global-local feature loss function (L2), which we combine with a crowd counting loss (LC) to enhance the network's sensitivity to crowd features during training, effectively improving the accuracy of crowd density estimation. The experimental results show that the PC-Net significantly reduces the gap between fully-supervised and weakly-supervised crowd density estimation, and outperforms the comparison methods on five datasets: ShanghaiTech Part A, ShanghaiTech Part B, UCF_CC_50, UCF_QNRF, and JHU-CROWD++.
Abstract: We present a class of preconditioners for the linear systems resulting from finite element or discontinuous Galerkin discretizations of advection-dominated problems. These preconditioners are designed to treat the case of geometrically localized stiffness, where the convergence rates of iterative methods are degraded in a localized subregion of the mesh. Slower convergence may be caused by a number of factors, including the mesh size, anisotropy, highly variable coefficients, and more challenging physics. The approach taken in this work is to correct well-known preconditioners such as block Jacobi and block incomplete LU (ILU) with an adaptive inner subregion iteration. The goal of these preconditioners is to reduce the number of costly global iterations by accelerating convergence in the stiff region through iterations on the less expensive reduced problem. The tolerance for the inner iteration is chosen adaptively to minimize subregion-local work while guaranteeing global convergence rates. We present analysis showing that the convergence of these preconditioners, even when combined with an adaptively selected tolerance, is independent of discretization parameters (e.g., the mesh size and diffusion coefficient) in the subregion. We demonstrate significant performance improvements over black-box preconditioners when applied to several model convection-diffusion problems. Finally, we present performance results for several variations of iterative subregion correction preconditioners applied to flow at Reynolds number 2.25×10^6 over the NACA 0012 airfoil, as well as massively separated flow at 30° angle of attack.
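The subregion correction concept can be sketched as a preconditioner application in which the well-conditioned region gets a cheap Jacobi sweep while the stiff subregion's subblock receives a stronger inner solve. In this sketch the subblock is solved exactly, standing in for the paper's adaptive-tolerance inner iteration; the matrices and index sets are illustrative:

```python
import numpy as np

def subregion_preconditioner(A, r, stiff):
    """Apply a corrected block-Jacobi preconditioner z ~= A^{-1} r.

    The globally well-conditioned part is handled by a diagonal
    (point-Jacobi) sweep; the stiff subregion's subblock is then
    solved more accurately. A minimal sketch, not the paper's
    implementation."""
    z = r / np.diag(A)                  # cheap global diagonal sweep
    s = np.asarray(stiff)
    A_ss = A[np.ix_(s, s)]              # stiff-subregion subblock
    z[s] = np.linalg.solve(A_ss, r[s])  # inner subregion correction
    return z
```

For a matrix whose stiff block is decoupled from the rest, this application already reproduces the exact solve on both parts; in general it serves as the preconditioner inside an outer Krylov iteration.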
基金supported by the National Natural Science Foundation of China Study on the Key Technology of Non-radium Source Radon Chamber(No.42274235).
Abstract: Radon observation is an important measurement item of seismic precursor network observation, and radon detector calibration is a key technical link in ensuring radon observation accuracy. At present, radon detector calibration in seismic systems in China faces a series of bottleneck problems, such as the aging and scrapping of radon sources, acquisition difficulties, high supervision costs, and transportation limitations. As a result, a large number of radon detectors cannot be accurately calibrated regularly, seriously affecting the accuracy and reliability of radon observation data in China. To solve this problem, a new calibration method for radon detectors was established. The advantage of this method is that the dangerous radioactive substance, i.e., the radon source, can be avoided: only "standard instruments" and water samples with certain dissolved radon concentrations are needed to calibrate a radon detector. This method avoids the risk of radioactive leakage and resolves the widespread difficulties and bottlenecks of radon detector calibration in seismic systems in China. A comparison experiment with the traditional calibration method shows that the calibration coefficient obtained by the new method differs from that of the traditional method by less than 5%, which meets the requirements of seismic observation systems and confirms the reliability of the new method. This new method can completely replace the traditional radon-source-based calibration method in seismic systems.
基金supported by the National Natural Science Foundation of China (61503392)。
Abstract: This study presents a kinematic calibration method for an exoskeletal inertial motion capture (EI-MoCap) system that considers random colored noise such as gyroscopic drift. In this method, the geometric parameters are first calibrated by the traditional calibration method. Then, to calibrate the parameters affected by the random colored noise, the expectation maximization (EM) algorithm is introduced. By using the geometric parameters calibrated by the traditional method, the number of iterations under the EM framework is decreased and the efficiency of the proposed method on an embedded system is improved. The performance of the proposed kinematic calibration method is compared with the traditional calibration method, and the feasibility of the proposed method is verified on the EI-MoCap system. The simulation and experiment demonstrate that the motion capture precision is significantly improved, by 16.79% and 7.16% respectively, in comparison with the traditional calibration method.
Abstract: Dispersion fuels, known for their excellent safety performance, are widely used in advanced reactors, such as high-temperature gas-cooled reactors. Compared with deterministic methods, the Monte Carlo method has more advantages in the geometric modeling of stochastic media. Explicit modeling offers high computational accuracy but at high computational cost. The chord length sampling (CLS) method can improve computational efficiency by sampling the chord length during neutron transport using the matrix chord length's probability density function. This study shows that the excluded-volume effect in realistic stochastic media can introduce certain deviations into the CLS. A chord length correction approach is proposed to obtain the chord length correction factor by developing the Particle code based on equivalent transmission probability. Through numerical analysis against reference solutions from explicit modeling in the RMC code, it was demonstrated that CLS with the proposed correction method provides good accuracy in addressing the excluded-volume effect in realistic infinite stochastic media.
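As a rough illustration of CLS, matrix chord lengths in a binary stochastic medium are commonly modeled as exponentially distributed, and a scalar factor rescaling the mean chord length is one simple way to picture the excluded-volume adjustment. The exponential model and the scalar correction factor are illustrative assumptions, not the Particle code's actual method:

```python
import math
import random

def sample_chord_length(mean_chord, correction=1.0):
    """Draw one matrix chord length via inverse-CDF sampling of an
    exponential distribution. `correction` rescales the mean chord
    length, sketching how an excluded-volume correction factor would
    enter the sampling step."""
    # 1 - random() lies in (0, 1], so the log argument is never zero.
    return -correction * mean_chord * math.log(1.0 - random.random())
```

Averaged over many neutron flights, the sampled chords reproduce the corrected mean free path through the matrix.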
Abstract: The global diabetes surge poses a critical public health challenge, emphasizing the need for effective glycemic control. However, rapid correction of chronic hyperglycemia can unexpectedly trigger microvascular complications, necessitating a reevaluation of the speed and intensity of glycemic correction. Theories suggest that swift blood sugar reductions may cause inflammation, oxidative stress, and neurovascular changes, resulting in complications. Healthcare providers should approach aggressive glycemic control cautiously, especially in long-standing, poorly controlled diabetes. Preventing and managing these complications requires a personalized, comprehensive approach with education, monitoring, and interdisciplinary care. Diabetes management must balance short- and long-term goals, prioritizing overall well-being. This editorial underscores the need for a personalized, nuanced approach, focusing on the equilibrium between glycemic control and avoiding overcorrection.
Abstract: Global efforts for environmental cleanliness through the control of gaseous emissions from vehicles are gaining momentum and attracting increasing attention. Calibration plays a crucial role in these efforts by ensuring the quantitative assessment of emissions for informed decisions on environmental treatments. This paper describes a method for the calibration of CO/CO<sub>2</sub> monitors used for periodic inspections of vehicles in cities. The calibration was performed over the selected ranges: 900 - 12,000 µmol/mol for CO and 2000 - 20,000 µmol/mol for CO<sub>2</sub>. The traceability of the measurement results to the SI units was ensured by using certified reference materials from CO/N<sub>2</sub> and CO<sub>2</sub>/N<sub>2</sub> primary gas mixtures. The method performance was evaluated by assessing its linearity, accuracy, precision, bias, and the uncertainty of the calibration results. The calibration data exhibited a strong linear trend with R² values close to 1, indicating an excellent fit between the measured values and the calibration lines. Precision, expressed as relative standard deviation (%RSD), ranged from 0.48 to 4.56% for CO and from 0.97 to 3.53% for CO<sub>2</sub>, staying well below the 5% threshold for reporting results at a 95% confidence level. Accuracy, measured as percent recovery, was consistently high (≥ 99.1%) for CO and ranged from 84.90% to 101.54% across the calibration range for CO<sub>2</sub>. In addition, the method exhibited minimal bias for both CO and CO<sub>2</sub> calibrations, providing a reliable and accurate approach for calibrating the CO/CO<sub>2</sub> monitors used in vehicle inspections and thus ensuring the effectiveness of exhaust emission control for a better environment.
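The figures of merit reported above (linearity R², %RSD, percent recovery) follow standard textbook formulas; a small sketch not tied to the paper's instrument or data:

```python
import statistics

def linearity_r2(x, y):
    """Coefficient of determination for a least-squares line through
    calibration points (reference concentration x vs. instrument reading y):
    R^2 = Sxy^2 / (Sxx * Syy)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def percent_rsd(readings):
    # Precision: sample standard deviation relative to the mean, in %.
    return 100.0 * statistics.stdev(readings) / statistics.fmean(readings)

def percent_recovery(measured, certified):
    # Accuracy: measured value relative to the certified reference value.
    return 100.0 * measured / certified
```

Reporting all three alongside the calibration line is what lets a result like "R² close to 1, %RSD below 5%, recovery ≥ 99.1%" be verified from raw data.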
基金supported in part by the National Natural Science Foundation of China(Nos.62071441 and 61701464)in part by the Fundamental Research Funds for the Central Universities(No.202151006).
Abstract: This study explores the application of single photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing. Furthermore, it compares the error correction performance of low-density parity check (LDPC) and Reed-Solomon (RS) codes. The effects of the unscattered photon ratio and the depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization OOK multiplexing modulation and 4.37 Mbps with polarization 2-PPM multiplexing modulation using LDPC error correction.
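As a toy illustration of why modulation choice affects BER in photon-starved links, the following Monte-Carlo sketch models an idealized photon-counting receiver with Poisson shot noise only; the photon budgets and the absence of background counts are simplifying assumptions, not the experimental conditions of this study:

```python
import math
import random

def simulate_ber(mean_photons, n_bits=20000, scheme="OOK", seed=7):
    """Monte-Carlo BER of an idealized photon-counting receiver.

    OOK: a '1' bit sends a pulse of mean_photons; decide '1' if any
    photon is detected in the slot.
    2-PPM: the pulse occupies one of two slots; pick the slot with
    more detected photons (ties broken at random)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's product-of-uniforms draw; adequate for small lambda.
        threshold = math.exp(-lam)
        k, p = 0, rng.random()
        while p > threshold:
            p *= rng.random()
            k += 1
        return k

    errors = 0
    for _ in range(n_bits):
        bit = rng.getrandbits(1)
        if scheme == "OOK":
            counts = poisson(mean_photons if bit else 0.0)
            errors += (counts > 0) != bool(bit)
        else:  # 2-PPM
            c0 = poisson(mean_photons if bit == 0 else 0.0)
            c1 = poisson(mean_photons if bit == 1 else 0.0)
            decided = 0 if c0 > c1 else 1 if c1 > c0 else rng.getrandbits(1)
            errors += decided != bit
    return errors / n_bits
```

Raising the detected-photon budget drives the BER down sharply, which is the regime where the choice between OOK and 2-PPM, and the strength of the error-correcting code, matters most.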