Myocardial perfusion imaging (MPI), which uses single-photon emission computed tomography (SPECT), is a well-established diagnostic tool that classifies images to reveal conditions in coronary artery disease (CAD). The automatic classification of SPECT images has achieved near-optimal accuracy with convolutional neural networks (CNNs). This paper uses a SPECT classification framework with three steps: 1) image denoising, 2) attenuation correction, and 3) image classification. Denoising is performed by a U-Net architecture that ensures effective image denoising. Attenuation correction is implemented by a convolutional neural network that removes the attenuation affecting the feature-extraction stage of classification. Finally, a novel multi-scale dilated convolution (MSDC) network is proposed; it merges features extracted at different scales and lets the model learn features more efficiently. Three scales of 3×3 filters are used to extract features. All three steps are compared with state-of-the-art methods. The proposed denoising architecture yields high-quality images with the highest peak signal-to-noise ratio (PSNR) of 39.7. The proposed classification method is compared with five different CNN models and achieves better classification, with an accuracy of 96%, precision of 87%, sensitivity of 87%, specificity of 89%, and F1-score of 87%. To demonstrate the importance of preprocessing, the classification model was also analyzed without denoising and attenuation correction.
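The PSNR figure reported above follows from the standard definition; a minimal NumPy sketch (illustrative only, not the authors' code), assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(reference, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# toy example: a constant 8x8 image and a copy with one perturbed pixel
ref = np.full((8, 8), 100.0)
noisy = ref.copy()
noisy[0, 0] += 16.0          # MSE = 16^2 / 64 = 4
print(round(psnr(ref, noisy), 2))
```

A higher PSNR indicates the denoised image is closer to the reference, which is how the 39.7 value above ranks the denoising architectures.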
In order to obtain more accurate precipitation data and better simulate precipitation on the Tibetan Plateau, this study evaluated the ability of 14 Coupled Model Intercomparison Project Phase 6 (CMIP6) models to simulate historical precipitation (1982-2014) on the Qinghai-Tibetan Plateau. Analysis of the Taylor index and of temporal and spatial statistical parameters indicates that all models overestimate precipitation. To correct the overestimation, a fusion correction method combining Backpropagation Neural Network (BP) correction and Quantile Mapping (QM) correction, named the BQ method, was proposed. With this method, the historical precipitation of each model was corrected in space and time, respectively. The corrected results were then compared, in time, space, and analysis of variance (ANOVA), with those corrected by the BP and QM methods alone. Finally, the fusion-corrected results for each model were compared against the Climatic Research Unit (CRU) data in a significance analysis to obtain each model's precipitation trends. The results show that, among the uncorrected data, the IPSL-CM6A-LR model simulates historical precipitation on the Qinghai-Tibetan Plateau relatively well (R = 0.7, RMSE = 0.15). In time, the total precipitation corrected by the fusion method has the same interannual trend as, and the closest precipitation values to, the CRU data; in space, the annual average precipitation corrected by the fusion method differs least from the CRU data, and the total historical annual average precipitation is not significantly different from the CRU data, outperforming BP and QM. Therefore, the fusion method corrects each model's historical precipitation better than the QM and BP methods do. Precipitation in the central and northeastern parts of the plateau shows a significant increasing trend. The correlation coefficients between monthly precipitation and site-detected precipitation for all models after BQ correction exceed 0.8.
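The QM component of the fusion can be illustrated with empirical quantile mapping, the standard bias-correction idea: map each model value to the observed value at the same quantile. A sketch with made-up data (not the CMIP6 fields):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_values):
    """Empirical quantile mapping: map each model value to the observed
    value at the same quantile of the historical distributions."""
    m_sorted = np.sort(model_hist)
    # quantile of each value within the model's historical distribution
    q = np.searchsorted(m_sorted, model_values, side="right") / len(m_sorted)
    q = np.clip(q, 0.0, 1.0)
    # read off the observed distribution at those quantiles
    return np.quantile(np.sort(obs_hist), q)

# toy example: model precipitation wet-biased by a factor of 2
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.0, size=1000)
model = 2.0 * rng.gamma(2.0, 1.0, size=1000)
corrected = quantile_map(model, obs, model)
print(round(corrected.mean() / obs.mean(), 2))  # near 1 after correction
```

Because the mapping is built from the full historical distributions, it corrects not just the mean bias but the entire wet-biased distribution shape.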
Objective: This study aims to evaluate the efficacy and safety of using a strip-shaped cymba conchae orthosis for the nonsurgical correction of complex auricular deformities. Methods: Clinical data were collected from 2020 to 2021 for 6 patients who underwent correction using a strip-shaped cymba conchae orthosis. The indications, corrective effects, and complications associated with use of the orthosis were analyzed. Results: There were four indications for treatment: cryptotia with helix adhesion; cryptotia with grade I microtia; cryptotia with excessive helix thickness; and auricular deformity beyond the treatment time window (≥6 months). Excellent corrective effects were observed in all 6 patients. Complications occurred in one patient, who recovered after symptomatic treatment. Conclusion: The use of a strip-shaped cymba conchae orthosis, alone or combined with a U-shaped helix orthosis, is a feasible approach for correcting complex auricular deformities, or deformities beyond the treatment time window, in pediatric patients.
Following publication of the original article [1], the authors reported an error in the last author's name: it was mistakenly written as "Jun Den". The correct name, "Jun Deng", has been updated in this Correction.
Raman spectroscopy has found extensive use in monitoring and controlling cell culture processes. In this context, the prediction accuracy of Raman-based models is of paramount importance. However, models established with data from manually fed-batch cultures often exhibit poor performance in Raman-controlled cultures, so effective methods are needed to rectify these models. The objective of this paper is to investigate the efficacy of the Kalman filter (KF) algorithm in correcting Raman-based models during cell culture. Initially, partial least squares (PLS) models for different components were constructed using data from manually fed-batch cultures, and the predictive performance of these models was compared. Subsequently, various correction methods, including the PLS-KF-KF method proposed in this study, were employed to refine the PLS models. Finally, a case study involving the auto-control of glucose concentration demonstrated the application of the optimal model correction method. The results indicated that the original PLS models performed differently in manually fed-batch cultures and Raman-controlled cultures. For glucose, the root mean square error of prediction (RMSEP) was 0.23 g·L^(-1) in the manually fed-batch culture and 0.40 g·L^(-1) in the Raman-controlled culture. With model correction, performance within Raman-controlled cultures improved significantly: the RMSEP for glucose from updating-PLS, KF-PLS, and PLS-KF-KF was 0.38, 0.36, and 0.17 g·L^(-1), respectively. Notably, the proposed PLS-KF-KF correction method was more effective and stable, playing a vital role in the automated nutrient feeding of cell cultures.
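The KF correction idea can be illustrated with a scalar Kalman filter tracking a slowly varying concentration from noisy readings. This is a generic textbook sketch with invented noise values, not the paper's PLS-KF-KF pipeline:

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state, noisy observations.
    q = process-noise variance, r = measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: state is a random walk
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
truth = 2.0                                        # glucose level, g/L
readings = truth + 0.2 * rng.standard_normal(200)  # noisy Raman predictions
est = kalman_1d(readings, x0=readings[0])
print(round(est[-1], 3))
```

The filter blends each new (noisy) model prediction with the running estimate, which is the same role KF plays in stabilizing the PLS outputs described above.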
In high-altitude nuclear detonations, the proportion of pulsed X-ray energy can exceed 70%, making it a specific monitoring signal for such events. These pulsed X-rays can be captured by a satellite-borne X-ray detector after atmospheric transmission. To quantitatively analyze the effects of different satellite detection altitudes, burst heights, and transmission angles on the physical processes of X-ray transport and on energy fluence, we developed an atmospheric transmission algorithm for pulsed X-rays from high-altitude nuclear detonations based on scattering correction. The proposed method improves on the traditional analytical method, which computes only direct-transmission X-rays and exhibits a maximum relative error of 67.79% compared with the Monte Carlo method. Our improved method reduces this error to within 10% under the same conditions, even reaching 1% in certain scenarios. Moreover, its computation is 48,000 times faster than the Monte Carlo method. These results have important theoretical significance and engineering application value for designing satellite-borne nuclear detonation pulsed X-ray detectors, inverting nuclear detonation source terms, and assessing ionospheric effects.
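The direct-transmission part of such an algorithm reduces to Beer-Lambert exponential attenuation along the slant path; a toy sketch with rough illustrative numbers (not the paper's atmosphere model):

```python
import math

def direct_transmission(fluence0, mass_atten_cm2_g, column_density_g_cm2):
    """Beer-Lambert direct (unscattered) transmission of X-ray fluence
    through a medium, given the mass attenuation coefficient (cm^2/g)
    and the integrated column density (g/cm^2) along the line of sight."""
    return fluence0 * math.exp(-mass_atten_cm2_g * column_density_g_cm2)

# rough numbers: ~30 keV photons in air (mu/rho ~ 0.35 cm^2/g),
# a high-altitude slant path with a small column density
f = direct_transmission(1.0, 0.35, 2.0)
print(round(f, 4))
```

The scattering correction described above adds back the scattered-photon contribution that this purely exponential term omits, which is where the traditional analytical method accumulates its large errors.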
Dispersion fuels, known for their excellent safety performance, are widely used in advanced reactors such as high-temperature gas-cooled reactors. Compared with deterministic methods, the Monte Carlo method has more advantages in the geometric modeling of stochastic media. Explicit modeling offers high computational accuracy at high computational cost. The chord length sampling (CLS) method can improve computational efficiency by sampling the chord length during neutron transport using the matrix chord length's probability density function. This study shows that the excluded-volume effect in realistic stochastic media can introduce certain deviations into CLS. A chord length correction approach is proposed that obtains the chord length correction factor by developing the Particle code based on equivalent transmission probability. Numerical analysis against reference solutions from explicit modeling in the RMC code demonstrates that CLS with the proposed correction provides good accuracy in addressing the excluded-volume effect in realistic infinite stochastic media.
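The sampling step at the heart of CLS draws matrix chord lengths from an exponential (Markov-chord) distribution via the inverse CDF; a minimal sketch under that generic assumption (not the authors' corrected Particle code):

```python
import numpy as np

def sample_chords(mean_chord, n, rng):
    """Draw n matrix chord lengths from the Markov (exponential) model
    by inverse-CDF sampling: l = -<l> * ln(1 - u), u ~ U(0, 1)."""
    u = rng.random(n)
    return -mean_chord * np.log1p(-u)   # log1p(-u) = ln(1 - u), stable near u=0

rng = np.random.default_rng(42)
chords = sample_chords(0.5, 100_000, rng)
print(round(chords.mean(), 3))  # should be near the mean chord length 0.5
```

The excluded-volume correction discussed above rescales this idealized distribution, since real fuel particles cannot overlap and the pure Markov assumption overcounts short chords.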
This paper aims to develop an automatic miscalibration detection and correction framework to maintain accurate calibration of LiDAR and camera for autonomous vehicles after sensor drift. First, a monitoring algorithm that continuously detects miscalibration in each frame is designed, leveraging the rotational motion each individual sensor observes. Then, as sensor drift occurs, the projection constraints between visual feature points and LiDAR 3-D points are used to compute the scaled camera motion, which is further utilized to align the drifted LiDAR scan with the camera image. Finally, the proposed method is thoroughly compared with two representative approaches in online experiments with varying levels of random drift, and is further extended to an offline calibration experiment, where it is demonstrated by comparison with two existing benchmark methods.
This study explores the application of single-photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing, and compares the error correction performance of low-density parity-check (LDPC) and Reed-Solomon (RS) codes. The effects of the unscattered photon ratio and the depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization OOK multiplexing modulation and 4.37 Mbps with polarization 2-PPM multiplexing modulation using LDPC error correction.
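The difference between OOK and 2-PPM is how each bit occupies signal slots; a slot-level toy encoder/decoder (abstraction only, not the SPD receiver chain):

```python
def ook_encode(bits):
    """On-off keying: one slot per bit; pulse present = 1, absent = 0."""
    return list(bits)

def ppm2_encode(bits):
    """2-PPM: two slots per bit; a pulse in slot 0 encodes bit 0,
    a pulse in slot 1 encodes bit 1 (constant pulse count per bit)."""
    out = []
    for b in bits:
        out.extend([1, 0] if b == 0 else [0, 1])
    return out

def ppm2_decode(slots):
    """Decode 2-PPM by checking which slot of each pair holds the pulse."""
    return [0 if slots[i] == 1 else 1 for i in range(0, len(slots), 2)]

bits = [1, 0, 1, 1, 0]
assert ppm2_decode(ppm2_encode(bits)) == bits
print(ppm2_encode([1, 0]))  # [0, 1, 1, 0]
```

2-PPM halves the raw slot rate relative to OOK (hence the lower 4.37 Mbps figure above) but places exactly one pulse per bit, which simplifies thresholding for photon-counting receivers.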
Correction to: Nano-Micro Letters (2024) 16:112, https://doi.org/10.1007/s40820-024-01327-2. In the supplementary information the following correction has been carried out: 1. "Institute of Energy and Climate Research, Materials Synthesis and Processing, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany." Corrected to: "Institute of Energy and Climate Research: Materials Synthesis and Processing (IEK-1), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany."
Scanning focused light with corrected aberrations holds great importance in high-precision optical systems. However, conventional optical systems rely on additional dynamical correctors to eliminate scanning aberrations, inevitably resulting in undesired bulkiness and complexity. In this paper, we propose achieving adaptive aberration corrections coordinated with focus scanning by rotating only two cascaded transmissive metasurfaces. Each metasurface is carefully designed by searching for optimal phase-profile parameters of three coherently working phase functions, allowing flexible control of both the longitudinal and lateral focal position to scan on any custom-designed curved surface. As proof of concept, we engineer and fabricate two all-silicon terahertz meta-devices capable of scanning the focal spot with adaptively corrected aberrations. Experimental results demonstrate that the first one dynamically scans the focal spot on a planar surface, achieving an average scanning aberration of 1.18% within the scanning range of ±30°. Meanwhile, the second meta-device scans two focal points on a planar surface and a conical surface with 2.5% and 4.6% scanning aberrations, respectively. Our work pioneers a pathway toward high-precision yet compact optical devices across various practical domains.
In shooting tests of two-dimensional trajectory correction fuses with fixed canards, the longitudinal dispersion of the projectile is so large that it sometimes exceeds the correction ability of the fuse actuator. The impact point easily deviates from the target, so the correction result cannot be readily evaluated; yet the cost of shooting tests is too high to conduct many tests for data collection. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data; repeatedly iterates and corrects the position of the simulated theoretical impact points through an improved compatibility test method; and dynamically adjusts the weight of the prior distribution of simulation results based on Kullback-Leibler divergence, which to some extent prevents the real data from being "submerged" by the simulation data and achieves a fused Bayesian estimate of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods, which better reflects the effect of the control algorithm and helps test personnel iterate their proposed structures and algorithms; in addition, this study provides a knowledge base for further comprehensive studies.
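The Bootstrap expansion step can be sketched directly: resample the small impact-point sample with replacement and build the distribution of the dispersion-center estimate. A generic illustration with invented impact coordinates (not the paper's fusion-Bayes estimator):

```python
import numpy as np

def bootstrap_center(impacts, n_boot, rng):
    """Bootstrap the impact-point dispersion center: draw n_boot
    resamples with replacement and average each one."""
    n = len(impacts)
    idx = rng.integers(0, n, size=(n_boot, n))   # resample indices
    return impacts[idx].mean(axis=1)             # (n_boot, 2) centers

rng = np.random.default_rng(7)
# a small real sample of 8 impact points around (10, -5), in meters
impacts = rng.normal([10.0, -5.0], 2.0, size=(8, 2))
centers = bootstrap_center(impacts, 5000, rng)
print(np.round(centers.mean(axis=0), 2), np.round(centers.std(axis=0), 2))
```

The spread of `centers` quantifies how uncertain the dispersion-center estimate is given only 8 shots, which is the information the prior-weighting scheme above fuses with simulation data.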
This correction adds some information to our publication [Chin. J. Chem. Phys. 32, 365-372 (2019)] that we previously omitted. Our previous work published in [Appl. Catal. B Environ. 186, 10 (2016)] was based on the same sample series, but focused on explaining the interplay between the catalytic behavior and the properties of the cuprous oxide thin films. A superior catalytic performance was demonstrated when water was added in the deposition process [1] (see Ref. [47] in our publication corrected here).
Covert communication can conceal the existence of wireless transmission and thus can address the secure information transfer issue in many applications of the booming Internet of Things (IoT). However, the proliferation of sensing devices has generated massive amounts of data, which increases the burden on covert communication. Considering that the spatiotemporal correlation of data collection causes redundancy between data, eliminating duplicate data before transmission shortens transmission time, reduces the average received signal power at the warden, and ultimately helps realize covert communication. In this paper, we propose applying delta compression technology in the gateway to reduce the amount of data generated by IoT devices before sending it to the cloud server. To this end, a cost model and evaluation method closer to the actual storage mode of computer systems has been constructed. Under this model, the delta version sequence obtained by existing delta compression algorithms is no longer compact, as reflected in its still-high cost. We therefore designed a correction scheme based on instruction merging (CSIM) to save costs by merging instructions. First, the delta version sequence is divided into five categories and corresponding merge rules are derived. Then, for any COPY/ADD-class delta compression algorithm, instructions are merged according to the selection rules, from strict to relaxed, as the instructions are generated. Finally, a more cost-effective delta version sequence is obtained. Experimental results on random data show that the delta version sequences output by the CSIM-corrected 1.5-pass and greedy algorithms perform better in cost reduction.
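The COPY/ADD instruction format that CSIM post-processes can be made concrete with a tiny delta applier (an illustrative toy, not the paper's 1.5-pass or greedy encoder):

```python
def apply_delta(base, delta):
    """Reconstruct a target string from a base string and a COPY/ADD
    delta: COPY reuses a slice of the base, ADD inserts new literals."""
    out = []
    for op in delta:
        if op[0] == "COPY":            # ("COPY", offset, length) from base
            _, off, ln = op
            out.append(base[off:off + ln])
        else:                          # ("ADD", literal) new content
            out.append(op[1])
    return "".join(out)

base = "the quick brown fox"
# target differs from base only in one word, so the delta is short
delta = [("COPY", 0, 10), ("ADD", "red"), ("COPY", 15, 4)]
print(apply_delta(base, delta))  # the quick red fox
```

Each instruction carries per-instruction overhead in storage, which is why merging adjacent instructions, as CSIM does, lowers the total cost of the delta version sequence.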
In the original publication, the third author's name was published incorrectly as "Hayatdavoodi Masoud". The correct name, "Masoud Hayatdavoodi", is given in this correction.
The deferred correction (DeC) is an iterative procedure, characterized by increasing accuracy at each iteration, which can be used to design numerical methods for systems of ODEs. The main advantage of this framework is the automatic way of obtaining arbitrarily high-order methods, which can be put in Runge-Kutta (RK) form. The drawback is the larger computational cost with respect to the most used RK methods. To reduce this cost, in an explicit setting, we propose an efficient modification: we introduce interpolation processes between the DeC iterations, decreasing the computational cost associated with the low-order ones. We provide the Butcher tableaux of the new modified methods and study their stability, showing that in some cases the computational advantage does not affect the stability. The flexibility of the novel modification allows nontrivial applications to PDEs and the construction of adaptive methods. The good performance of the introduced methods is broadly tested on several benchmarks in both ODE and PDE contexts.
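The order-raising character of DeC can be seen in a scalar toy: iterate the fixed-point form of the trapezoidal collocation, starting from a low-order guess. Each sweep improves the accuracy until the quadrature's own order is reached (a one-node illustration of the principle only; the paper's interpolation-accelerated DeC uses many nodes):

```python
import math

def dec_step(f, y0, h, iters):
    """One deferred-correction-style step for y' = f(y):
    iterate y <- y0 + h/2 * (f(y0) + f(y)) from the constant predictor.
    Sweep 1 reproduces an Euler-accuracy result, sweep 2 Heun's method,
    and further sweeps converge to the implicit trapezoidal rule."""
    y = y0                      # iteration 0: constant predictor
    f0 = f(y0)
    for _ in range(iters):
        y = y0 + 0.5 * h * (f0 + f(y))
    return y

# test problem y' = y, y(0) = 1, exact value e^h at t = h
f = lambda y: y
h = 0.1
errs = [abs(dec_step(f, 1.0, h, k) - math.exp(h)) for k in (1, 2, 5)]
print(errs[0] > errs[1] > errs[2])  # each sweep reduces the error
```

The modification described above targets exactly the early, low-order sweeps of this loop, replacing full-cost evaluations with cheaper interpolated ones.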
Correction to "Research progress of ferroptosis regulating lipid peroxidation and metabolism in occurrence and development of primary liver cancer" in World J Gastrointest Oncol 2024; 16: 2335-2349, published by Shu YJ, Lao B, and Qiu YY. In this article, we added the correct image citations.
In the article 'MicroRNA-329-3p inhibits the Wnt/β-catenin pathway and proliferation of osteosarcoma cells by targeting transcription factor 7-like 1' (Oncology Research, 2024, Vol. 32, No. 3, pp. 463-476. doi: 10.32604/or.2023.044085), there was an error in the compilation of Fig. 8D. We have revised Fig. 8D to correct this error. A corrected version of Fig. 8 is provided. This correction does not change any results or conclusions of the article. We apologize for any inconvenience caused.
We calculate the thermodynamic quantities of the quantum-corrected Reissner-Nordström-AdS (RN-AdS) black hole and examine their quantum corrections. By analyzing the mass and heat capacity, we give the critical state and the remnant state, respectively, and discuss their consistency. Then, we investigate quantum tunneling from the event horizon: of a massless scalar particle using the null geodesic method, and of charged massive W^(±) bosons and fermions using the Hamilton-Jacobi method. It is shown that the same Hawking temperature is obtained from these tunneling processes for different particles and methods. Next, using the generalized uncertainty principle (GUP), we study the quantum corrections to the tunneling and the temperature. The logarithmic correction to the black hole entropy is then obtained.
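The logarithmic entropy correction mentioned above takes the generic GUP-corrected form (schematic only; the coefficient of the log term depends on the GUP parameter and on conventions not fixed here):

```latex
S \;=\; \frac{A}{4\ell_p^{2}} \;+\; \alpha \,\ln\!\frac{A}{4\ell_p^{2}} \;+\; \mathrm{const.}
```

where A is the horizon area, \ell_p the Planck length, and the leading term is the Bekenstein-Hawking entropy; the GUP analysis fixes the coefficient \alpha of the subleading logarithm.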
We present a class of preconditioners for the linear systems resulting from finite element or discontinuous Galerkin discretizations of advection-dominated problems. These preconditioners are designed to treat the case of geometrically localized stiffness, where the convergence rates of iterative methods are degraded in a localized subregion of the mesh. Slower convergence may be caused by a number of factors, including the mesh size, anisotropy, highly variable coefficients, and more challenging physics. The approach taken in this work is to correct well-known preconditioners such as block Jacobi and block incomplete LU (ILU) with an adaptive inner subregion iteration. The goal of these preconditioners is to reduce the number of costly global iterations by accelerating convergence in the stiff region through iteration on the less expensive reduced problem. The tolerance for the inner iteration is adaptively chosen to minimize subregion-local work while guaranteeing global convergence rates. We present analysis showing that the convergence of these preconditioners, even when combined with an adaptively selected tolerance, is independent of discretization parameters (e.g., the mesh size and diffusion coefficient) in the subregion. We demonstrate significant performance improvements over black-box preconditioners when applied to several model convection-diffusion problems. Finally, we present performance results of several variations of iterative subregion correction preconditioners applied to flow at Reynolds number 2.25×10^(6) over the NACA 0012 airfoil, as well as massively separated flow at a 30° angle of attack.
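The block Jacobi baseline being corrected above can be sketched in a few lines of dense NumPy (a toy; real DG solvers store and factor sparse element blocks):

```python
import numpy as np

def block_jacobi_apply(A, r, block_size):
    """Apply the block Jacobi preconditioner z = M^{-1} r, where M is
    the block-diagonal part of A with the given block size."""
    n = A.shape[0]
    z = np.empty_like(r)
    for s in range(0, n, block_size):
        e = min(s + block_size, n)
        z[s:e] = np.linalg.solve(A[s:e, s:e], r[s:e])  # per-block solve
    return z

# toy system: 1-D tridiagonal Laplacian
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
r = np.ones(n)
z = block_jacobi_apply(A, r, block_size=4)   # two independent 4x4 solves
print(z)
# sanity check: with block_size = n, block Jacobi is a direct solve
print(np.allclose(A @ block_jacobi_apply(A, r, n), r))
```

The off-diagonal couplings dropped by M are what the inner subregion iteration compensates for in the stiff part of the mesh.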
Funding: the Research Grant of Kwangwoon University in 2024.
Funding: supported by the Key Research and Development Program of Zhejiang Province, China (2023C03116).
文摘Raman spectroscopy has found extensive use in monitoring and controlling cell culture processes.In this context,the prediction accuracy of Raman-based models is of paramount importance.However,models established with data from manually fed-batch cultures often exhibit poor performance in Raman-controlled cultures.Thus,there is a need for effective methods to rectify these models.The objective of this paper is to investigate the efficacy of Kalman filter(KF)algorithm in correcting Raman-based models during cell culture.Initially,partial least squares(PLS)models for different components were constructed using data from manually fed-batch cultures,and the predictive performance of these models was compared.Subsequently,various correction methods including the PLS-KF-KF method proposed in this study were employed to refine the PLS models.Finally,a case study involving the auto-control of glucose concentration demonstrated the application of optimal model correction method.The results indicated that the original PLS models exhibited differential performance between manually fed-batch cultures and Raman-controlled cultures.For glucose,the root mean square error of prediction(RMSEP)of manually fed-batch culture and Raman-controlled culture was 0.23 and 0.40 g·L^(-1).With the implementation of model correction methods,there was a significant improvement in model performance within Raman-controlled cultures.The RMSEP for glucose from updating-PLS,KF-PLS,and PLS-KF-KF was 0.38,0.36 and 0.17 g·L^(-1),respectively.Notably,the proposed PLS-KF-KF model correction method was found to be more effective and stable,playing a vital role in the automated nutrient feeding of cell cultures.
文摘In high-altitude nuclear detonations,the proportion of pulsed X-ray energy can exceed 70%,making it a specific monitoring signal for such events.These pulsed X-rays can be captured using a satellite-borne X-ray detector following atmospheric transmission.To quantitatively analyze the effects of different satellite detection altitudes,burst heights,and transmission angles on the physical processes of X-ray transport and energy fluence,we developed an atmospheric transmission algorithm for pulsed X-rays from high-altitude nuclear detonations based on scattering correction.The proposed method is an improvement over the traditional analytical method that only computes direct-transmission X-rays.The traditional analytical method exhibits a maximum relative error of 67.79% compared with the Monte Carlo method.Our improved method reduces this error to within 10% under the same conditions,even reaching 1% in certain scenarios.Moreover,its computation time is 48,000 times faster than that of the Monte Carlo method.These results have important theoretical significance and engineering application value for designing satellite-borne nuclear detonation pulsed X-ray detectors,inverting nuclear detonation source terms,and assessing ionospheric effects.
Abstract: Dispersion fuels, known for their excellent safety performance, are widely used in advanced reactors such as high-temperature gas-cooled reactors. Compared with deterministic methods, the Monte Carlo method has more advantages in the geometric modeling of stochastic media. The explicit modeling method offers high computational accuracy but at high computational cost. The chord length sampling (CLS) method can improve computational efficiency by sampling the chord length during neutron transport using the matrix chord length's probability density function. This study shows that the excluded-volume effect in realistic stochastic media can introduce certain deviations into CLS. A chord length correction approach is proposed to obtain the chord length correction factor by developing the Particle code based on equivalent transmission probability. Through numerical analysis against reference solutions from explicit modeling in the RMC code, it is demonstrated that CLS with the proposed correction method provides good accuracy in addressing the excluded-volume effect in realistic infinite stochastic media.
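The core CLS idea is to sample the distance a neutron travels through the matrix before hitting the next fuel particle from an exponential chord-length distribution. The sketch below adds a multiplicative factor standing in for the proposed correction; the function names and the exact form of the excluded-volume correction are assumptions, not taken from the paper.

```python
import math
import random

def sample_chord(mean_chord, correction=1.0, rng=random.random):
    """Sample a matrix chord length for CLS neutron tracking.

    mean_chord: mean chord length of the matrix between fuel particles
    correction: multiplicative chord length correction factor (illustrative
                stand-in for the paper's excluded-volume correction)
    rng: uniform(0,1) sampler, injectable for reproducibility
    """
    lam = mean_chord * correction        # corrected mean chord length
    return -lam * math.log(rng())        # inverse-CDF exponential sample
```

During transport, the tracker compares this sampled chord length with the distance to the next collision or boundary, placing a fuel particle on the flight path when the chord length is the shortest of the three.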
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52025121 and 52394263) and the National Key R&D Plan of China (Grant No. 2023YFD2000301).
Abstract: This paper aims to develop an automatic miscalibration detection and correction framework that maintains accurate calibration of LiDAR and camera for an autonomous vehicle after sensor drift. First, a monitoring algorithm that can continuously detect miscalibration in each frame is designed, leveraging the rotational motion each individual sensor observes. Then, as sensor drift occurs, the projection constraints between visual feature points and LiDAR 3-D points are used to compute the scaled camera motion, which is further utilized to align the drifted LiDAR scan with the camera image. Finally, the proposed method is thoroughly compared with two representative approaches in online experiments with varying levels of random drift; the method is then further extended to an offline calibration experiment and demonstrated by comparison with two existing benchmark methods.
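The projection constraint between LiDAR 3-D points and camera pixels that drives this kind of alignment can be sketched with a generic pinhole camera model (illustrative only; the function and variable names are assumptions, and the paper's method operates on feature correspondences rather than single points):

```python
import numpy as np

def project_lidar_point(p_lidar, R, t, K):
    """Project a LiDAR 3-D point into the camera image.

    p_lidar: 3-vector in the LiDAR frame
    R, t: extrinsic rotation (3x3) and translation (3,) from LiDAR to camera
    K: 3x3 camera intrinsic matrix
    Returns pixel coordinates (u, v).
    """
    p_cam = R @ p_lidar + t            # LiDAR frame -> camera frame
    uvw = K @ p_cam                    # pinhole projection
    return uvw[:2] / uvw[2]            # perspective divide -> pixels
```

When the extrinsics (R, t) drift, projected LiDAR points no longer land on their corresponding image features; the residual between the two is exactly the signal a miscalibration monitor can threshold on.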
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62071441 and 61701464) and in part by the Fundamental Research Funds for the Central Universities (No. 202151006).
Abstract: This study explores the application of single-photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing. Furthermore, it compares the error correction performance of low-density parity-check (LDPC) and Reed-Solomon (RS) codes. The effects of the unscattered photon ratio and the depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization OOK multiplexing modulation and 4.37 Mbps with polarization 2-PPM multiplexing modulation using LDPC error correction.
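2-PPM, one of the two modulation formats compared here, encodes each bit as the position of a pulse within two time slots; the receiver decides by comparing the two slot energies. A minimal sketch (the slot convention and function names are assumptions):

```python
def ppm2_encode(bits):
    """Map each bit to a 2-PPM symbol of two slots:
    bit 0 -> pulse in slot 0, bit 1 -> pulse in slot 1."""
    return [(1, 0) if b == 0 else (0, 1) for b in bits]

def ppm2_decode(symbols):
    """Decide each bit by comparing the two received slot energies
    (ties resolved toward bit 0)."""
    return [0 if s0 >= s1 else 1 for s0, s1 in symbols]
```

Because the decision compares two slots of the same symbol, 2-PPM needs no absolute intensity threshold, which is one reason it is attractive for photon-starved links, at the cost of half the raw bit rate of OOK.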
Abstract: Correction to: Nano-Micro Letters (2024) 16:112, https://doi.org/10.1007/s40820-024-01327-2. In the supplementary information the following corrections have been carried out: 1. Institute of Energy and Climate Research, Materials Synthesis and Processing, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany. Corrected: Institute of Energy and Climate Research: Materials Synthesis and Processing (IEK-1), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany.
Funding: Supported by the National Natural Science Foundation of China (62175141), the Ministry of Science and Technology (2022YFA1404704), the China Scholarship Council (202306890039), the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2022R1A6A1A03052954), and the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-01906, Artificial Intelligence Graduate School Program (POSTECH)).
Abstract: Scanning focused light with corrected aberrations holds great importance in high-precision optical systems. However, conventional optical systems, relying on additional dynamical correctors to eliminate scanning aberrations, inevitably result in undesired bulkiness and complexity. In this paper, we propose achieving adaptive aberration corrections coordinated with focus scanning by rotating only two cascaded transmissive metasurfaces. Each metasurface is carefully designed by searching for optimal phase-profile parameters of three coherently working phase functions, allowing flexible control of both the longitudinal and lateral focal position to scan any custom-designed curved surface. As proof of concept, we engineer and fabricate two all-silicon terahertz meta-devices capable of scanning the focal spot with adaptively corrected aberrations. Experimental results demonstrate that the first one dynamically scans the focal spot on a planar surface, achieving an average scanning aberration of 1.18% within the scanning range of ±30°. Meanwhile, the second meta-device scans two focal points on a planar surface and a conical surface with 2.5% and 4.6% scanning aberrations, respectively. Our work pioneers a pathway toward high-precision yet compact optical devices across various practical domains.
Funding: The authors thank the National Natural Science Foundation of China (Grant No. 61973033) and the Preliminary Research of Equipment (Grant No. 9090102010305) for funding the experiments.
Abstract: The longitudinal dispersion of the projectile in shooting tests of two-dimensional trajectory correction fuses with fixed canards is so large that it sometimes exceeds the correction ability of the correction fuse actuator. The impact point easily deviates from the target, and thus the correction result cannot be readily evaluated. However, the cost of shooting tests is considerably high to conduct many tests for data collection. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data; repeatedly iterates and corrects the position of the simulated theoretical impact points through an improved compatibility test method; and dynamically adjusts the weight of the prior distribution of simulation results based on Kullback-Leibler divergence, which to some extent avoids the real data being "submerged" by the simulation data and achieves a fusion Bayesian estimation of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods, which is more conducive to reflecting the effect of the control algorithm and facilitates test personnel iterating on their proposed structures and algorithms. In addition, this study provides a knowledge base for further comprehensive studies in the future.
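The Bootstrap expansion step used to enlarge the small test sample can be sketched as follows. This is a minimal one-dimensional version; the function name and the choice of the sample mean as the dispersion-center statistic are assumptions, and the paper's full method additionally fuses these estimates with simulation priors.

```python
import random

def bootstrap_means(samples, n_boot=1000, seed=0):
    """Expand a small sample of impact-point coordinates by bootstrap
    resampling (draw with replacement, same size as the original) and
    return the resampled dispersion-center (mean) estimates."""
    rng = random.Random(seed)
    n = len(samples)
    means = []
    for _ in range(n_boot):
        resample = [samples[rng.randrange(n)] for _ in range(n)]
        means.append(sum(resample) / n)
    return means
```

The spread of the returned means approximates the sampling uncertainty of the dispersion center, which is exactly what a small live-fire sample alone cannot reveal.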
Abstract: This correction adds some information to our publication [Chin. J. Chem. Phys. 32, 365-372 (2019)] that we previously missed. Our previous work published in [Appl. Catal. B Environ. 186, 10 (2016)] was based on the same sample series but with the focus of explaining the interplay between the catalytic behavior and the properties of the cuprous thin films. A superior catalytic performance was demonstrated when water was added during the deposition process [1] (see Ref. [47] in our publication corrected here).
Funding: Supported by the Regional Innovation Capability Guidance Plan of the Shaanxi Provincial Department of Science and Technology (2022QFY01-14), the Plan Project of Xi'an Science and Technology (22GXFW0047), the Science and Technology Plan Project of Xi'an Beilin District (GX2214), and the Key R&D Projects of the Xianyang Science and Technology Bureau (2021ZDYF-NY-0019).
Abstract: Covert communication can conceal the existence of wireless transmission and thus has the ability to address the information security transfer issue in many applications of the booming Internet of Things (IoT). However, the proliferation of sensing devices has generated massive amounts of data, which has increased the burden of covert communication. Considering that the spatiotemporal correlation of data collection causes redundancy between data, eliminating duplicate data before transmission is beneficial for shortening transmission time, reducing the average received signal power at the warden, and ultimately realizing covert communication. In this paper, we propose to apply delta compression technology in the gateway to reduce the amount of data generated by IoT devices before it is sent to the cloud server. To this end, a cost model and evaluation method that is closer to the actual storage mode of computer systems has been constructed. Under this model, the delta version sequence obtained by existing delta compression algorithms is no longer compact, as manifested by its still-high cost. In this situation, we designed the correction scheme based on instruction merging (CSIM) to save costs by merging instructions. First, the delta version sequence is divided into five categories and corresponding merge rules are derived. Then, for any COPY/ADD-class delta compression algorithm, instructions are merged according to strict-to-relaxed selection rules while being generated. Finally, a more cost-effective delta version sequence can be obtained. The experimental results on random data show that the delta version sequences output by the CSIM-corrected 1.5-pass and greedy algorithms perform better in cost reduction.
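The instruction-merging idea behind CSIM can be illustrated with a simplified merge pass over a COPY/ADD delta sequence. The two rules below (concatenate adjacent ADDs; merge adjacent COPYs whose source ranges are contiguous) are illustrative simplifications of the five categories and merge rules derived in the paper; all names are assumptions.

```python
def merge_instructions(instrs):
    """Merge adjacent delta-compression instructions where possible.

    instrs: list of ('ADD', data) or ('COPY', offset, length) tuples.
    Adjacent ADDs are concatenated; adjacent COPYs are merged when the
    second continues exactly where the first ends.
    """
    out = []
    for ins in instrs:
        if out and ins[0] == 'ADD' and out[-1][0] == 'ADD':
            # rule 1: two literal runs collapse into one ADD
            out[-1] = ('ADD', out[-1][1] + ins[1])
        elif (out and ins[0] == 'COPY' and out[-1][0] == 'COPY'
              and out[-1][1] + out[-1][2] == ins[1]):
            # rule 2: contiguous source ranges collapse into one COPY
            out[-1] = ('COPY', out[-1][1], out[-1][2] + ins[2])
        else:
            out.append(ins)
    return out
```

Fewer instructions means less per-instruction header overhead in the encoded delta, which is the cost the correction scheme targets.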
Abstract: In the original publication the third author name was published incorrectly as "Hayatdavoodi Masoud". The correct author name should read "Masoud Hayatdavoodi". The correct author name is available in this correction.
Abstract: The deferred correction (DeC) is an iterative procedure, characterized by increasing the accuracy at each iteration, which can be used to design numerical methods for systems of ODEs. The main advantage of such a framework is the automatic way of obtaining arbitrarily high-order methods, which can be put in Runge-Kutta (RK) form. The drawback is the larger computational cost with respect to the most used RK methods. To reduce this cost, in an explicit setting, we propose an efficient modification: we introduce interpolation processes between the DeC iterations, decreasing the computational cost associated with the low-order ones. We provide the Butcher tableaux of the new modified methods and study their stability, showing that in some cases the computational advantage does not affect the stability. The flexibility of the novel modification allows nontrivial applications to PDEs and the construction of adaptive methods. The good performance of the introduced methods is broadly tested on several benchmarks in both ODE and PDE contexts.
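A minimal explicit DeC step for a scalar ODE illustrates the "increasing accuracy at each iteration" behavior: start from a low-order (Euler) prediction and sweep a trapezoidal correction, with the attainable order capped by the quadrature (here 2). This is a sketch of the general DeC idea, not the paper's interpolation-accelerated variant.

```python
def dec_step(f, y0, h, iterations=3):
    """One explicit deferred-correction step for y' = f(y).

    y0: state at the start of the step; h: step size.
    Each sweep re-evaluates the trapezoidal update using the previous
    iterate, raising the accuracy up to the quadrature order (2 here).
    """
    y = y0 + h * f(y0)                       # low-order (Euler) prediction
    for _ in range(iterations):
        y = y0 + 0.5 * h * (f(y0) + f(y))    # correction sweep
    return y
```

With enough sweeps the iterate converges to the (implicit) trapezoidal-rule solution while every sweep stays explicit, which is precisely the trade the DeC framework exploits; higher-order variants use more quadrature nodes per step.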
Abstract: Correction to "Research progress of ferroptosis regulating lipid peroxidation and metabolism in occurrence and development of primary liver cancer" in World J Gastrointest Oncol 2024; 16: 2335-2349, published by Shu YJ, Lao B, and Qiu YY. In this article, we added the correct citations of images.
Abstract: In the article "MicroRNA-329-3p inhibits the Wnt/β-catenin pathway and proliferation of osteosarcoma cells by targeting transcription factor 7-like 1" (Oncology Research, 2024, Vol. 32, No. 3, pp. 463-476. doi: 10.32604/or.2023.044085), there was an error in the compilation of Fig. 8D. We have revised Fig. 8D to correct this error. A corrected version of Fig. 8 is provided. This correction does not change any results or conclusions of the article. We apologize for any inconvenience caused.
Funding: Project supported by the Natural Science Foundation of Zhejiang Province, China (Grant No. LY14A030001).
Abstract: We calculate the thermodynamic quantities of the quantum-corrected Reissner-Nordström-AdS (RN-AdS) black hole and examine their quantum corrections. By analyzing the mass and heat capacity, we give the critical state and the remnant state, respectively, and discuss their consistency. Then, we investigate the quantum tunneling from the event horizon of a massless scalar particle by using the null geodesic method, and of charged massive bosons W^(±) and fermions by using the Hamilton-Jacobi method. It is shown that the same Hawking temperature can be obtained from these tunneling processes of different particles and methods. Next, by using the generalized uncertainty principle (GUP), we study the quantum corrections to the tunneling and the temperature. The logarithmic correction to the black hole entropy is then obtained.
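Schematically, a GUP-induced logarithmic correction to the entropy takes the generic form below, where the coefficient of the logarithm depends on the GUP deformation parameter and on conventions; this form is a standard illustration and is not taken from the abstract itself.

```latex
S \;=\; \frac{A}{4\ell_p^{2}} \;+\; \alpha\,\ln\!\frac{A}{4\ell_p^{2}}
\;+\; \mathcal{O}\!\left(\frac{\ell_p^{2}}{A}\right)
```

Here $A$ is the horizon area, $\ell_p$ the Planck length, and $\alpha$ the model-dependent coefficient fixed by the GUP parameter.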
Abstract: We present a class of preconditioners for the linear systems resulting from finite element or discontinuous Galerkin discretizations of advection-dominated problems. These preconditioners are designed to treat the case of geometrically localized stiffness, where the convergence rates of iterative methods are degraded in a localized subregion of the mesh. Slower convergence may be caused by a number of factors, including the mesh size, anisotropy, highly variable coefficients, and more challenging physics. The approach taken in this work is to correct well-known preconditioners such as the block Jacobi and the block incomplete LU (ILU) with an adaptive inner subregion iteration. The goal of these preconditioners is to reduce the number of costly global iterations by accelerating the convergence in the stiff region through iteration on the less expensive reduced problem. The tolerance for the inner iteration is adaptively chosen to minimize subregion-local work while guaranteeing global convergence rates. We present analysis showing that the convergence of these preconditioners, even when combined with an adaptively selected tolerance, is independent of discretization parameters (e.g., the mesh size and diffusion coefficient) in the subregion. We demonstrate significant performance improvements over black-box preconditioners when applied to several model convection-diffusion problems. Finally, we present performance results of several variations of iterative subregion correction preconditioners applied to the Reynolds number 2.25×10^(6) fluid flow over the NACA 0012 airfoil, as well as massively separated flow at a 30° angle of attack.
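The subregion-correction idea, one global sweep plus extra inner iterations restricted to the stiff block, can be sketched for a pointwise Jacobi preconditioner (an illustrative simplification: the paper uses block Jacobi/ILU with an adaptively chosen inner tolerance rather than the fixed iteration count assumed here):

```python
import numpy as np

def subregion_jacobi_precond(A, stiff, inner_iters=5):
    """Build an apply-function for a subregion-corrected Jacobi preconditioner.

    A: dense system matrix; stiff: boolean mask marking the stiff subregion.
    The returned function performs one diagonal (Jacobi) sweep globally,
    then refines the solution on the stiff block with extra Jacobi sweeps.
    """
    d = np.diag(A).astype(float)
    As = A[np.ix_(stiff, stiff)]          # restriction to the stiff block
    ds = np.diag(As).astype(float)

    def apply(r):
        z = r / d                         # global Jacobi sweep
        zs, rs = z[stiff], r[stiff]       # boolean indexing copies zs
        for _ in range(inner_iters):
            zs = zs + (rs - As @ zs) / ds # inner sweeps on the reduced problem
        z[stiff] = zs
        return z
    return apply
```

Used inside a Krylov solver, the cheap inner sweeps absorb the locally slow convergence so that fewer expensive global iterations are needed, which is the trade-off the abstract describes.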