The assessment of the measurement error status of online Capacitor Voltage Transformers (CVTs) within the power grid is of great significance to the fair trade of electric energy and the secure operation of the grid. This paper proposes an online CVT error state evaluation method based on the in-phase relationship and outlier detection. First, the method leverages the in-phase relationship to remove the influence of primary-side fluctuations in the grid on assessment accuracy. Next, Principal Component Analysis (PCA) is employed to separate the error change information of the CVT from the measured values and to compute statistics that characterize the error state. Finally, the Local Outlier Factor (LOF) is applied to detect outliers in these statistics, with thresholds serving to evaluate the CVT error state. Experimental results demonstrate the effectiveness of the method: it tracks CVT error changes online and assesses the error state with improved reliability, accuracy, and sensitivity, the assessment accuracy reaching 0.01%.
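The LOF step described above can be sketched in a few lines of pure Python. This is a toy illustration only: the statistic values and the threshold of 1.5 are invented, not the paper's PCA-derived statistics or its calibrated threshold.

```python
# Minimal Local Outlier Factor (LOF) on a 1-D statistic series.
# All data and the 1.5 threshold are illustrative, not from the paper.
k = 3

def knn(points, i, k):
    """Indices of the k nearest neighbours of points[i] (excluding i)."""
    order = sorted((j for j in range(len(points)) if j != i),
                   key=lambda j: abs(points[j] - points[i]))
    return order[:k]

def k_distance(points, i, k):
    """Distance from points[i] to its k-th nearest neighbour."""
    return abs(points[knn(points, i, k)[-1]] - points[i])

def reach_dist(points, i, j, k):
    """Reachability distance of point i with respect to point j."""
    return max(k_distance(points, j, k), abs(points[i] - points[j]))

def lrd(points, i, k):
    """Local reachability density of point i."""
    nbrs = knn(points, i, k)
    return len(nbrs) / sum(reach_dist(points, i, j, k) for j in nbrs)

def lof(points, i, k):
    """LOF score: ~1 for inliers, clearly larger for outliers."""
    nbrs = knn(points, i, k)
    return sum(lrd(points, j, k) for j in nbrs) / (len(nbrs) * lrd(points, i, k))

stats = [0.10, 0.11, 0.09, 0.10, 0.12, 0.11, 0.55]  # last statistic drifts
scores = [lof(stats, i, k) for i in range(len(stats))]
flagged = [i for i, s in enumerate(scores) if s > 1.5]
```

In this sketch only the drifting last statistic is flagged; the clustered points all score close to 1.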
Control charts (CCs) are one of the main tools in Statistical Process Control and have been widely adopted in manufacturing over the past decades as an effective strategy for malfunction detection. Measurement errors (M.E.'s) in the quality characteristic of interest can affect a CC's performance. The authors explored the impact of a linear model with additive covariate measurement error on the multivariate cumulative sum (CUSUM) CC for a specific kind of data known as compositional data (CoDa). The average run length (ARL) is used to assess the performance of the proposed chart. The results indicate that measurement errors significantly affect multivariate CUSUM-CoDa CCs. The authors used the Markov chain method to study the impact of the parameters involved, considering six cases for the variance-covariance matrix (VCM): uncorrelated with equal variances, uncorrelated with unequal variances, positively correlated with equal variances, positively correlated with unequal variances, negatively correlated with equal variances, and negatively correlated with unequal variances. They concluded that the error VCM has a negative impact on the performance of the multivariate CUSUM-CoDa CC, as the ARL increases with the value of the error VCM. The subgroup size m and the powering operator b have a positive impact on the proposed CC, as the ARL decreases as m or b increases. The number of variables p has a negative impact, as the ARL increases with p. For implementation, two illustrative examples of multivariate CUSUM-CoDa CCs in the presence of measurement errors are reported: one deals with the manufacturing process of uncoated aspirin tablets, and the other is based on monitoring the machines involved in the muesli manufacturing process.
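The ARL-degradation effect reported above can be reproduced qualitatively with a toy univariate CUSUM (not the multivariate CoDa chart of the paper); the chart constants k and h, the shift, and the error variance are all illustrative.

```python
import math
import random

random.seed(1)

def cusum_run_length(shift, sigma_me, k=0.5, h=5.0):
    """Steps until a one-sided CUSUM signals a mean shift that is observed
    through additive measurement error with standard deviation sigma_me.
    The statistic is scaled to the inflated variance of the observed data,
    which is what dilutes the apparent shift."""
    s, t = 0.0, 0
    scale = math.sqrt(1.0 + sigma_me ** 2)
    while True:
        t += 1
        x = (random.gauss(shift, 1.0) + random.gauss(0.0, sigma_me)) / scale
        s = max(0.0, s + x - k)
        if s > h:
            return t

def arl(shift, sigma_me, runs=2000):
    """Monte Carlo estimate of the average run length."""
    return sum(cusum_run_length(shift, sigma_me) for _ in range(runs)) / runs

arl_clean = arl(1.0, 0.0)  # out-of-control ARL without measurement error
arl_noisy = arl(1.0, 1.0)  # same shift, error variance equal to process variance
```

With the chart rescaled to the inflated observed variance, `arl_noisy` comes out roughly twice `arl_clean`, echoing the finding that a larger error variance lengthens the ARL.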
A method was proposed to analyze how the non-ideal spectroscopic performance of optical components and the orientation errors of a laser tracing measurement optical system influence its tracing measurement performance. A comprehensive model of the interference fringe contrast, based on the measurement principle of the laser tracing system, was established in this study. Simulation results based on ZEMAX verified the model. According to the simulations, the placement angle of the analyzer directly influences the interference fringe contrast: when the angle between the polarized light and the analyzer's transmission axis increased from 65° to 85°, the contrast of each of the four interference fringes decreased from 0.9996 to 0.3528, a reduction of about 65%. With the beam splitter in the interference part (BS 1) at a 5:5 split ratio, changing the splitting ratio of BS 2 from 2:8 to 8:2 increased the fringe contrast of the interference signals received by the photodetectors but decreased the light intensity injected onto the PSD reflected by BS 2. The significant influence on the tracing performance was verified experimentally. As the splitting ratio of BS 2 increased, the contrast of the interference fringes increased, but the weakened incident light intensity at the PSD increased the response time of the tracing system by 23.7 ms, degrading the tracing performance of the laser tracing measurement optical system. This work provides an important theoretical basis for evaluating and improving the accuracy and reliability of laser tracing measurement systems.
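The dependence of fringe contrast on the beam-splitting ratio follows the standard two-beam visibility formula V = 2*sqrt(I1*I2)/(I1+I2); a minimal sketch (the intensities are illustrative, not the paper's measured values):

```python
import math

def fringe_contrast(i1, i2):
    """Two-beam interference visibility: V = 2*sqrt(I1*I2)/(I1 + I2)."""
    return 2.0 * math.sqrt(i1 * i2) / (i1 + i2)

v_balanced = fringe_contrast(0.5, 0.5)  # 5:5 split -> unit contrast
v_skewed = fringe_contrast(0.2, 0.8)    # 2:8 split -> reduced contrast
```

Unequal beam intensities lower the visibility, which is why the splitting ratios of BS 1 and BS 2 enter the contrast model directly.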
This paper investigates the anomaly-resistant decentralized state estimation (SE) problem for a class of wide-area power systems that are divided into several non-overlapping areas connected through transmission lines. Two classes of measurements (i.e., local measurements and edge measurements) are obtained from the individual areas and the transmission lines, respectively. A decentralized state estimator whose performance is resistant to measurements with anomalies is designed based on the minimum error entropy with fiducial points (MEEF) criterion. Specifically, 1) an augmented model incorporating the local prediction and the local measurement is developed by resorting to the unscented transformation and statistical linearization approaches; 2) using the augmented model, an MEEF-based cost function is designed that reflects the local prediction errors of the state and the measurement; and 3) the local estimate is first obtained by minimizing the MEEF-based cost function through a fixed-point iteration and then updated using the edge measurement information. Finally, simulation experiments with three scenarios are carried out on the IEEE 14-bus system to illustrate the validity of the proposed anomaly-resistant decentralized SE scheme.
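The fixed-point iteration in step 3) can be illustrated with a simpler relative of MEEF: a correntropy-type robust location estimate in which each sample is reweighted by a Gaussian kernel of its current residual. This is a toy analogue, not the paper's MEEF estimator; the data and kernel bandwidth are invented.

```python
import math

def robust_mean(samples, sigma=1.0, iters=50):
    """Fixed-point iteration for a correntropy-type robust location estimate:
    samples are weighted by exp(-(residual^2)/(2*sigma^2)), so anomalous
    measurements are automatically down-weighted."""
    mu = sum(samples) / len(samples)
    for _ in range(iters):
        w = [math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) for x in samples]
        mu = sum(wi * xi for wi, xi in zip(w, samples)) / sum(w)
    return mu

data = [1.0, 1.1, 0.9, 1.05, 0.95, 8.0]  # last value is an anomaly
naive = sum(data) / len(data)            # pulled toward the outlier
robust = robust_mean(data)               # stays near the inlier cluster
```

The anomalous measurement receives a near-zero weight after a few iterations, which is the same mechanism that makes entropy-based criteria resistant to measurement anomalies.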
In the era of exponentially growing data availability, system architectures trend toward high dimensionality, and directly exploiting holistic information for state inference is not always computationally affordable. This paper proposes a novel Bayesian filtering algorithm that balances computational cost and estimation accuracy for high-dimensional linear systems. The high-dimensional state vector is divided into several blocks to save computational resources by avoiding the calculation of an error covariance of immense dimension. Two sequential states are then estimated simultaneously by introducing an auxiliary variable in the new probability space, mitigating the performance degradation caused by state segmentation. Moreover, the computational cost and error covariance of the proposed algorithm are analyzed analytically to show its distinct features compared with several existing methods. Simulation results illustrate that the proposed Bayesian filter maintains higher estimation accuracy at reasonable computational cost when applied to high-dimensional linear systems.
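The block-wise idea can be sketched as follows, assuming (for illustration only) a block-diagonal system so that each block reduces to an independent scalar Kalman filter; the paper's auxiliary-variable coupling between blocks is not reproduced here, and all model constants are invented.

```python
import random

random.seed(0)

def scalar_kf(zs, a=1.0, q=0.01, r=0.25):
    """Minimal scalar Kalman filter for x_k = a*x_{k-1} + w, z_k = x_k + v."""
    x, p = 0.0, 1.0
    for z in zs:
        x, p = a * x, a * a * p + q            # predict
        g = p / (p + r)                        # Kalman gain
        x, p = x + g * (z - x), (1.0 - g) * p  # update
    return x

# A 4-dimensional state with (assumed) block-diagonal structure is filtered
# block by block, so the full 4x4 error covariance is never formed.
truth = [1.0, -2.0, 0.5, 3.0]
measurements = [[t + random.gauss(0.0, 0.5) for _ in range(60)] for t in truth]
estimates = [scalar_kf(z) for z in measurements]
mean_abs_err = sum(abs(e - t) for e, t in zip(estimates, truth)) / len(truth)
```

For n states split into b blocks, the covariance storage drops from n^2 entries to roughly n^2/b, which is the computational saving the segmentation targets.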
An externally generated resonant magnetic perturbation can induce complex non-ideal MHD responses at its resonant surfaces. We studied the plasma responses using Fitzpatrick's improved two-fluid model and the program LAYER, and calculated the error field penetration threshold for J-TEXT. In addition, we find that the island width increases slightly as the error field amplitude increases while the amplitude remains below the critical penetration value; once penetration occurs, however, the island width suddenly jumps to a large value because the plasma's shielding of the error field disappears. By scanning the natural mode frequency, we find that the shielding effect of the plasma decreases as the natural mode frequency decreases. Finally, we obtain the scaling of the m/n = 2/1 penetration threshold with density and temperature.
The application of the Intelligent Internet of Things (IIoT) in constructing distribution station areas strongly supports platform transformation, upgrade, and intelligent integration. The sensing layer of the IIoT comprises the edge convergence layer and the end sensing layer, with the former using intelligent fusion terminals for real-time data collection and processing. However, the influx of numerous low-voltage devices into the smart grid places higher demands on the performance, energy efficiency, and response speed of the substation fusion terminals, and it simultaneously brings significant security risks to the entire distribution substation, posing a major challenge to the smart grid. In response to these challenges, a dynamic and energy-efficient trust measurement scheme for smart grids is proposed. The scheme begins by establishing a hierarchical trust measurement model that elucidates the trust relationships among smart IoT terminals. It then incorporates multidimensional measurement factors encompassing static environmental factors, dynamic behaviors, and energy states; this comprehensive approach reduces the impact of subjective factors on trust measurement. Additionally, the scheme includes a detection process for identifying malicious low-voltage end sensing units, ensuring that malicious terminals are promptly identified and eliminated, which in turn enhances the security and reliability of the smart grid environment. The effectiveness of the proposed scheme in pinpointing malicious nodes has been demonstrated through simulation experiments, and the scheme notably outperforms established trust metric models in terms of energy efficiency.
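A minimal sketch of multidimensional trust fusion, with invented factor values, weights, and threshold; the paper's actual model is hierarchical and considerably more elaborate.

```python
def trust_score(static, dynamic, energy, weights=(0.3, 0.5, 0.2)):
    """Weighted fusion of the three measurement dimensions named in the text.
    All factor values and weights are illustrative and normalized to [0, 1]."""
    ws, wd, we = weights
    return ws * static + wd * dynamic + we * energy

terminals = {
    "fusion-01": trust_score(0.9, 0.95, 0.8),
    "sensor-17": trust_score(0.8, 0.20, 0.6),  # anomalous behaviour score
}
# Terminals falling below an (illustrative) trust threshold are flagged.
malicious = [name for name, s in terminals.items() if s < 0.5]
```

The anomalous end sensing unit is flagged because its dynamic-behaviour dimension drags the fused score below the threshold even though its static factors look normal.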
Timer error, as well as its convention, is important for dose accuracy during irradiation. This paper determines the timer error of the irradiators at the Secondary Standard Dosimetry Laboratory (SSDL) in Nigeria: a Cs-137 OB6 irradiator and an X-ray irradiator at the protection-level SSDL, and a Co-60 irradiator at the therapy-level SSDL. A PTW UNIDOS electrometer and an LS01 ionization chamber were used at the protection level to obtain doses for the Cs-137 OB6 and X-ray irradiators, while an IBA Farmer-type ionization chamber and an IBA DOSE 1 electrometer were used at the therapy-level SSDL. The single/multiple exposure method and the graphical method were used to determine the timer error of the three irradiators. The timer error was 0.48 ± 0.01 s for the Cs-137 OB6 irradiator, 0.09 ± 0.01 s for the X-ray irradiator, and 1.21 ± 0.04 s for the Co-60 GammaBeam X200 irradiator. Source-to-detector distance and field size were found not to contribute to the timer error. The timer error of the Co-60 GammaBeam X200 irradiator, the only irradiator among the three with a pneumatic source transfer system, increases with the age of the machine.
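The graphical method amounts to fitting the meter reading against the set time, M = k*(t + tau), so the timer error tau falls out as intercept/slope. A sketch with synthetic readings (the dose rate and set times are invented; the 0.48 s error mirrors the Cs-137 result only for illustration):

```python
def fit_line(ts, ms):
    """Ordinary least squares for m = slope*t + intercept."""
    n = len(ts)
    tbar, mbar = sum(ts) / n, sum(ms) / n
    slope = (sum((t - tbar) * (m - mbar) for t, m in zip(ts, ms))
             / sum((t - tbar) ** 2 for t in ts))
    return slope, mbar - slope * tbar

tau_true = 0.48                                    # s, built-in timer error
times = [30.0, 60.0, 90.0, 120.0]                  # set irradiation times, s
readings = [2.0 * (t + tau_true) for t in times]   # reading = rate*(t + tau)
slope, intercept = fit_line(times, readings)
tau = intercept / slope                            # recovered timer error, s
```

Because tau enters every exposure identically, the fit recovers it regardless of the dose rate, which is why the method is insensitive to source-to-detector distance.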
Weak measurement amplification, considered a very promising scheme in precision measurement, has been applied to the estimation of various small physical quantities. Since many physical quantities can be converted into phase signals, it is interesting and important to consider measuring small longitudinal phase shifts by weak measurement. Here, we propose and experimentally demonstrate a novel weak-measurement-amplification-based estimation of small longitudinal phases that is suitable for polarization interferometry. We realize an order-of-magnitude amplification of a small phase signal introduced directly by a liquid crystal variable retarder and show that the scheme is robust to imperfect interference. In addition, we analyze the effect of the magnification error, which was not considered in previous works, and find the constraint it places on the magnification. Our results may find important applications in high-precision measurements, e.g., gravitational wave detection.
The widespread adoption of the Internet of Things (IoT) has transformed various sectors globally, making them more intelligent and connected. However, this advancement comes with challenges related to the effectiveness of IoT devices. These devices, present in offices, homes, industries, and more, need constant monitoring to ensure their proper functionality. The success of smart systems relies on their seamless operation and ability to handle faults. Sensors, crucial components of these systems, gather data and contribute to their functionality; sensor faults can therefore compromise a system's reliability and undermine the trustworthiness of smart environments. To address these concerns, various techniques and algorithms can be employed to enhance the performance of IoT devices through effective fault detection. This paper conducted a thorough review of the existing literature and a detailed analysis that links sensor errors with the prominent fault detection techniques capable of addressing them. The study is innovative in that it paves the way for future researchers to explore errors that have not yet been tackled by existing fault detection methods. Significantly, the paper also highlights essential factors for selecting and adopting fault detection techniques, as well as the characteristics of datasets and their corresponding recommended techniques. Additionally, it presents a methodical overview of the fault detection techniques employed in smart devices, including the metrics used for their evaluation, and examines the body of academic work related to sensor faults and fault detection techniques within the domain. This reflects the growing inclination and scholarly attention of researchers and academicians toward strategies for fault detection within the realm of the Internet of Things.
In this paper, an improved spatio-temporal alignment measurement method is presented to address the inertial matching measurement of hull deformation when time delay and large misalignment angle coexist. Large misalignment angles and time delays often occur simultaneously and pose great challenges to the accurate measurement of hull deformation in space and time. The proposed method combines coarse alignment under large misalignment angle with time delay estimation based on inertial measurement unit modeling to establish a new spatio-temporally aligned hull deformation measurement model. In addition, a two-step loop control is designed to ensure that the dynamic and static deformation angles are accurately described by the spatio-temporal alignment method. Experiments illustrate that the proposed method can effectively measure the hull deformation angle when time delay and large misalignment angle coexist.
In this paper, let M_(n) denote the maximum of n independent observations from the logarithmic general error distribution with parameter v ≥ 1. Higher-order expansions for the distributions of the powered extremes M_(n)^(p) are derived under an optimal choice of normalizing constants. It is shown that for v = 1, M_(n)^(p) converges to the Fréchet extreme value distribution at the rate 1/n, and for v > 1, M_(n)^(p) converges to the Gumbel extreme value distribution at the rate (loglog n)^(2)/(log n)^(1-1/v).
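The flavour of such limit theorems can be seen in a classical toy case (the exponential law, not the logarithmic general error law of the paper): maxima of n i.i.d. Exp(1) variables, centred by log n, approach a Gumbel distribution whose mean is the Euler-Mascheroni constant, approximately 0.5772.

```python
import math
import random

random.seed(7)

def centred_max(n):
    """Max of n i.i.d. Exp(1) samples, centred by log n; its law tends to
    the standard Gumbel distribution as n grows."""
    return max(random.expovariate(1.0) for _ in range(n)) - math.log(n)

samples = [centred_max(1000) for _ in range(3000)]
mean_est = sum(samples) / len(samples)  # Gumbel mean = Euler-Mascheroni constant
```

The Monte Carlo mean lands near 0.5772, illustrating the kind of distributional convergence whose *rate* the paper quantifies for powered extremes.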
Owing to the complex lithology of unconventional reservoirs, field interpreters usually need logging simulation models to provide a basis for interpretation. Among the various detection tools that use nuclear sources, the detector response can reflect many types of information about the medium. The Monte Carlo method is one of the primary means of obtaining nuclear detection responses in complex environments; however, it requires extensive random sampling, consumes considerable computational resources, and cannot provide real-time results. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement is proposed that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments. First, the data library required by the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model of the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the impact of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulated environments, the maximum relative error between the porosity calculated by Monte Carlo and by the FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using the FFCM agreed well with the model obtained by interpreters, which demonstrates the validity and accuracy of the proposed method.
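At its core, a first-order perturbation construction of this kind expresses the detector response as a base value plus sensitivity-weighted property changes over the regions of the detection volume. Schematically (all numbers below are invented; the paper's flux sensitivity functions come from its Monte Carlo library):

```python
# First-order perturbation sketch: response = base response plus the sum of
# flux-sensitivity-weighted property perturbations over detection regions.
base_response = 100.0
sensitivity = [0.8, 0.5, 0.3, 0.1]    # flux sensitivity per region (made up)
perturbation = [2.0, -1.0, 0.5, 0.0]  # property change per region (made up)

response = base_response + sum(s * d for s, d in zip(sensitivity, perturbation))
```

Because the sensitivities are precomputed once, evaluating a new environment costs only one weighted sum instead of a fresh Monte Carlo run, which is the source of the speed-up.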
We propose a fast, adaptive multiscale-resolution spectral measurement method based on compressed sensing. The method can apply variable measurement resolution over the entire spectral range, reducing the measurement time by over 75% compared with a global high-resolution measurement. Mimicking the human retina, the resolution distribution follows the principle of gradually decreasing resolution away from the region of interest. The system allows the spectral peaks of interest to be captured dynamically or to be specified a priori by the user. The system was tested by measuring single and dual spectral peaks, and the measured peaks are consistent with those of global high-resolution measurements.
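The retina-like resolution allocation can be sketched as bins whose width grows with distance from the peak of interest; the growth factor and base width below are illustrative, not the paper's parameters.

```python
def bin_widths(n_bins, center, base=0.1, growth=2.0):
    """Retina-like resolution: the bin at `center` uses the finest width
    `base`; widths grow geometrically with distance from the peak."""
    return [base * growth ** abs(i - center) for i in range(n_bins)]

widths = bin_widths(7, center=3)
total_span = sum(widths)   # spectral span covered by the 7 bins
finest = min(widths)       # resolution at the peak of interest
```

Seven variable-width bins here cover a span that would need 29 uniform bins at the finest width, which is the kind of saving that shortens the measurement time.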
Finesse is a critical parameter describing the characteristics of an optical enhancement cavity (OEC). This paper first presents a review of finesse measurement techniques, including a comparative analysis of the advantages, disadvantages, and potential limitations of several main methods from both theoretical and practical perspectives. A variant of an existing method, called the free spectral range (FSR) modulation method, is proposed and compared with three other finesse measurement methods: the fast-switching cavity ring-down (CRD) method, the rapidly swept-frequency (SF) CRD method, and the ringing effect method. A high-power OEC platform with a high finesse of approximately 16,000 is built and measured with all four methods. Comparing their performance shows that the FSR modulation method and the fast-switching CRD method are more suitable and accurate than the other two for high-finesse OEC measurements, while the CRD method and the ringing effect method can be implemented in open loop with simple equipment and are easy to perform. Additionally, recommendations for selecting finesse measurement methods under different conditions are proposed, which benefits the development of OECs and their applications.
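For the ring-down methods, the finesse follows from the measured photon lifetime via the textbook relation F = pi*c*tau/L, with tau the 1/e intensity decay time and L the length of a two-mirror cavity. The numbers below are illustrative, chosen only to land near the paper's F of roughly 16,000.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def finesse_from_ringdown(tau, cavity_length):
    """Finesse from the ring-down decay constant of a two-mirror cavity:
    F = FSR / FWHM = (c / 2L) * 2*pi*tau = pi * c * tau / L."""
    return math.pi * C * tau / cavity_length

tau = 1.7e-5                          # s, illustrative decay constant
f = finesse_from_ringdown(tau, 1.0)   # assumed 1 m cavity
```

This is why ring-down methods need only a fast shutter and a photodiode: a single exponential fit of the decaying transmission yields tau, and hence F.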
Readout errors caused by measurement noise are a significant source of error in quantum circuits: they severely affect the output results and are an urgent problem in noisy intermediate-scale quantum (NISQ) computing. In this paper, we use the bit-flip averaging (BFA) method to mitigate readout errors in quantum generative adversarial networks (QGANs) for image generation. The method simplifies the structure of the response matrix by averaging over the qubits with random bit-flips applied in advance, avoiding the high measurement cost of traditional error mitigation methods. Our experiments were simulated in Qiskit using the handwritten digit image recognition dataset. Under the BFA-based method, the Kullback-Leibler (KL) divergence of the generated images converges to 0.04, 0.05, and 0.1 for readout error probabilities of p = 0.01, p = 0.05, and p = 0.1, respectively. Additionally, by evaluating the fidelity of the quantum states representing the images, we observe average fidelities of 0.97, 0.96, and 0.95 for the three readout error probabilities, respectively. These results demonstrate the robustness of the model against readout errors and provide a highly fault-tolerant mechanism for image generation models.
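BFA restructures the response matrix, but the underlying readout correction it feeds into is response-matrix inversion. A single-qubit sketch with an assumed 5% symmetric readout error (the counts are invented):

```python
def mitigate_counts(p0_given_0, p1_given_1, counts):
    """Invert the 2x2 readout response matrix A for one qubit:
    observed = A @ true, with A = [[p(0|0), 1-p(1|1)], [1-p(0|0), p(1|1)]]."""
    a, d = p0_given_0, p1_given_1
    b, c = 1.0 - d, 1.0 - a
    det = a * d - b * c
    n0, n1 = counts
    return ((d * n0 - b * n1) / det, (-c * n0 + a * n1) / det)

# True counts (800, 200) pushed through a 5% symmetric readout error:
observed = (0.95 * 800 + 0.05 * 200, 0.05 * 800 + 0.95 * 200)
true0, true1 = mitigate_counts(0.95, 0.95, observed)
```

For n qubits the full response matrix is 2^n by 2^n, which is exactly the measurement and inversion cost that averaging over random bit-flips is designed to avoid.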
Visible light communication (VLC) has attracted much attention in research on sixth-generation (6G) systems, and channel modeling is the foundation for designing efficient and robust VLC systems. In this paper, we present extensive VLC channel measurement campaigns in indoor environments, i.e., an office and a corridor. Based on the measured data, the large-scale fading characteristics and multipath-related characteristics, including the omnidirectional optical path loss (OPL), K-factor, power angular spectrum (PAS), angle spread (AS), and clustering characteristics, are analyzed and modeled through a statistical method. From the extracted statistics of these channel characteristics, we propose a statistical spatial channel model (SSCM) capable of modeling multipath in the spatial domain, and we compare the statistics simulated by the proposed model with the measured ones. For instance, in the office, the simulated and measured path loss exponents (PLEs) are 1.96 and 1.97, respectively, and the simulated and measured medians of the AS are 25.94° and 24.84°, respectively. The good agreement between simulated and measured results demonstrates the accuracy of our SSCM.
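The PLE is the slope of path loss against 10*log10(d) in the log-distance model PL(d) = PL0 + 10*n*log10(d/d0). A least-squares sketch that recovers an assumed exponent of 1.97 from synthetic data (the reference loss PL0, reference distance d0, and distances are invented):

```python
import math

def fit_ple(distances, path_losses, d0=1.0, pl0=40.0):
    """Least-squares estimate of the path loss exponent n in
    PL(d) = PL0 + 10*n*log10(d/d0), with known PL0 and d0."""
    xs = [10.0 * math.log10(d / d0) for d in distances]
    ys = [pl - pl0 for pl in path_losses]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

dists = [1.5, 2.0, 3.0, 4.0, 5.0]                              # m
losses = [40.0 + 10.0 * 1.97 * math.log10(d) for d in dists]   # synthetic, n = 1.97
n_hat = fit_ple(dists, losses)
```

With real measurements the residuals around this fit give the shadowing term, which is the other half of the large-scale fading model.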
This study explores the application of single-photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. It investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing, and compares the error correction performance of low-density parity-check (LDPC) and Reed-Solomon (RS) codes. The effects of the unscattered photon ratio and the depolarization ratio on the BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization-multiplexed OOK modulation and 4.37 Mbps with polarization-multiplexed 2-PPM modulation using LDPC error correction.
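For photon-counting OOK, the BER of an idealized threshold receiver follows directly from Poisson statistics; a sketch with invented mean photon counts and threshold (not the paper's link budget):

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for N ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k + 1))

def ook_ber(mean_signal, mean_dark, threshold):
    """OOK bit error rate for an ideal photon-counting receiver that decides
    '1' when the count exceeds `threshold` (equiprobable bits assumed)."""
    p_miss = poisson_cdf(threshold, mean_signal + mean_dark)  # sent 1, read 0
    p_false = 1.0 - poisson_cdf(threshold, mean_dark)         # sent 0, read 1
    return 0.5 * (p_miss + p_false)

ber = ook_ber(mean_signal=20.0, mean_dark=0.5, threshold=5)
```

Scattering and depolarization shrink the effective `mean_signal`, which is how the unscattered photon ratio and depolarization ratio feed into the BER in such a model.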
In this paper, an efficient unequal error protection (UEP) scheme for online fountain codes is proposed. In the build-up phase, a traversing-selection strategy is proposed to select the most important symbols (MIS); then, in the completion phase, a weighted-selection strategy is applied to provide low overhead. The performance of the proposed scheme is analyzed and compared with the existing UEP online fountain scheme. Simulation results show that, at a bit error ratio of 10^-4, the proposed scheme achieves overhead reductions of 85% for the MIS and 31.58% for the least important symbols (LIS).
In existing landslide susceptibility prediction (LSP) models, the influence of random errors in the landslide conditioning factors on LSP is not considered; instead, the original conditioning factors are taken directly as model inputs, which brings uncertainty to the LSP results. This study aims to reveal how different proportions of random error in the conditioning factors affect LSP uncertainty, and further explores a method that can effectively reduce these random errors. The original conditioning factors are first used to construct original-factor-based LSP models, and random errors of 5%, 10%, 15%, and 20% are then added to these factors to construct error-based LSP models. Second, low-pass-filter-based LSP models are constructed by eliminating the random errors with a low-pass filter. Third, Ruijin County, China, with 370 landslides and 16 conditioning factors, is used as the study case, and three typical machine learning models, i.e., multilayer perceptron (MLP), support vector machine (SVM), and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) the low-pass filter can effectively reduce the random errors in the conditioning factors and thus decrease the LSP uncertainties; (2) as the proportion of random error increases from 5% to 20%, the LSP uncertainty increases continuously; (3) the original-factor-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the two sources of uncertainty, the choice of machine learning model and the proportion of random error, have large and roughly equal influence on LSP modeling; and (5) Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random error in the conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
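The low-pass filtering step can be illustrated with a centred moving average on a synthetic conditioning factor: the RMSE against the clean factor drops after smoothing. The window length, trend, and noise level are illustrative, not the study's settings.

```python
import random

random.seed(42)

def moving_average(xs, window=5):
    """Simple low-pass filter: centred moving average with edge clamping."""
    half = window // 2
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - half), min(len(xs), i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

true_factor = [i / 50.0 for i in range(100)]              # smooth spatial trend
noisy = [x + random.gauss(0.0, 0.1) for x in true_factor] # added random error
smoothed = moving_average(noisy)

err_noisy = rmse(noisy, true_factor)
err_smooth = rmse(smoothed, true_factor)
```

Averaging over a window suppresses the zero-mean random error while largely preserving the slowly varying factor, which is the mechanism behind the uncertainty reduction reported above.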
Funding for the control charts study: supported by the National Natural Science Foundation of China (Grant No. 71802110) and the Humanity and Social Science Foundation of the Ministry of Education of China (Grant No. 19YJA630061).
Funding: Sponsored by the National Natural Science Foundation of China (Grant No. 52175491).
Abstract: A method was proposed to analyze the influences of the non-ideal spectroscopic performance of optical components and the orientation errors of a laser tracing measurement optical system on tracing measurement performance. A comprehensive model of the interference fringe contrast, based on the laser tracing system's measurement principle, was established in this study. Simulation results based on ZEMAX verified the model. According to the simulation results, the placement angle of the analyzer had a direct influence on the interference fringe contrast. When the angle of the polarized light to the analyzer's transmission axis increased from 65° to 85°, the contrast of each of the four-way interference fringes decreased from 0.9996 to 0.3528, a reduction of 65%. With the split ratio of the beam splitter in the interference part (BS 1) fixed at 5:5, when the splitting ratio of BS 2 changed from 2:8 to 8:2, the fringe contrast of the interference signals received by the photodetectors increased, but the light intensity injected onto the PSD reflected by BS 2 decreased. The significant influence on the tracing performance was verified by experiments. When the splitting ratio of BS 2 increased, the contrast of the interference fringes increased. Owing to the weakening of the incident light intensity on the PSD caused by the change of the BS 2 splitting ratio, the response time of the tracing system increased by 23.7 ms. As a result, the tracing performance of the laser tracing measurement optical system was degraded. An important theoretical basis was provided to evaluate and improve the accuracy and reliability of laser tracing measurement systems.
Funding: Supported in part by the National Natural Science Foundation of China (61933007, U21A2019, 62273005, 62273088, 62303301), the Program of Shanghai Academic/Technology Research Leader of China (20XD1420100), the Hainan Province Science and Technology Special Fund of China (ZDYF2022SHFZ105), the Natural Science Foundation of Anhui Province of China (2108085MA07), and the Alexander von Humboldt Foundation of Germany.
Abstract: This paper investigates the anomaly-resistant decentralized state estimation (SE) problem for a class of wide-area power systems which are divided into several non-overlapping areas connected through transmission lines. Two classes of measurements (i.e., local measurements and edge measurements) are obtained, respectively, from the individual areas and the transmission lines. A decentralized state estimator, whose performance is resistant against measurements with anomalies, is designed based on the minimum error entropy with fiducial points (MEEF) criterion. Specifically, 1) an augmented model, which incorporates the local prediction and the local measurement, is developed by resorting to the unscented transformation approach and the statistical linearization approach; 2) using the augmented model, an MEEF-based cost function is designed that reflects the local prediction errors of the state and the measurement; and 3) the local estimate is first obtained by minimizing the MEEF-based cost function through a fixed-point iteration and then updated by using the edge measurement information. Finally, simulation experiments with three scenarios are carried out on the IEEE 14-bus system to illustrate the validity of the proposed anomaly-resistant decentralized SE scheme.
Funding: Supported in part by the National Key R&D Program of China (2022YFC3401303), the Natural Science Foundation of Jiangsu Province (BK20211528), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KFCX22_2300).
Abstract: In the era of exponential growth in data availability, system architectures trend toward high dimensionality, and directly exploiting holistic information for state inference is not always computationally affordable. This paper proposes a novel Bayesian filtering algorithm that considers both algorithmic computational cost and estimation accuracy for high-dimensional linear systems. The high-dimensional state vector is divided into several blocks to save computational resources by avoiding the calculation of an error covariance of immense dimensions. After that, two sequential states are estimated simultaneously by introducing an auxiliary variable in the new probability space, mitigating the performance degradation caused by state segmentation. Moreover, the computational cost and error covariance of the proposed algorithm are analyzed analytically to show its distinct features compared with several existing methods. Simulation results illustrate that the proposed Bayesian filter can maintain higher estimation accuracy at reasonable computational cost when applied to high-dimensional linear systems.
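The block idea can be illustrated with a toy sketch (my own construction, not the paper's algorithm): a constant high-dimensional state observed in noise is filtered block by block, so only small per-block covariance matrices are ever formed. The block count, noise levels, and dimensions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def blockwise_kalman(ys, n_blocks, q=1e-4, r=0.25):
    """Filter a high-dimensional constant state block by block.

    Each block keeps its own small covariance, so the per-step cost scales
    with the sum of block_dim**3 instead of the full state dimension cubed.
    """
    dim = ys.shape[1]
    idx = np.array_split(np.arange(dim), n_blocks)
    x = np.zeros(dim)
    Ps = [np.eye(len(i)) for i in idx]
    for y in ys:
        for b, i in enumerate(idx):
            P = Ps[b] + q * np.eye(len(i))                 # predict
            K = P @ np.linalg.inv(P + r * np.eye(len(i)))  # Kalman gain
            x[i] = x[i] + K @ (y[i] - x[i])                # update
            Ps[b] = (np.eye(len(i)) - K) @ P
    return x

# hypothetical example: 8-dimensional state split into 4 blocks
true_x = np.arange(8, dtype=float)
ys = true_x + 0.5 * rng.standard_normal((200, 8))
est = blockwise_kalman(ys, n_blocks=4)
```

This sketch omits the auxiliary-variable coupling described in the abstract; it only shows the dimensionality saving that motivates the segmentation.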
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 51821005).
Abstract: An externally generated resonant magnetic perturbation can induce complex non-ideal MHD responses at its resonant surfaces. We have studied the plasma responses using Fitzpatrick's improved two-fluid model and the program LAYER, and calculated the error field penetration threshold for J-TEXT. In addition, we find that the island width increases slightly as the error field amplitude increases while the amplitude remains below the critical penetration value; however, the island width suddenly jumps to a large value once the shielding effect of the plasma against the error field disappears after penetration. By scanning the natural mode frequency, we find that the shielding effect of the plasma decreases as the natural mode frequency decreases. Finally, we obtain the scaling of the m/n = 2/1 penetration threshold with density and temperature.
Funding: This project is partly funded by the Science and Technology Project of State Grid Zhejiang Electric Power Co., Ltd., "Research on Active Security Defense Strategies for Distribution Internet of Things Based on Trustworthy", under Grant No. 5211DS22000G.
Abstract: The application of the Intelligent Internet of Things (IIoT) in constructing distribution station areas strongly supports platform transformation, upgrading, and intelligent integration. The sensing layer of the IIoT comprises the edge convergence layer and the end sensing layer, with the former using intelligent fusion terminals for real-time data collection and processing. However, the influx of numerous low-voltage terminals into the smart grid raises higher demands on the performance, energy efficiency, and response speed of the substation fusion terminals. Simultaneously, it brings significant security risks to the entire distribution substation, posing a major challenge to the smart grid. In response to these challenges, a dynamic and energy-efficient trust measurement scheme for smart grids is proposed. The scheme begins by establishing a hierarchical trust measurement model that elucidates the trust relationships among smart IoT terminals. It then incorporates multidimensional measurement factors, encompassing static environmental factors, dynamic behaviors, and energy states; this comprehensive approach reduces the impact of subjective factors on trust measurements. Additionally, the scheme incorporates a detection process designed to identify malicious low-voltage end sensing units, ensuring the prompt identification and elimination of any malicious terminals. This, in turn, enhances the security and reliability of the smart grid environment. The effectiveness of the proposed scheme in pinpointing malicious nodes has been demonstrated through simulation experiments. Notably, the scheme outperforms established trust metric models in terms of energy efficiency, showcasing its significant contribution to the field.
Abstract: Timer error, as well as its correction, is very important for dose accuracy during irradiation. This paper determines the timer error of the irradiators at the Secondary Standard Dosimetry Laboratory (SSDL) in Nigeria: a Cs-137 OB6 irradiator and an X-ray irradiator at the Protection Level SSDL, and a Co-60 irradiator at the Therapy Level SSDL. A PTW UNIDOS electrometer and an LS01 ionization chamber were used at the Protection Level to obtain doses for both the Cs-137 OB6 and X-ray irradiators, while an IBA Farmer-type ionization chamber and an IBA DOSE 1 electrometer were used at the Therapy Level SSDL. The single/multiple exposure method and the graphical method were used to determine the timer error of the three irradiators. The timer error obtained for the Cs-137 OB6 irradiator was 0.48 ± 0.01 s, the timer error for the X-ray irradiator was 0.09 ± 0.01 s, and the timer error obtained for the GammaBeam X200 was 1.21 ± 0.04 s. It was observed that source-to-detector distance and field size do not contribute to the timer error of the irradiators, and that the timer error of the Co-60 GammaBeam X200 irradiator (the only irradiator among the three with a pneumatic system) increases with the age of the machine.
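The multiple-exposure determination can be sketched as follows (a minimal illustration, not the authors' procedure; the dose readings and rate are hypothetical). If each reading follows D = rate × (t_set + τ), two exposures at different set times let τ be solved without knowing the dose rate:

```python
def timer_error(d1, t1, d2, t2):
    """Timer error tau from two timed exposures, assuming D = rate * (t + tau).

    Eliminating the unknown dose rate from the two readings gives
    tau = (d1*t2 - d2*t1) / (d2 - d1).
    """
    return (d1 * t2 - d2 * t1) / (d2 - d1)

# hypothetical readings: true rate 2.0 units/s, true tau 0.48 s
tau = timer_error(2.0 * (30 + 0.48), 30, 2.0 * (60 + 0.48), 60)
```

Because the rate cancels, the same τ is recovered at any source-to-detector distance, consistent with the observation that distance does not affect the timer error.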
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 92065113, 11904357, 62075208, and 12174367), the Innovation Programme for Quantum Science and Technology (Grant No. 2021ZD0301604), and the National Key Research and Development Program of China (Grant No. 2021YFE0113100), and supported by the Beijing Academy of Quantum Information Sciences.
Abstract: Weak measurement amplification, which is considered a very promising scheme in precision measurement, has been applied to the estimation of various small physical quantities. Since many physical quantities can be converted into phase signals, it is interesting and important to consider measuring small longitudinal phase shifts by using weak measurement. Here, we propose and experimentally demonstrate a novel weak-measurement-amplification-based small longitudinal phase estimation, which is suitable for polarization interferometry. We realize one-order-of-magnitude amplification measurement of a small phase signal directly introduced by a liquid crystal variable retarder and show that it is robust to imperfections of interference. Besides, we analyze the effect of magnification error, which was never considered in previous works, and find the constraint on the magnification. Our results may find important applications in high-precision measurements, e.g., gravitational wave detection.
Abstract: The widespread adoption of the Internet of Things (IoT) has transformed various sectors globally, making them more intelligent and connected. However, this advancement comes with challenges related to the effectiveness of IoT devices. These devices, present in offices, homes, industries, and more, need constant monitoring to ensure their proper functionality. The success of smart systems relies on their seamless operation and ability to handle faults. Sensors, crucial components of these systems, gather data and contribute to their functionality. Therefore, sensor faults can compromise a system's reliability and undermine the trustworthiness of smart environments. To address these concerns, various techniques and algorithms can be employed to enhance the performance of IoT devices through effective fault detection. This paper conducted a thorough review of the existing literature and a detailed analysis. This analysis effectively links sensor errors with prominent fault detection techniques capable of addressing them. This study is innovative because it paves the way for future researchers to explore errors that have not yet been tackled by existing fault detection methods. Significantly, the paper also highlights essential factors for selecting and adopting fault detection techniques, as well as the characteristics of datasets and their corresponding recommended techniques. Additionally, the paper presents a methodical overview of fault detection techniques employed in smart devices, including the metrics used for evaluation. Furthermore, the paper examines the body of academic work related to sensor faults and fault detection techniques within the domain. This reflects the growing inclination and scholarly attention of researchers and academicians toward strategies for fault detection within the realm of the Internet of Things.
Funding: Supported by the Beijing Institute of Technology Research Fund Program for Young Scholars (2020X04104).
Abstract: In this paper, an improved spatio-temporal alignment measurement method is presented to address the inertial matching measurement of hull deformation when time delay and large misalignment angle coexist. A large misalignment angle and a time delay often occur simultaneously and pose great challenges to the accurate measurement of hull deformation in space and time. The proposed method utilizes coarse alignment under a large misalignment angle and time delay estimation based on inertial measurement unit modeling to establish a brand-new spatio-temporally aligned hull deformation measurement model. In addition, a two-step loop control is designed to ensure the accurate description of the dynamic deformation angle and the static deformation angle by the spatio-temporal alignment method. The experiments illustrate that the proposed method can effectively measure the hull deformation angle when time delay and large misalignment angle coexist.
Abstract: In this paper, let M_n denote the maximum of the logarithmic general error distribution with parameter v ≥ 1. Higher-order expansions for the distributions of the powered extremes M_n^p are derived under an optimal choice of normalizing constants. It is shown that, when v = 1, M_n^p converges to the Fréchet extreme value distribution at the rate 1/n, and if v > 1 then M_n^p converges to the Gumbel extreme value distribution at the rate (log log n)^2 / (log n)^(1-1/v).
Funding: This work is supported by the National Natural Science Foundation of China (Nos. U23B20151 and 52171253).
Abstract: Owing to the complex lithology of unconventional reservoirs, field interpreters usually need logging simulation models to provide a basis for interpretation. Among the various detection tools that use nuclear sources, the detector response can reflect various types of information about the medium. The Monte Carlo method is one of the primary methods used to obtain nuclear detection responses in complex environments. However, it requires a computational process with extensive random sampling, consumes considerable resources, and cannot provide real-time responses. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement is proposed that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments. First, the data library required for the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model for the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the impact of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulation environments, the maximum relative error between the porosity results calculated by Monte Carlo and by the FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using the FFCM was in good agreement with the model obtained by interpreters, which demonstrates the validity and accuracy of the proposed method.
Funding: Project supported by the Natural Science Foundation of Shandong Province, China (Grant Nos. ZR2020MF119 and ZR2020MA082), the National Natural Science Foundation of China (Grant No. 62002208), and the National Key Research and Development Program of China (Grant No. 2018YFB0504302).
Abstract: We propose a fast, adaptive multiscale-resolution spectral measurement method based on compressed sensing. The method can apply variable measurement resolution over the entire spectral range, reducing the measurement time by over 75% compared to a global high-resolution measurement. Mimicking the characteristics of the human retina, the resolution distribution follows the principle of gradual decrease. The system allows the spectral peaks of interest to be captured dynamically or specified a priori by the user. The system was tested by measuring single and dual spectral peaks, and the results are consistent with those of global high-resolution measurements.
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2022YFA1603403).
Abstract: Finesse is a critical parameter for describing the characteristics of an optical enhancement cavity (OEC). This paper first presents a review of finesse measurement techniques, including a comparative analysis of the advantages, disadvantages, and potential limitations of several main methods from both theoretical and practical perspectives. A variant of an existing method, called the free spectral range (FSR) modulation method, is proposed and compared with three other finesse measurement methods, i.e., the fast-switching cavity ring-down (CRD) method, the rapidly swept-frequency (SF) CRD method, and the ringing effect method. A high-power OEC platform with a high finesse of approximately 16000 is built and measured with the four methods. The performance of these methods is compared, and the results show that the FSR modulation method and the fast-switching CRD method are more suitable and accurate than the other two for high-finesse OEC measurements. The CRD method and the ringing effect method can be implemented in open loop using simple equipment and are easy to perform. Additionally, recommendations for selecting finesse measurement methods under different conditions are proposed, which benefit the development of OECs and their applications.
Funding: Project supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049) and the Joint Fund of the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001).
Abstract: Readout errors caused by measurement noise are a significant source of errors in quantum circuits; they severely affect output results and are an urgent problem in noisy intermediate-scale quantum (NISQ) computing. In this paper, we use the bit-flip averaging (BFA) method to mitigate frequent readout errors in quantum generative adversarial networks (QGAN) for image generation. The method simplifies the response matrix structure by averaging over random bit-flips applied to each qubit in advance, successfully avoiding the high measurement cost of traditional error mitigation methods. Our experiments were simulated in Qiskit using the handwritten-digit image recognition dataset. Under the BFA-based method, the Kullback-Leibler (KL) divergence of the generated images converges to 0.04, 0.05, and 0.1 for readout error probabilities of p = 0.01, p = 0.05, and p = 0.1, respectively. Additionally, by evaluating the fidelity of the quantum states representing the images, we observe average fidelity values of 0.97, 0.96, and 0.95 for the three readout error probabilities, respectively. These results demonstrate the robustness of the model in mitigating readout errors and provide a highly fault-tolerant mechanism for image generation models.
Funding: Supported by the National Science Fund for Distinguished Young Scholars (No. 61925102), the National Natural Science Foundation of China (Nos. 62201086, 92167202, 62201087, and 62101069), the BUPT-CMCC Joint Innovation Center, and the State Key Laboratory of IPOC (BUPT) (No. IPOC2023ZT02), China.
Abstract: Visible light communication (VLC) has attracted much attention in the research of sixth-generation (6G) systems. Furthermore, channel modeling is the foundation for designing efficient and robust VLC systems. In this paper, we present extensive VLC channel measurement campaigns in indoor environments, i.e., an office and a corridor. Based on the measured data, the large-scale fading characteristics and multipath-related characteristics, including omnidirectional optical path loss (OPL), K-factor, power angular spectrum (PAS), angle spread (AS), and clustering characteristics, are analyzed and modeled through a statistical method. Based on the extracted statistics of the above-mentioned channel characteristics, we propose a statistical spatial channel model (SSCM) capable of modeling multipath in the spatial domain. Furthermore, the simulated statistics of the proposed model are compared with the measured statistics. For instance, in the office, the simulated and measured path loss exponents (PLE) are 1.96 and 1.97, respectively, and the simulated and measured medians of the AS are 25.94° and 24.84°, respectively. Generally, the fact that the simulated results fit well with the measured results demonstrates the accuracy of our SSCM.
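The reported PLE can be understood through the standard log-distance model PL(d) = PL(d0) + 10 n log10(d/d0). A minimal fitting sketch (not the authors' code; distances and losses below are synthetic) recovers n by least squares:

```python
import numpy as np

def fit_ple(d, pl, d0=1.0):
    """Least-squares fit of intercept PL(d0) and path loss exponent n
    in the log-distance model PL(d) = PL(d0) + 10*n*log10(d/d0)."""
    x = 10 * np.log10(np.asarray(d, float) / d0)
    A = np.vstack([np.ones_like(x), x]).T
    coef, *_ = np.linalg.lstsq(A, np.asarray(pl, float), rcond=None)
    return coef  # (pl0, n)

# synthetic noise-free data generated with n = 1.96, PL(d0) = 40 dB
d = np.logspace(0, 1, 20)                # 1 m to 10 m
pl = 40.0 + 10 * 1.96 * np.log10(d)
pl0, n = fit_ple(d, pl)
```

With measured (d, OPL) pairs in place of the synthetic data, the same regression yields the PLE quoted in the abstract.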
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62071441 and 61701464) and in part by the Fundamental Research Funds for the Central Universities (No. 202151006).
Abstract: This study explores the application of single photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing. Furthermore, it compares the error correction performance of low-density parity check (LDPC) and Reed-Solomon (RS) codes across different error correction coding types. The effects of the unscattered photon ratio and the depolarization ratio on the BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization OOK multiplexing modulation and 4.37 Mbps with polarization 2-PPM multiplexing modulation using LDPC error correction.
Funding: Supported by the National Natural Science Foundation of China (61601147) and the Beijing Natural Science Foundation (L182032).
Abstract: In this paper, an efficient unequal error protection (UEP) scheme for online fountain codes is proposed. In the build-up phase, a traversing-selection strategy is proposed to select the most important symbols (MIS). Then, in the completion phase, a weighted-selection strategy is applied to provide low overhead. The performance of the proposed scheme is analyzed and compared with the existing UEP online fountain scheme. Simulation results show that, in terms of the MIS and the least important symbols (LIS), when the bit error ratio is 10^-4, the proposed scheme achieves 85% and 31.58% overhead reduction, respectively.
Funding: This work is funded by the National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the National Science Fund for Distinguished Young Scholars of China (Grant No. 52222905).
Abstract: In existing landslide susceptibility prediction (LSP) models, the influence of random errors in landslide conditioning factors on LSP is not considered; instead, the original conditioning factors are taken directly as model inputs, which brings uncertainty to LSP results. This study aims to reveal how different proportions of random error in the conditioning factors affect LSP uncertainty, and further to explore a method that can effectively reduce the random errors in the conditioning factors. The original conditioning factors are first used to construct original-factor-based LSP models, and then random errors of 5%, 10%, 15%, and 20% are added to these original factors to construct the corresponding error-based LSP models. Secondly, low-pass-filter-based LSP models are constructed by eliminating the random errors using the low-pass filter method. Thirdly, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e., multilayer perceptron (MLP), support vector machine (SVM), and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) the low-pass filter can effectively reduce the random errors in the conditioning factors and thereby decrease the LSP uncertainties; (2) as the proportion of random error increases from 5% to 20%, the LSP uncertainty increases continuously; (3) the original-factor-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the influence degrees of the two uncertainty issues, the choice of machine learning model and the proportion of random error, on LSP modeling are large and essentially the same; and (5) the Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random error in the conditioning factors results in higher LSP uncertainty, and the low-pass filter can effectively reduce these random errors.
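Finding (1) can be illustrated with a toy sketch (my own construction; the signal shape, 10% error level, and window size are hypothetical, not the study's data): proportional random error is added to a smooth "conditioning factor" profile, and a moving-average low-pass filter reduces the error:

```python
import numpy as np

rng = np.random.default_rng(42)

def lowpass(x, w=9):
    """Simple moving-average low-pass filter (one possible choice of filter)."""
    k = np.ones(w) / w
    return np.convolve(x, k, mode="same")

# smooth "conditioning factor" plus 10% proportional random error
t = np.linspace(0, 1, 500)
factor = np.sin(2 * np.pi * 2 * t) + 2.0
noisy = factor * (1 + 0.10 * rng.standard_normal(t.size))
filtered = lowpass(noisy)

rmse_noisy = np.sqrt(np.mean((noisy - factor) ** 2))
rmse_filt = np.sqrt(np.mean((filtered - factor) ** 2))
```

The filtered factor sits closer to the error-free profile than the noisy one, mirroring why the low-pass-filter-based LSP models show lower uncertainty than the error-based models.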