In this study, the relationship between the limit of predictability and initial error was investigated using two simple chaotic systems: the Lorenz model, which possesses a single characteristic time scale, and the coupled Lorenz model, which possesses two different characteristic time scales. The limit of predictability is defined here as the time at which the error reaches 95% of its saturation level; nonlinear behaviors of the error growth are therefore involved in the definition. Our results show that a logarithmic function describes the relationship between the limit of predictability and initial error well in both models, although its coefficients are not constant across the examined range of initial errors. Compared with the Lorenz model, the coupled Lorenz model, in which the slow and fast dynamics interact with each other, exhibits a more complex relationship between the limit of predictability and initial error. The limit of predictability of the Lorenz model is unbounded as the initial error becomes infinitesimally small; it may therefore be extended by reducing the amplitude of the initial error. In contrast, if there is a fixed initial error in the fast dynamics of the coupled Lorenz model, the slow dynamics has an intrinsic finite limit of predictability that cannot be extended by reducing the amplitude of the initial error in the slow dynamics, and vice versa. These findings reveal the possible existence of an intrinsic finite limit of predictability in a coupled system that possesses many scales of time or motion.
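The 95%-of-saturation definition of the predictability limit can be illustrated numerically. The sketch below is not the authors' code; the parameter values, the RK4 integrator, and the Euclidean error measure are my assumptions. It integrates a control and a perturbed Lorenz-63 trajectory and reports the time at which their separation first reaches 95% of its saturation level; shrinking the initial error lengthens this time roughly logarithmically.

```python
import numpy as np

def lorenz_rhs(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classical Lorenz-63 system."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def rk4_step(x, dt):
    k1 = lorenz_rhs(x)
    k2 = lorenz_rhs(x + 0.5 * dt * k1)
    k3 = lorenz_rhs(x + 0.5 * dt * k2)
    k4 = lorenz_rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def predictability_limit(e0, T=50.0, dt=0.01, spinup=2000, seed=0):
    """Time for the error to first reach 95% of its saturation level."""
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 1.0, 1.0])
    for _ in range(spinup):            # relax onto the attractor
        x = rk4_step(x, dt)
    pert = rng.standard_normal(3)
    y = x + e0 * pert / np.linalg.norm(pert)   # initial error of size e0
    n = int(T / dt)
    err = np.empty(n)
    for i in range(n):
        x, y = rk4_step(x, dt), rk4_step(y, dt)
        err[i] = np.linalg.norm(x - y)
    sat = err[n // 2:].mean()          # crude saturation estimate
    return (np.argmax(err >= 0.95 * sat) + 1) * dt

for e0 in (1e-6, 1e-4, 1e-2):
    print(e0, predictability_limit(e0))
```

With a leading Lyapunov exponent of roughly 0.9, each hundredfold reduction of the initial error should buy about five more time units of predictability, consistent with the logarithmic relationship described in the abstract.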
Close-range photogrammetry aims to determine the shape and size of an object rather than its absolute position. Any translation and rotation of the photogrammetric model of the object caused by the geodesic, photographic, and photogrammetric procedures of close-range photogrammetry therefore need not be considered at first. However, it is necessary to analyze all the causes of deformations of the shape and size and to present the corresponding theories and equations. This situation is, of course, very different from conventional topophotogrammetry. This paper presents in detail some specific characteristics of limit errors in close-range photogrammetry, including the limit errors for the calibration of the interior elements of close-range cameras, the limit errors of relative and absolute orientations in close-range photogrammetric procedures, and the limit errors of control works in close-range photogrammetry. A theoretical equation of calibration accuracy for close-range cameras is given. For the three examples in this paper, the theoretical accuracy requirements for the interior elements of the camera range from ±0.005 mm to ±0.350 mm. This permits a reduced accuracy requirement in calibration for an object with small relief, even when the camera platform is located in a violently vibrating environment. A theoretical equation of the relative RMS of base lines (m_S/S) and an equation for the RMS of the start direction are also presented. It is proved that m_S/S can be equal to the relative RMS m_ΔX/ΔX, and that the permissible RMS of the start direction is much larger than the traditionally used value.
Some useful equations of limit errors in close-range photogrammetry are presented as well. The suggestions above may be beneficial for increasing efficiency and reducing production costs.
Error correction has long been suggested as a way to extend the sensitivity of quantum sensors to the Heisenberg limit. However, operations on logical qubits are performed only through universal gate sets consisting of finite-sized gates, such as Clifford + T. Although these logical gate sets allow for universal quantum computation, the finite gate sizes present a problem for quantum sensing, since in sensing protocols such as the Ramsey measurement protocol the signal must act continuously. The difficulty in constructing a continuous logical operator comes from the Eastin-Knill theorem, which prevents a continuous signal from being both fault-tolerant to local errors and transversal. Since error correction is needed to approach the Heisenberg limit in a noisy environment, it is important to explore how to construct fault-tolerant continuous operators. In this paper, a protocol to design continuous logical z-rotations is proposed and applied to the Steane code. The fault tolerance of the designed operator is investigated using the Knill-Laflamme conditions, which indicate that the constructed diagonal unitary operator cannot be fault-tolerant, solely because of the possibility of X errors on the middle qubit. The approach demonstrated throughout this paper may, however, find success in codes with more qubits, such as the Shor code, the distance-3 surface code, the [15, 1, 3] code, or codes with a larger distance such as the [11, 1, 5] code.
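The Knill-Laflamme test invoked here can be stated compactly: a code with projector P corrects an error set {E_a} if and only if P E_a† E_b P = c_ab P for every pair. As a toy illustration (using the 3-qubit bit-flip code rather than the 7-qubit Steane code analyzed in the paper), the sketch below verifies the conditions numerically for single X errors and shows that they fail once a Z error is added:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron_all(ops):
    """Kronecker product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def x_on(k, n=3):
    """X error acting on qubit k of an n-qubit register."""
    return kron_all([X if i == k else I2 for i in range(n)])

# Codewords of the 3-qubit bit-flip code: |000> and |111>
e0 = np.zeros(8); e0[0] = 1.0
e1 = np.zeros(8); e1[7] = 1.0
P = np.outer(e0, e0) + np.outer(e1, e1)   # code-space projector

errors = [np.eye(8)] + [x_on(k) for k in range(3)]

def kl_satisfied(P, errors, tol=1e-10):
    """Knill-Laflamme: P Ea^† Eb P must be proportional to P for all pairs."""
    for Ea in errors:
        for Eb in errors:
            M = P @ Ea.conj().T @ Eb @ P
            c = np.trace(M) / np.trace(P)   # candidate proportionality constant
            if not np.allclose(M, c * P, atol=tol):
                return False
    return True

print(kl_satisfied(P, errors))                            # True: single X errors correctable
print(kl_satisfied(P, errors + [kron_all([Z, I2, I2])]))  # False: a Z error breaks the conditions
```

For the Steane code the same check applies with 128-dimensional matrices; the abstract's conclusion is that the constructed diagonal unitary fails this test only through X errors on the middle qubit.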
The truncation error associated with a given sampling representation is defined as the difference between the signal and an approximating sum utilizing a finite number of terms. In this paper we give a uniform bound for the truncation error of band-limited functions in the n-dimensional Lebesgue space Lp(Rn) associated with the multidimensional Shannon sampling representation.
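As a concrete one-dimensional illustration of the quantity being bounded (a hypothetical example, not the paper's n-dimensional setting), the sketch below truncates the Shannon series of a band-limited signal and measures the sup-norm truncation error on a finite interval, which shrinks as more terms are kept:

```python
import numpy as np

def shannon_truncated(f_samples_fn, t, N):
    """Truncated Shannon series: sum over |n| <= N of f(n) * sinc(t - n)."""
    n = np.arange(-N, N + 1)
    # np.sinc is the normalized sinc, sin(pi x)/(pi x)
    return np.sum(f_samples_fn(n) * np.sinc(t[:, None] - n[None, :]), axis=1)

# A band-limited test signal (band [-pi, pi]), shifted off the sample grid
f = lambda t: np.sinc(t - 0.3)

t = np.linspace(-2.0, 2.0, 401)
errs = [np.max(np.abs(f(t) - shannon_truncated(f, t, N))) for N in (4, 16, 64)]
print(errs)  # sup-norm truncation error decreases as N grows
```

Uniform bounds of the kind proved in the paper control exactly this worst-case gap between the signal and its truncated sampling sum.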
Parallel kinematic machines (PKMs) have the advantages of a compact structure, high stiffness, low moving inertia, and a high load/weight ratio. PKMs have been studied intensively since the 1980s and still attract much attention. Compared with the extensive research on their type/dimensional synthesis and kinematic/dynamic analyses, the error modeling and separation issues in PKMs have not been studied adequately, which is one of the most important obstacles to their wide commercial application. Taking a 3-PRS parallel manipulator as an example, this paper presents a method for separating the source errors of a 3-DOF parallel manipulator into compensable and non-compensable errors. The kinematic analysis of the 3-PRS parallel manipulator leads to its six-dimensional Jacobian matrix, which can be mapped into the Jacobian matrices of actuations and constraints, and the compensable and non-compensable errors can then be separated accordingly. The compensable errors can be compensated by kinematic calibration, while the non-compensable errors may be adjusted in the manufacturing and assembly process. The influence of the latter, i.e., the non-compensable errors, on the pose error of the moving platform is then investigated through a sensitivity analysis with the aid of the Monte Carlo method, and the configurations of the manipulator at which the pose errors of the moving platform approach their maxima are identified. The compensable and non-compensable errors in limited-DOF parallel manipulators can be separated effectively by means of the Jacobian matrices of actuations and constraints, providing designers with an informative guideline for taking proper measures to enhance the pose accuracy via component tolerancing and/or kinematic calibration, which can lay the foundation for error separation and compensation.
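The Monte Carlo sensitivity step can be sketched generically. The Jacobian below is a random stand-in, not the 3-PRS error Jacobian derived in the paper; the sketch only shows how, once a linearized mapping from non-compensable source errors to platform pose errors is available, tolerance bands on the source errors can be propagated to pose-error statistics:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 6x9 error Jacobian: maps 9 geometric source errors
# (e.g., joint-axis offsets) to the 6-DOF pose error of the platform.
J = rng.normal(size=(6, 9))

def pose_error_stats(J, tol, n_trials=20000):
    """Monte Carlo sensitivity: sample source errors uniformly within
    +/- tol and collect the resulting pose-error magnitudes."""
    src = rng.uniform(-tol, tol, size=(n_trials, J.shape[1]))
    pose = src @ J.T                  # linearized error mapping
    mags = np.linalg.norm(pose, axis=1)
    return mags.mean(), mags.max()

mean_err, max_err = pose_error_stats(J, tol=0.01)
print(mean_err, max_err)
```

In the paper's setting the Jacobian varies with configuration, so this evaluation would be repeated over the workspace to locate the poses where the platform error approaches its maximum.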
Forming limit curves (FLCs) are commonly used for evaluating the formability of sheet metals. However, it is difficult to obtain FLCs with the desired accuracy by experiment because friction effects are non-negligible under warm/hot stamping conditions. To investigate the experimental errors, experiments for obtaining the FLCs of AA5754 were conducted at 250 °C. FE models were then created and validated on the basis of the experimental results. A number of FE simulations were carried out for FLC test-pieces and punches with different geometric configurations and varying friction coefficients between the test-piece and the punch. The errors for all the test conditions were predicted and analyzed. Particular attention in the error analysis is paid to two special cases, namely the biaxial FLC test and the uniaxial FLC test. The failure location and the variation of the error with respect to the friction coefficient are studied as well. The results obtained from the FLC tests and the above analyses show that, for the biaxial tension state, the friction coefficient should be kept within 0.15 to avoid significant shifting of the necking location away from the center of the punch; for the uniaxial tension state, the friction coefficient should be kept within 0.1 to guarantee the validity of the data collected from FLC tests. These conclusions are beneficial for obtaining accurate FLCs under warm/hot stamping conditions.
In this paper, we consider a multi-relay cooperative communication network that consists of a source node transmitting to its destination with the help of multiple decode-and-forward (DF) relays. Specifically, the DF relays that succeed in decoding the source signal are allowed to re-transmit their decoded results simultaneously to the destination in a cooperative beamforming manner. In order to carry out the cooperative beamforming, the destination needs to send quantized channel state information (CSI) to the relays through a limited feedback channel in the face of channel quantization errors (CQE). We propose a CQE-oriented multi-relay beamforming (MRB) scheme, denoted CQE-MRB for short, for the sake of improving the throughput of relay-destination transmissions. An effective throughput, defined as the difference between the transmission rate and the feedback rate, is used to measure the outage probability of the source-destination transmission. Simulation results demonstrate that the outage performance of the proposed CQE-MRB scheme improves substantially with an increasing number of relays. Moreover, it is shown that the number of channel quantization bits can be further optimized to minimize the outage probability of the proposed CQE-MRB scheme.
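The qualitative claim that outage improves with the number of relays can be reproduced with a simple Monte Carlo sketch. This is a toy model of my own, not the paper's CQE-MRB scheme: Rayleigh fading, uniform B-bit phase quantization of the fed-back CSI, and equal-gain cooperative combining are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_prob(n_relays, bits, rate, snr=1.0, trials=100000):
    """Monte Carlo outage of cooperative beamforming with B-bit
    phase-quantized CSI fed back to the relays (Rayleigh fading)."""
    h = (rng.standard_normal((trials, n_relays)) +
         1j * rng.standard_normal((trials, n_relays))) / np.sqrt(2)
    step = 2 * np.pi / 2**bits
    phase = np.round(np.angle(h) / step) * step   # quantized phase feedback
    # Relays co-phase their transmissions using the quantized phases
    combined = np.abs(np.sum(h * np.exp(-1j * phase), axis=1))**2
    return np.mean(np.log2(1 + snr * combined) < rate)

for m in (2, 4, 8):
    print(m, outage_prob(m, bits=3, rate=2.0))
```

In the paper's effective-throughput formulation, the target rate would additionally be charged the feedback cost, which is what makes the number of quantization bits an optimizable trade-off.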
The quality of a radiation dose depends upon the gamma count rate of the radionuclide used. Any reduction of error in the count rate is reflected in a reduction of error in the activity and consequently in the quality of the dose. All efforts so far have been directed only at minimizing the random errors in the count rate by repetition. In the absence of a probability distribution for the systematic errors, we propose to minimize these errors by estimating their upper and lower limits with the technique of determinant inequalities developed by us. Using the algorithm we have developed, based on the technique of determinant inequalities and the concept of maximization of mutual information (MI), we show how to process the covariance matrix element by element to minimize the correlated systematic errors in the count rate of 113mIn. The element-wise processing of the covariance matrix by our technique is so distinctive that it gives experimentalists enough maneuverability to mitigate the different factors causing systematic errors in the count rate, and consequently in the activity, of 113mIn.
Funding: provided jointly by the 973 Program (Grant No. 2010CB950400) and the National Natural Science Foundation of China (Grant Nos. 40805022 and 40821092).
Funding: Project supported by the Natural Science Foundation of China (Grant No. 10371009) and by the Beijing Educational Committee (No. 2002KJ112).
Funding: supported by the Tianjin Research Program of Application Foundation and Advanced Technology of China (Grant No. 11JCZDJC22700), the National Natural Science Foundation of China (Grant Nos. 51075295 and 50675151), the National High-tech Research and Development Program of China (863 Program, Grant No. 2007AA042001), and the PhD Programs Foundation of the Ministry of Education of China (Grant No. 20060056018).
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51375201), the Jilin Province Science and Technology Development Plan (Grant No. 20130101048JC), and the Open Research Fund of the Shanghai Key Laboratory of Digital Manufacture for Thin-walled Structures (Grant No. 2013001).
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 61522109, 61631020, 61671253 and 91738201) and the Natural Science Foundation of Jiangsu Province (Grant Nos. BK20150040, BK20171446 and BRA2018043).