Funding: Supported by the National Natural Science Foundation of China under Grant No. 62071114.
Abstract: Integrated sensing and communication (ISAC) is one of the main usage scenarios for 6G wireless networks. To utilize the limited wireless resources most efficiently, integrated super-resolution sensing and communication (ISSAC) has recently been proposed to significantly improve the sensing performance of ISAC systems with super-resolution algorithms such as Multiple Signal Classification (MUSIC). However, traditional super-resolution sensing algorithms suffer from prohibitive computational complexity in orthogonal frequency-division multiplexing (OFDM) systems due to the large signal dimensions in the subcarrier and symbol domains. To address this issue, we propose a novel two-stage approach that significantly reduces the computational complexity of super-resolution range estimation. The key idea of the proposed scheme is to first uniformly decimate the signals in the subcarrier domain, so that the computational complexity is greatly reduced without missing any target in the range domain. The decimation, however, may cause range ambiguity in the form of pseudo peaks, which is resolved in the second stage, where the full set of subcarrier data is used to verify the detected peaks. Compared with the traditional MUSIC algorithm, the proposed scheme reduces the computational complexity by two orders of magnitude while maintaining range resolution and unambiguity. Simulation results verify the effectiveness of the proposed scheme.
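The decimate-then-verify idea can be illustrated with a toy OFDM range profile. A plain DFT periodogram stands in for MUSIC, and the subcarrier count, decimation factor, and target delay are illustrative assumptions rather than the paper's parameters:

```python
import cmath

N, D = 64, 4        # subcarriers and decimation factor (illustrative)
true_bin = 5        # target delay expressed in full-resolution delay bins

# Channel frequency response across subcarriers for a single target.
H = [cmath.exp(-2j * cmath.pi * n * true_bin / N) for n in range(N)]

def delay_profile(samples, size):
    """DFT-based delay power profile (stand-in for MUSIC's pseudo-spectrum)."""
    return [abs(sum(s * cmath.exp(2j * cmath.pi * i * k / size)
                    for i, s in enumerate(samples))) ** 2
            for k in range(size)]

# Stage 1: estimate on the decimated subcarriers (cheap, but ambiguous).
prof_dec = delay_profile(H[::D], N // D)
m = max(range(N // D), key=lambda k: prof_dec[k])
candidates = [m + (N // D) * q for q in range(D)]   # true peak + pseudo peaks

# Stage 2: verify the candidates against the full subcarrier set.
prof_full = delay_profile(H, N)
best = max(candidates, key=lambda k: prof_full[k])
print(candidates, "->", best)   # [5, 21, 37, 53] -> 5
```

The decimated profile sees the true delay only modulo N/D, which is exactly the range ambiguity the second stage removes.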
Funding: Project supported by the Fundamental Research Funds for the Central Universities (Grant No. FRF-TP-19-013A3).
Abstract: The infinite time-evolving block decimation (iTEBD) algorithm provides an efficient way to determine the ground state and dynamics of quantum lattice systems in the thermodynamic limit. In this paper, we propose an optimized scheme for the iTEBD calculation that takes advantage of additional reduced decompositions to speed up the computation. Numerical calculations show that, for comparable computation time, our method provides more accurate results than the traditional iTEBD, especially for lattice systems with large on-site degrees of freedom.
Abstract: This paper deals with the use of comb filters for FIR decimation in digital signal processing. Decimation is the process of decreasing the sampling frequency of a sampled signal. When decimating filters are used in systems whose parts operate at different sample rates, only a portion of the out-of-band frequencies aliases into the pass band. A filter design tuned to the aliasing frequencies, all of which could otherwise leak into the pass band, not only provides multiple stop bands but also exhibits computational efficiency and performance superior to a single-stop-band design. Such filters are referred to as multiband designs in the FIR filter family. Two other special FIR designs, half-band and comb filters, are particularly useful for reducing the computational requirements of multirate designs. The proposed comb FIR decimation procedure is not only efficient but also offers a simple and elegant way to compute the multiplications per second (MPS) and additions per second (APS) of the desired filter relative to half-band designs.
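The MPS/APS figures of merit the abstract mentions follow from standard polyphase operation counts: each output sample of an N-tap decimate-by-M FIR costs N multiplications and N-1 additions, and outputs emerge at the decimated rate. The sketch below uses these textbook counts with invented example numbers, not the paper's figures; coefficient symmetry or comb structures reduce the counts further.

```python
def decimator_cost(num_taps, fs_in, M):
    """Operation rates for an M-fold polyphase FIR decimator:
    each output sample costs num_taps multiplies and num_taps - 1 adds,
    and output samples emerge at fs_in / M."""
    fs_out = fs_in / M
    mps = num_taps * fs_out          # multiplications per second
    aps = (num_taps - 1) * fs_out    # additions per second
    return mps, aps

# Example: a 64-tap decimate-by-4 filter at a 1 MHz input rate.
mps, aps = decimator_cost(64, 1_000_000, 4)
print(mps, aps)   # 16000000.0 15750000.0
```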
Abstract: This paper presents a method of computation, called the cumulative method, based on a repeated cumulative process. The cumulative method is adapted to the purposes of computation, particularly multiplication and division, and these operations are represented by algebraic formulas. An advantage of the method is that the cumulative process can be performed on decimal numbers. The paper aims to establish a basic and useful formula valid for the two fundamental arithmetic operations of multiplication and division. The new cumulative method proves more flexible and makes it possible to extend multiplication and division based on repeated addition/subtraction to decimal numbers.
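A literal sketch of the idea, assuming the obvious scaling trick (write each decimal as an integer over a power of ten, run the repeated addition/subtraction on the integers, then rescale); the paper's algebraic formulas are more general than this toy:

```python
from decimal import Decimal

def as_scaled_int(x):
    """Write a positive decimal string as (integer, k) with value = integer / 10**k."""
    s = str(Decimal(x))
    if "." in s:
        whole, frac = s.split(".")
        return int(whole + frac), len(frac)
    return int(s), 0

def cumulative_mul(a, b):
    """Multiply by repeated addition, extended to decimals via scaling."""
    (ma, ka), (mb, kb) = as_scaled_int(a), as_scaled_int(b)
    total = 0
    for _ in range(mb):              # repeated addition of the scaled multiplicand
        total += ma
    return Decimal(total) / Decimal(10) ** (ka + kb)

def cumulative_div(a, b):
    """Integer quotient and remainder by repeated subtraction of decimals."""
    (ma, ka), (mb, kb) = as_scaled_int(a), as_scaled_int(b)
    k = max(ka, kb)                  # bring both operands to a common scale
    ma *= 10 ** (k - ka)
    mb *= 10 ** (k - kb)
    q = 0
    while ma >= mb:                  # repeated subtraction
        ma -= mb
        q += 1
    return q, Decimal(ma) / Decimal(10) ** k

print(cumulative_mul("2.5", "1.2"))   # 3
print(cumulative_div("7.5", "2.5"))   # (3, Decimal('0'))
```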
Abstract: Accurate frequency estimation in a wideband digital receiver using the FFT algorithm faces challenges such as spectral leakage, which results from the FFT's assumption of signal periodicity. High-resolution FFTs impose heavy computational demands, and estimating frequencies at non-integer multiples of the frequency resolution is exceptionally challenging. This paper introduces two novel methods for enhanced frequency precision, polynomial interpolation and array indexing, and compares their results in terms of super-resolution and scalloping loss. Simulation results demonstrate the effectiveness of the proposed methods in contemporary radar systems, with array indexing providing the best frequency estimates, albeit at the cost of maximum hardware resources. The paper thus demonstrates a trade-off between accurate frequency estimation and hardware resources when comparing polynomial interpolation with array indexing.
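The off-bin estimation problem can be demonstrated with a toy tone at a fractional bin. The refinement below is one reading of the array-indexing idea (evaluate the spectrum on a fine grid around the coarse DFT peak and take the index of the maximum); the tone frequency and grid spacing are invented for illustration and are not the paper's parameters:

```python
import cmath

N = 64
f_true = 10.3                         # tone frequency in (non-integer) FFT bins

# Noise-free complex exponential at a fractional bin.
x = [cmath.exp(2j * cmath.pi * f_true * n / N) for n in range(N)]

def dtft_mag(f):
    """|DTFT| of x evaluated at an arbitrary (fractional) bin position f."""
    return abs(sum(s * cmath.exp(-2j * cmath.pi * f * n / N)
                   for n, s in enumerate(x)))

# Coarse stage: integer-bin DFT peak (lands on bin 10, not 10.3).
k = max(range(N), key=dtft_mag)

# Fine stage: dense grid search of 0.01-bin steps around the coarse peak.
grid = [k - 1 + i / 100 for i in range(201)]
f_est = max(grid, key=dtft_mag)
print(round(f_est, 2))                # 10.3
```

A zero-padded FFT achieves the same grid refinement in hardware at the cost of memory, which matches the abstract's resource trade-off.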
Abstract: In this paper, we construct some continuous but non-differentiable functions defined by quinary decimals, which are Kiesswetter-like functions. We discuss their properties, investigate the Hausdorff dimensions of the graphs of these functions, and give a detailed proof.
Funding: Supported by the China Postdoctoral Science Foundation (20080431379).
Abstract: A three-part comb decimator is presented in this paper for applications with stringent requirements on circuit performance and frequency response. Based on a modified prime-factorization method and multistage polyphase decomposition, an efficient non-recursive structure for the cascaded integrator-comb (CIC) decimation filter is derived. Using this structure as the core part, the proposed comb decimator not only relaxes the limitation on the decimation ratio but also balances the trade-off among overall power consumption, circuit area, and maximum speed. Further, to improve the frequency response of the comb decimator, a cos-prefilter is introduced as the preprocessing part to increase aliasing rejection, and an optimum sin-based filter is used as the compensation part to decrease the passband droop.
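For reference, the textbook recursive (Hogenauer) form of the CIC decimator that the paper restructures non-recursively can be modeled behaviorally in a few lines; the rate change, order, and input below are illustrative assumptions:

```python
def cic_decimate(x, R, K):
    """Order-K CIC decimator with rate change R and differential delay 1:
    K integrators at the input rate, decimation by R, then K combs."""
    # Integrator cascade (running sums) at the high rate.
    for _ in range(K):
        acc, out = 0, []
        for v in x:
            acc += v
            out.append(acc)
        x = out
    x = x[::R]                       # decimate by R
    # Comb cascade (first differences) at the low rate.
    for _ in range(K):
        x = [b - a for a, b in zip([0] + x[:-1], x)]
    return x

y = cic_decimate([1] * 64, R=4, K=3)
print(y[-1])   # 64  (steady-state DC gain is R**K = 4**3)
```

The R**K DC gain visible here is exactly the word-growth that makes the non-recursive polyphase restructuring in the paper attractive for power and speed.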
Funding: Supported in part by an internal grant of Eastern Washington University.
Abstract: This paper introduces decimated filter banks for the one-dimensional empirical mode decomposition (1D-EMD). These filter banks provide perfect reconstruction and allow for an arbitrary tree structure. Since the EMD is a data-driven decomposition, it is a very useful analysis instrument for non-stationary and non-linear signals. However, the traditional 1D-EMD has the disadvantage of expanding the data: large data sets can be generated, as the amount of data to be stored increases with every decomposition level. The 1D-EMD can be thought of as having the structure of a single dyadic filter, but a methodology for incorporating the decomposition into an arbitrary tree structure has not yet been reported in the literature. This paper shows how to extend the 1D-EMD to any arbitrary tree structure while maintaining the perfect-reconstruction property. Furthermore, the technique allows the decomposed signals to be downsampled. This paper thus presents a method that minimizes the data-expansion drawback of the 1D-EMD by using decimation and merging the EMD coefficients. The proposed algorithm is applicable to any arbitrary tree structure, including a full binary tree.
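The paper's EMD filter bank is beyond a short snippet, but the core property it preserves, decimation with perfect reconstruction, is easy to see in the simplest ("lazy") two-channel bank: split the signal into decimated even/odd polyphase branches and merge them back by interleaving. This is only an illustration of the perfect-reconstruction property, not the paper's EMD-based bank:

```python
def analysis(x):
    """Lazy two-channel bank: split into decimated even/odd polyphase parts."""
    return x[0::2], x[1::2]

def synthesis(even, odd):
    """Interleave the two decimated branches; reconstruction is exact."""
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

x = [3, 1, 4, 1, 5, 9, 2, 6]          # even-length input for a clean split
even, odd = analysis(x)                # each branch holds half the samples
print(synthesis(even, odd) == x)       # True: no data expansion, no loss
```

Each branch stores half the samples, so iterating the split down a tree never expands the data, which is the storage benefit the abstract claims for the decimated EMD.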
Abstract: Traditional evolutionary algorithms (EAs) are based on binary codes, real-number codes, structure codes, and so on, but each of these coding strategies has its own advantages and disadvantages for function optimization. In this paper, a new decimal coding strategy (DCS), which is convenient for space division and allows alterable precision, is proposed, and its implicit parallelism and convergence are analyzed theoretically. We also redesign several genetic operators for the decimal code. In order to utilize the historical information of existing individuals during evolution and to avoid repeated exploration, the strategies of space shrinking and alterable precision are adopted. Finally, the evolutionary algorithm based on decimal coding (DCEA) is applied to function optimization, parameter optimization, and mixed-integer nonlinear programming. A comparison with traditional GAs shows that the performance of DCEAs is better than that of traditional GAs.
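To make the decimal-coding idea concrete, here is a toy GA whose chromosomes are strings of decimal digits, with one-point crossover and single-digit mutation. The search range, operator rates, and fitness function are invented for illustration; the paper's redesigned DCS operators, space shrinking, and alterable precision are not reproduced here:

```python
import random

DIGITS = 6                     # decimal genes per individual (illustrative)

def decode(genes, lo=0.0, hi=10.0):
    """Map a decimal digit string to a real number in [lo, hi)."""
    frac = sum(d / 10 ** (i + 1) for i, d in enumerate(genes))
    return lo + (hi - lo) * frac

def evolve(fitness, pop_size=30, gens=100, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(10) for _ in range(DIGITS)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        nxt = [best[:]]                      # elitism: keep the best so far
        while len(nxt) < pop_size:
            a, b = rng.sample(pop, 2)
            cut = rng.randrange(1, DIGITS)   # one-point crossover on digits
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:           # mutation: replace one digit
                child[rng.randrange(DIGITS)] = rng.randrange(10)
            nxt.append(child)
        pop = nxt
        best = min(pop, key=fitness)
    return best

target = 3.14159
best = evolve(lambda g: abs(decode(g) - target))
print(decode(best))
```

Note how decimal genes make precision alterable by construction: appending one more digit per gene refines the decoded value tenfold, which is the property the DCS exploits.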