To reduce the computational complexity of joint transmit and receive antenna selection in Multiple-Input-Multiple-Output (MIMO) systems, we present a concise joint transmit/receive antenna selection algorithm. Using a novel partition of the channel matrix, we derive a concise formula. This formula enables us to augment the channel matrix in such a way that the computational complexity of the greedy Joint Transmit/Receive Antenna Selection (JTRAS) algorithm is reduced by a factor of 4n_L, where n_L is the number of selected antennas. A decoupled version of the proposed algorithm is also presented to further improve the efficiency of the JTRAS algorithm, with some capacity degradation as a tradeoff. The computational complexity and the performance of the proposed approaches are evaluated mathematically and verified by computer simulations. The results show that the proposed joint antenna selection algorithm maintains the capacity performance of the JTRAS algorithm while its computational complexity is only 1/(4n_L) of that of the JTRAS algorithm. The decoupled version further reduces the computational complexity of joint antenna selection and outperforms other decoupling-based algorithms when the selected antenna subset is small compared to the total number of antennas.
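The greedy selection loop that JTRAS builds on can be sketched directly from the capacity criterion: at every step, add the transmit/receive antenna pair that maximizes the capacity of the selected channel submatrix. This is a minimal baseline sketch only, with an assumed `snr` parameter and a pairwise growth schedule; it does not include the paper's matrix-augmentation speedup.

```python
import numpy as np

def capacity(Hs, snr):
    # Equal-power MIMO capacity (bits/s/Hz) of the selected submatrix
    n_t = Hs.shape[1]
    G = np.eye(Hs.shape[0]) + (snr / n_t) * (Hs @ Hs.conj().T)
    return float(np.log2(np.linalg.det(G)).real)

def greedy_jtras(H, n_L, snr=10.0):
    """Greedily grow the receive/transmit index sets one antenna pair
    at a time, keeping the pair that maximizes capacity."""
    rx, tx, best_c = [], [], 0.0
    for _ in range(n_L):
        best = (-np.inf, None, None)
        for r in range(H.shape[0]):
            if r in rx:
                continue
            for t in range(H.shape[1]):
                if t in tx:
                    continue
                c = capacity(H[np.ix_(rx + [r], tx + [t])], snr)
                if c > best[0]:
                    best = (c, r, t)
        best_c, r, t = best
        rx.append(r)
        tx.append(t)
    return sorted(rx), sorted(tx), best_c
```

Each step here re-evaluates the capacity from scratch; the paper's contribution is precisely avoiding that recomputation.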
This paper presents an improved Voice Activity Detection (VAD) algorithm based on the Signal-to-Noise Ratio (SNR) measure. We assume that the noise Power Spectral Density (PSD) in each spectral bin follows a Rayleigh distribution. With its asymmetric tail, the Rayleigh distribution describes the noise PSD distribution better than the Gaussian distribution. Under this assumption, a new threshold updating expression is derived. Because the integral of the false alarm probability has an analytical form, the threshold updating expression can be represented without the inverse complementary error function, so low computational complexity is achieved in our system. Experimental results show that the proposed VAD outperforms, or is at least comparable with, the VAD scheme presented by Davis under several noise environments, while having lower computational complexity.
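A minimal frame-level SNR VAD along these lines, assuming a simple exponential-smoothing noise update with a hypothetical factor `alpha` (the paper instead derives its threshold update from the Rayleigh noise-PSD model):

```python
import numpy as np

def snr_vad(frames, init_noise_frames=5, snr_db_threshold=3.0, alpha=0.95):
    """Flag each frame as speech when its SNR exceeds a threshold,
    refining the noise power estimate during non-speech frames."""
    energies = np.mean(np.asarray(frames) ** 2, axis=1)
    noise = float(np.mean(energies[:init_noise_frames]))
    flags = []
    for e in energies:
        snr_db = 10 * np.log10(e / noise + 1e-12)
        speech = snr_db > snr_db_threshold
        if not speech:
            # update the noise estimate only where no speech is detected
            noise = alpha * noise + (1 - alpha) * e
        flags.append(bool(speech))
    return flags
```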
Overlapped X domain multiplexing (OVXDM) is a promising encoding technique that achieves high spectral efficiency by exploiting inter-symbol interference (ISI) intelligently. However, the computational complexity of maximum likelihood sequence detection (MLSD) grows exponentially with the spectral efficiency of OVXDM, which is prohibitive for practical implementations. In this paper, based on a novel path metric associating adjacent symbols, we propose a multi-bit sliding stack decoding (Multi-Bit SSD) algorithm that decodes multiple bits simultaneously in OVXDM. A theoretical analysis of the algorithm relates its performance to parameters including the multiplexing waveform, the overlapping fold, and the sliding window size. Simulation results show that the proposed algorithm achieves better decoding performance and higher spectral efficiency than conventional fast decoding algorithms.
To address the strong nonlinearity and motion-model switching of maneuvering target tracking systems in clutter environments, a novel maneuvering multi-target tracking algorithm based on a multiple-model particle filter is presented in this paper. The algorithm dynamically combines the multiple-model particle filter with the joint probabilistic data association algorithm. The rapid expansion of computational complexity caused by a naive combination of the interacting multiple model algorithm and the particle filter is avoided by introducing model information into the sampling of particle states, and echoes are effectively validated and utilized by the joint probabilistic data association algorithm. The concrete steps of the algorithm are given, and theoretical analysis and simulation results show the validity of the method.
A fast algorithm based on the grayscale distribution of the infrared target and a weighted kernel function is proposed for moving target detection (MTD) in dynamic scenes of image series. The algorithm addresses the large computational complexity, grayscale fluctuation, and noise typical of infrared images. Four characteristic points are selected by analyzing the grayscale distribution of the infrared image, and the image series is quickly matched with an affine transformation model based on them. The image is then divided into 32×32 squares and the gray-weighted kernel (GWK) of each square is calculated. Finally, the MTD is carried out according to the variation of the four GWKs. The results indicate that the algorithm achieves MTD in real time while effectively suppressing grayscale fluctuation and noise: the detection probability exceeds 90% with a false alarm rate below 5% and a computation time of less than 40 ms.
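One plausible reading of the per-square statistic, assuming a Gaussian spatial weight centered on each square (the abstract does not spell out the exact kernel form, so `sigma` and the kernel shape are assumptions):

```python
import numpy as np

def gray_weighted_kernels(img, block=32, sigma=8.0):
    """Split the image into block x block squares and compute, for each,
    a gray-weighted kernel value: gray levels weighted by a normalized
    Gaussian spatial kernel centered on the square."""
    h, w = img.shape
    y, x = np.mgrid[0:block, 0:block]
    c = (block - 1) / 2.0
    K = np.exp(-((y - c) ** 2 + (x - c) ** 2) / (2 * sigma ** 2))
    K /= K.sum()  # weights sum to 1, so a flat square yields its gray level
    out = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sq = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            out[i, j] = float((K * sq).sum())
    return out
```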
Named Data Networking (NDN) improves data delivery efficiency by caching contents in routers. To prevent corrupted and faked contents from spreading through the network, NDN routers should verify the digital signature of each published content. Since the verification scheme in NDN signs contents with an asymmetric encryption algorithm, the content verification overhead is too high for wire-speed packet forwarding. In this paper, we propose two schemes to improve the verification performance of NDN routers and prevent content poisoning. The first content verification scheme, called "user-assisted", yields the best performance but can be bypassed if the clients and the content producer collude. The second scheme, named "Router-Cooperation", prevents this collusion attack by making edge routers verify the contents independently, without the assistance of users, while core routers no longer verify contents. The Router-Cooperation verification scheme reduces the computational complexity of the cryptographic operations by replacing the asymmetric encryption algorithm with a symmetric one. Simulation results demonstrate that the Router-Cooperation scheme is 18.85 times faster than the original content verification scheme, at the cost of merely 80 bytes of extra transmission overhead.
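The symmetric-for-asymmetric substitution behind Router-Cooperation can be illustrated with HMAC tags, assuming the edge and core routers already share a key (key distribution and the edge router's one-time signature check are not shown):

```python
import hmac
import hashlib

def edge_tag(content: bytes, shared_key: bytes) -> bytes:
    """After the edge router has verified the producer's signature once,
    it attaches a symmetric HMAC tag so downstream routers only do a
    cheap check instead of a full asymmetric verification."""
    return hmac.new(shared_key, content, hashlib.sha256).digest()

def core_check(content: bytes, tag: bytes, shared_key: bytes) -> bool:
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(edge_tag(content, shared_key), tag)
```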
Based on the propagator method, a fast 2-D Angle-Of-Arrival (AOA) estimation algorithm is proposed in this paper. The proposed algorithm needs neither the Eigen-Value Decomposition (EVD) nor the Singular Value Decomposition (SVD) of the Sample Covariance Matrix (SCM), so the fast algorithm has lower computational complexity with only insignificant performance degradation compared with conventional subspace approaches. Finally, computer simulations verify the effectiveness of the proposed algorithm.
The HASM (high accuracy surface modeling) technique is based on the fundamental theory of surfaces and has been proved to improve interpolation accuracy in surface fitting. However, the integral iterative solution in previous studies had high temporal complexity and huge memory usage, which made the technique difficult to put into application, especially for large-scale datasets. In this study, an innovative model (HASM-AD) is developed from sequential least squares on the basis of data adjustment theory. Sequential division is adopted, so the linear equations can be divided into groups and processed in sequence, greatly reducing the temporal complexity of the computation. Experiments indicate that the HASM-AD technique surpasses traditional spatial interpolation methods in accuracy, and cross-validation on soil pH data sampled in Jiangxi province confirms the same conclusion for spatial interpolation. Moreover, the study demonstrates that HASM-AD significantly reduces computational complexity and lessens memory usage.
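The sequential least-squares idea, processing the linear equations group by group so that only the small normal-equation matrices stay in memory, can be sketched as follows (a generic sketch, not the exact HASM-AD equation grouping):

```python
import numpy as np

def sequential_least_squares(blocks):
    """Least-squares solution of stacked (A_i, b_i) row groups processed
    in sequence: the normal equations A^T A x = A^T b are accumulated
    incrementally, so the full design matrix is never held in memory."""
    N = c = None
    for A, b in blocks:
        A = np.atleast_2d(np.asarray(A, dtype=float))
        b = np.asarray(b, dtype=float)
        if N is None:
            N, c = A.T @ A, A.T @ b
        else:
            N += A.T @ A
            c += A.T @ b
    return np.linalg.solve(N, c)
```

Accumulating normal equations this way gives the same solution as solving the full stacked system in one shot, which is what makes the sequential division lossless in accuracy.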
The problem of computing the greatest common divisor (GCD) of multivariate polynomials, one of the most important tasks of computer algebra and symbolic computation in a more general scope, has been studied extensively since mathematics and computer science first intersected. In many real applications, such as digital image restoration and enhancement, robust control theory of nonlinear systems, L1-norm convex optimization in compressed sensing, and algebraic decoding of Reed-Solomon and BCH codes, the concept of sparse GCD plays a core role: only greatest common divisors with far fewer terms than the original polynomials are of interest, due to the nature of the problems or data structures. This paper presents two methods based on multivariate polynomial interpolation, built on variations of Zippel's method and the Ben-Or/Tiwari algorithm, respectively. To reduce computational complexity, probabilistic techniques and randomization are employed for univariate GCD computation and univariate polynomial interpolation. The authors demonstrate the practical performance of the algorithms on a significant body of examples; the implemented experiments illustrate that the algorithms are efficient for a wide range of inputs.
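The univariate GCD subroutine that such randomized reductions bottom out in can be sketched as a monic Euclidean GCD over a prime field (a standard textbook routine, not the authors' exact implementation; coefficients are stored low-degree-first):

```python
def poly_gcd_mod_p(f, g, p):
    """Monic GCD of two univariate polynomials over GF(p), coefficient
    lists given low-degree-first, via the Euclidean algorithm."""
    def trim(a):
        # drop leading zero coefficients in place
        while a and a[-1] % p == 0:
            a.pop()
        return a

    def rem(a, b):
        a = [x % p for x in a]
        inv = pow(b[-1], p - 2, p)  # Fermat inverse of leading coefficient
        while len(a) >= len(b) and trim(a):
            q = a[-1] * inv % p
            s = len(a) - len(b)
            for i, bi in enumerate(b):
                a[s + i] = (a[s + i] - q * bi) % p
            trim(a)
        return a

    f = trim([x % p for x in f])
    g = trim([x % p for x in g])
    while g:
        f, g = g, rem(f, g)
    inv = pow(f[-1], p - 2, p)
    return [x * inv % p for x in f]  # normalize to a monic result
```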
Parameterizations that use mesh simplification to build the base domain have usually adopted the vertex removal scheme. This paper applies edge collapse to constructing the base domain instead. After inducing the parameterization of the original mesh over the base domain, the new algorithms map the new vertices of the simplified mesh back to the original one according to the edge transition sequence, thereby integrating the parameterization. We present a direct way, namely edge classification, to deduce the sequence. Experimental results show that the new parameterization saves considerable computing complexity while maintaining smoothness.
In recent years, a family of numerical algorithms for solving problems in real algebraic and semialgebraic geometry has been slowly growing. Unlike their counterparts in symbolic computation, they are numerically stable. But their complexity analysis, based on the condition of the data, is radically different from the usual complexity analysis in symbolic computation, as these numerical algorithms may run forever on a thin set of ill-posed inputs.
The authors present an algorithm which is a modification of the greedy reduction algorithm due to Nguyen and Stehle in 2009. This algorithm can be used to compute Minkowski reduced lattice bases for arbitrary-rank lattices with quadratic bit complexity in the size of the input vectors. The total bit complexity of the algorithm is O(n^2·(4n!)^n·(n!/2^n)^(n/2)·(4/3)^(n(n-1)/2)·log^2 A), where n is the rank of the lattice and A is the maximal norm of the input basis vectors. This is an O(log^2 A) algorithm for computing Minkowski reduced bases of fixed-rank lattices. An algorithm of time complexity n!·3^n·(log A)^O(1) for computing the successive minima with the help of the dual Hermite-Korkin-Zolotarev basis was given by Blomer in 2000 and improved to time complexity n!·(log A)^O(1) by Micciancio in 2008. The algorithm in this paper is more suitable for computing the Minkowski reduced bases of low-rank lattices with very large basis vector sizes.
Path length calculation is a frequent requirement in studies of graph-theoretic problems such as genetics. The standard method to calculate the average path length (APL) of a graph requires traversing all nodes in the graph repeatedly, which is computationally expensive for graphs containing a large number of nodes. We propose a novel method to calculate the APL of the graphs commonly required in studies of genetics. The proposed method is computationally less expensive and less time-consuming than the standard method.
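The standard baseline being improved on, one breadth-first search per node, can be sketched as (for an unweighted graph given as an adjacency dict; on a connected undirected graph this matches the usual APL definition):

```python
from collections import deque

def average_path_length(adj):
    """Average shortest-path distance over all ordered pairs of
    reachable nodes, via one BFS from every node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1  # exclude the source itself
    return total / pairs if pairs else 0.0
```

For a graph with V nodes and E edges this costs O(V·(V + E)), which is the expense the proposed method avoids.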
Funding: Supported by the National Natural Science Foundation of China (No. 60874060).
Funding: Supported by the Fundamental Research Funds for the Central Universities under Grant 2016XD-01.
Funding: Supported by the National Natural Science Foundation of China (Nos. 60634030, 60702066, and 6097219) and the Natural Science Foundation of Henan Province (No. 092300410158).
Funding: Project (61101185) supported by the National Natural Science Foundation of China.
Funding: Financially supported by the Shenzhen Key Fundamental Research Projects (Grant No. JCYJ20170306091556329).
Funding: Supported by the Foundation of the National Key Laboratory.
Funding: Supported by the National Science Fund for Distinguished Young Scholars (No. 40825003), the Major Directivity Projects of the Chinese Academy of Sciences (No. kzcx2-yw-429), and the National High-tech R&D Program of China (No. 2006AA12Z219).
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 11471209, 11561015, and 11301066, and the Guangxi Key Laboratory of Cryptography and Information Security under Grant No. GCIS201615.
Funding: Supported by the National Natural Science Foundation of China (Nos. 60273060, 60333010 and 60473106) and the Research Fund for the Doctoral Program of Higher Education of China (No. 20030335064).
Funding: Supported by a GRF grant from the Research Grants Council of the Hong Kong SAR (No. CityU 11310716).
Funding: Supported by the National Natural Science Foundation of China (No. 10871068) and a Danish National Research Foundation and National Natural Science Foundation of China joint grant (No. 11061130539).