The prediction of intrinsically disordered proteins is a hot research area in bioinformatics. Due to the high cost of experimental methods for evaluating disordered regions of protein sequences, it is becoming increasingly important to predict those regions through computational methods. In this paper, we developed a novel scheme that employs sequence complexity to calculate six features for each residue of a protein sequence: the Shannon entropy, the topological entropy, the sample entropy, and three amino acid preferences, namely Remark 465, Deleage/Roux, and B-factor (2STD). In particular, we introduced the sample entropy for calculating time-series complexity by mapping the amino acid sequence to a time series of digits 0-9. To our knowledge, the sample entropy has not previously been used for predicting IDPs and is therefore applied for the first time in our study. In addition, the scheme uses a properly sized sliding window over every protein sequence, which greatly improves the prediction performance. Finally, we used seven machine learning algorithms with 10-fold cross-validation on the dataset R80 collected by Yang et al. and on the dataset DIS1556 from the Database of Protein Disorder (DisProt) (https://www.disprot.org), which contains experimentally determined intrinsically disordered proteins (IDPs). The results showed that k-Nearest Neighbor was the most appropriate, with an overall prediction accuracy of 92%. Furthermore, our method uses just six features and hence requires lower computational complexity.
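As a hedged illustration of the per-residue windowed-entropy idea described above (the paper's exact residue-to-digit mapping and window size are not given here; the digit sequence and window length below are hypothetical), a Shannon-entropy feature can be computed along a digit-mapped sequence like this:

```python
import math
from collections import Counter

def shannon_entropy(window):
    """Shannon entropy (bits) of the symbol distribution in one window."""
    counts = Counter(window)
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def sliding_entropy(digits, win=7):
    """One entropy feature per residue, using a centered sliding window
    (truncated at the sequence ends)."""
    half = win // 2
    feats = []
    for i in range(len(digits)):
        lo, hi = max(0, i - half), min(len(digits), i + half + 1)
        feats.append(shannon_entropy(digits[lo:hi]))
    return feats

# Hypothetical digit-mapped fragment of a protein sequence (0-9 alphabet).
seq = [0, 0, 0, 0, 1, 2, 3, 4, 5, 6]
print(sliding_entropy(seq))
```

Low-complexity (repetitive) stretches, which correlate with disorder, yield low entropy; varied stretches yield high entropy.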
In this work, the computational complexity of a spin-glass three-dimensional (3D) Ising model (for the lattice size N = lmn, where l, m, n are the numbers of lattice points along the three crystallographic directions) is studied. We prove that an absolute minimum core (AMC) model, consisting of a spin-glass 2D Ising model interacting with its nearest neighboring plane, has computational complexity O(2^(mn)). Any algorithm that makes the model smaller (or simpler) than the AMC model would cut the basic element of the spin-glass 3D Ising model and lose much important information about the original model. Therefore, the computational complexity of the spin-glass 3D Ising model cannot be reduced below O(2^(mn)) by any algorithm; it is subexponential in N, but superpolynomial.
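To make the O(2^(mn)) scaling concrete, here is a hedged toy sketch (not the AMC model itself): an exhaustive ground-state search over one m×n spin-glass plane must visit all 2^(mn) spin configurations. Lattice size and couplings below are illustrative choices.

```python
import itertools
import random

def ground_state_energy(m, n, Jh, Jv):
    """Exhaustively minimize the 2D Ising energy E = -sum J_ij s_i s_j over
    all 2**(m*n) spin configurations -- the brute-force cost behind O(2^(mn))."""
    best = float("inf")
    for spins in itertools.product((-1, 1), repeat=m * n):
        s = lambda i, j: spins[i * n + j]
        e = 0
        for i in range(m):
            for j in range(n):
                if j + 1 < n:                    # horizontal bond
                    e -= Jh[i][j] * s(i, j) * s(i, j + 1)
                if i + 1 < m:                    # vertical bond
                    e -= Jv[i][j] * s(i, j) * s(i + 1, j)
        best = min(best, e)
    return best

random.seed(0)
m, n = 2, 3                                      # 2**6 = 64 configurations
Jh = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(m)]
Jv = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(m)]
print(ground_state_energy(m, n, Jh, Jv))
```

The configuration space doubles with every added spin, which is why trimming spins below the AMC plane is the only way to shrink the search, and the abstract argues that this destroys the model.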
This article introduces the approach of studying the computational complexity of matrix multiplication via the ranks of matrix multiplication tensors. Basic results and recent developments in this area are reviewed.
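A classical witness for the tensor-rank viewpoint is Strassen's algorithm: it multiplies 2×2 matrices with 7 scalar multiplications instead of 8, showing that the rank of the 2×2 matrix-multiplication tensor is at most 7. A minimal sketch:

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 scalar multiplications (Strassen),
    witnessing rank(<2,2,2> tensor) <= 7."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

Each m_i is one rank-one term of the tensor decomposition; applying the scheme recursively gives the O(n^2.81) bound, and lower tensor ranks for larger formats drive the modern exponent improvements the article surveys.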
We briefly survey a number of important recent achievements in Theoretical Computer Science (TCS), especially Computational Complexity Theory. We discuss the PCP Theorem and its implications for the inapproximability of combinatorial optimization problems; space-bounded computations, especially the deterministic logspace algorithm for the undirected graph connectivity problem; the deterministic polynomial-time primality test; lattice complexity and worst-case to average-case reductions; pseudorandomness and extractor constructions; and Valiant's new theory of holographic algorithms and reductions.
Consider a computer-assisted trading system in which the needs and the products of the traders are compared by a computer system and the trading proceeds without attaching a dollar price to each commodity. In such a system the computer serves as an 'intelligent' communication link between traders, enhancing the ability of producers and consumers to exchange goods. In this paper, we examine one computational aspect of such computerized trading schemes: given a list of trading proposals (each proposal specifying the quantities of the commodities to be traded), how should one arrange the trades so that the maximum number of trades can be made in the market? We show that this maximum trade problem is computationally hard; it is NP-complete (Nondeterministic Polynomial-Time Complete). We then describe some related open questions and potential solutions.
This paper proposes a low-cost yet high-accuracy direction of arrival (DOA) estimation method for automotive frequency-modulated continuous-wave (FMCW) radar. Existing subspace-based DOA estimation algorithms suffer from either high computational costs or low accuracy. We aim to resolve this contradictory relation between complexity and accuracy by using randomized matrix approximation. Specifically, we apply an easily interpretable randomized low-rank approximation to the covariance matrix (CM) R∈C^(M×M) in order to approximately compute its subspaces. That is, we first approximate R through sketch matrices in the form R≈QBQ^(H). Here the matrix Q∈C^(M×z) contains the orthonormal basis for the range of the sketch matrix C∈C^(M×z), which is extracted from R using randomized uniform column sampling, and B∈C^(z×z) is a weight matrix reducing the approximation error. Relying on this approximation, we are able to accelerate the subspace computation by orders of magnitude without compromising estimation accuracy. Furthermore, we derive a theoretical error bound for the suggested scheme to ensure the accuracy of the approximation. As validated by the simulation results, the DOA estimation accuracy of the proposed algorithm, efficient multiple signal classification (E-MUSIC), is high, closely tracks standard MUSIC, and outperforms the well-known algorithms with tremendously reduced time complexity. Thus, the devised method can realize high-resolution real-time target detection in the emerging multiple-input multiple-output (MIMO) automotive radar systems.
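A hedged numerical sketch of the R≈QBQ^(H) construction (uniform column sampling is per the abstract; the matrix sizes, the synthetic rank-2 covariance, and the seed are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_subspace(R, z):
    """Sketch-based approximation R ~= Q B Q^H.
    Q spans z uniformly sampled columns of R; B = Q^H R Q is the weight
    matrix that reduces the residual error."""
    M = R.shape[0]
    idx = rng.choice(M, size=z, replace=False)   # uniform column sampling
    C = R[:, idx]                                # sketch matrix C in C^(M x z)
    Q, _ = np.linalg.qr(C)                       # orthonormal basis of range(C)
    B = Q.conj().T @ R @ Q                       # weight matrix B in C^(z x z)
    return Q, B

# Hypothetical rank-2 covariance-like Hermitian matrix (two "sources").
M, r = 8, 2
A = rng.standard_normal((M, r)) + 1j * rng.standard_normal((M, r))
R = A @ A.conj().T
Q, B = randomized_subspace(R, z=4)
err = np.linalg.norm(R - Q @ B @ Q.conj().T) / np.linalg.norm(R)
print(err)  # near zero when z exceeds the number of sources
```

Eigendecomposing the small z×z matrix B instead of the M×M matrix R is where the order-of-magnitude subspace speedup claimed above comes from.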
This paper proposes a method of estimating the computational complexity of a problem by analyzing its input condition, for the N-vehicle exploration problem. The N-vehicle problem is first formulated as determining the optimal arrangement in the set of permutations of 1 to N. The complexity of the problem is factorial in N (the input scale of the problem). To balance the accuracy and efficiency of general algorithms, this paper presents a new systematic algorithm design and discusses the correspondence between the complexity of a problem and its input condition, rather than just putting forward a uniform approximation algorithm as usual. This is a new technique for analyzing the computation of NP problems. The method of correspondence is then presented. We finally carry out a simulation to verify the advantages of the method: 1) it decreases the computation in enumeration; 2) it efficiently obtains the computational complexity for any N-vehicle case; 3) it guides algorithm design for any N-vehicle case according to its complexity as estimated by the method.
Based on the iterative bit-filling procedure, a computationally efficient bit and power allocation algorithm is presented. The algorithm improves on conventional bit-filling algorithms by maintaining only a subset of subcarriers for computation in each iteration, which reduces the complexity without any performance degradation. Moreover, a modified algorithm with even lower complexity is developed, and equal power allocation is introduced as the initial allocation to accelerate its convergence. Simulation results show that the modified algorithm achieves a considerable complexity reduction while causing only a minor drop in performance.
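For context, the conventional bit-filling baseline that the paper improves upon can be sketched as follows (a hedged toy: the rate-power relation `(2^b - 1)/g` and the channel gains are illustrative assumptions, and the paper's subset-maintenance trick is not reproduced here):

```python
def bit_filling(gains, total_bits):
    """Greedy bit-filling: each iteration gives one more bit to the
    subcarrier whose incremental power cost is smallest."""
    n = len(gains)
    bits = [0] * n
    power = [0.0] * n

    def pw(g, b):                      # power needed to carry b bits on gain g
        return (2 ** b - 1) / g

    for _ in range(total_bits):
        # incremental power of adding one more bit on each subcarrier
        inc = [pw(g, b + 1) - pw(g, b) for g, b in zip(gains, bits)]
        k = min(range(n), key=inc.__getitem__)
        bits[k] += 1
        power[k] = pw(gains[k], bits[k])
    return bits, sum(power)

bits, total_power = bit_filling([4.0, 1.0, 0.25], total_bits=6)
print(bits, total_power)  # bits → [4, 2, 0]
```

Recomputing `inc` over all n subcarriers every iteration is the cost the proposed algorithm cuts by restricting attention to a candidate subset.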
Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results were related to very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, and general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years, it has become possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed, and their effective applications are highlighted. Problems that were open and have since been solved are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
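The (1+1)-EA mentioned above is simple enough to state in a few lines. On the classic OneMax toy problem (maximize the number of ones), its expected optimization time is Θ(n log n); the sketch below is a standard formulation, with the seed and problem size as illustrative choices:

```python
import random

def one_plus_one_ea(n, seed=1):
    """(1+1)-EA on OneMax: flip each bit independently with probability 1/n,
    accept the offspring if it is at least as fit.  Returns the number of
    generations until the all-ones optimum is reached."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        y = [b ^ (rng.random() < 1.0 / n) for b in x]   # standard bit mutation
        if sum(y) >= sum(x):                            # elitist selection
            x = y
        steps += 1
    return steps

print(one_plus_one_ea(20))
```

Runtime analyses of exactly this loop, via fitness-level and drift arguments, are the starting point of the survey's first research line.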
For future wireless communication systems, Power-Domain Non-Orthogonal Multiple Access (PD-NOMA) using an advanced receiver has been considered a promising radio access technology candidate. Power allocation plays an important role in the PD-NOMA system because it considerably affects the total throughput and the Geometric Mean User Throughput (GMUT) performance. However, most existing studies have not fully accounted for the computational complexity of the power allocation process when the User Terminals (UTs) move in a slow-fading channel environment. To resolve these problems, a power allocation method is proposed that considerably reduces the search space of a Full Search Power (FSP) allocation algorithm. The initial power reallocation coefficients are set to start from the former optimal values by the proposed lemma before searching for optimal power reallocation coefficients based on total throughput performance. The step size and correction granularity are adjusted within a much narrower power search range, while invalid power combinations may reasonably be discarded during the search process. The simulation results show that the proposed power reallocation scheme can greatly reduce computational complexity while the total throughput and GMUT performance loss are no greater than 1.5% compared with the FSP algorithm.
The use of the Internet of Things (IoT) is expanding at an unprecedented scale in many critical applications due to its ability to interconnect and utilize a plethora of devices. In critical infrastructure domains like oil and gas supply, intelligent transportation, power grids, and autonomous agriculture, it is essential to guarantee the confidentiality, integrity, and authenticity of the data collected and exchanged. However, the limited resources and heterogeneity of IoT devices make it inefficient, or sometimes infeasible, to achieve secure data transmission using traditional cryptographic techniques. Consequently, designing a lightweight secure data transmission scheme is becoming essential. In this article, we propose a lightweight secure data transmission (LSDT) scheme for IoT environments. LSDT consists of three phases and utilizes an effective combination of symmetric keys and the Elliptic Curve Menezes-Qu-Vanstone asymmetric key agreement protocol. We design a simulation environment and experiments to evaluate the performance of the LSDT scheme in terms of communication and computation costs. Security and performance analysis indicates that the LSDT scheme is secure, suitable for IoT applications, and performs better than other related security schemes.
Variable block-size motion estimation (ME) and disparity estimation (DE) are adopted in multi-view video coding (MVC) to achieve high coding efficiency. However, much higher computational complexity is also introduced into the coding system, which hinders the practical application of MVC. An efficient fast mode decision method using mode complexity is proposed to reduce the computational complexity. In the proposed method, mode complexity is first computed by using the spatial, temporal, and inter-view correlation between the current macroblock (MB) and its neighboring MBs. Based on the observation that direct mode is highly likely to be the optimal mode, the mode complexity is always checked in advance against a predefined threshold, providing an efficient early-termination opportunity. If this early-termination condition is not met, the MBs are classified into three mode types according to the value of mode complexity, i.e., simple mode, medium mode, and complex mode, to speed up the encoding process by reducing the number of variable block modes required to be checked. Furthermore, for simple- and medium-mode regions, the rate-distortion (RD) cost of mode 16×16 in the temporal prediction direction is compared with that of the disparity prediction direction, to determine in advance whether the optimal prediction direction is the temporal one, thereby skipping unnecessary disparity estimation. Experimental results show that the proposed method significantly reduces the computational load by 78.79% and the total bit rate by 0.07% on average, while incurring only a negligible loss of PSNR (about 0.04 dB on average), compared with the full mode decision (FMD) in the MVC reference software.
A nested linear array enables enhanced localization resolution and under-determined direction of arrival (DOA) estimation. In this paper, the traditional two-level nested linear array is improved to achieve more degrees of freedom (DOFs) and better angle estimation performance. Furthermore, a computationally efficient DOA estimation algorithm is proposed. The discrete Fourier transform (DFT) method is utilized to obtain coarse DOA estimates; subsequently, fine DOA estimates are obtained by the spatial smoothing multiple signal classification (SS-MUSIC) algorithm. Compared to the SS-MUSIC algorithm, the proposed algorithm has the same estimation accuracy with lower computational complexity, because the coarse DOA estimates shrink the range of the angle spectral search. In addition, the DFT method does not require the number of signals to be estimated in advance. Extensive simulation results testify to the effectiveness of the proposed algorithm.
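The coarse DFT stage can be sketched in a few lines. This is a hedged toy for a plain half-wavelength ULA with a single noiseless source (the paper's improved nested geometry and the SS-MUSIC refinement are not reproduced; array size, FFT length, and source angle are illustrative assumptions):

```python
import numpy as np

def coarse_doa_dft(snapshot, n_fft=256):
    """Coarse DOA via DFT: the peak FFT bin gives a normalized spatial
    frequency f, and for element spacing d = lambda/2, sin(theta) = 2f."""
    spec = np.fft.fft(snapshot, n_fft)
    k = np.argmax(np.abs(spec))
    f = np.fft.fftfreq(n_fft)[k]          # normalized spatial frequency
    return np.degrees(np.arcsin(np.clip(2 * f, -1, 1)))

# Hypothetical single source at 20 degrees on a 32-element ULA (d = lambda/2).
N, theta = 32, 20.0
a = np.exp(1j * np.pi * np.arange(N) * np.sin(np.radians(theta)))
est = coarse_doa_dft(a)
print(est)  # close to 20; a fine search (e.g., MUSIC) would refine it locally
```

Because the FFT peak already localizes the source to within a bin, the subsequent spectral search only needs to scan a narrow angular interval around `est`, which is the complexity saving claimed above.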
In this article, the inherent computational power of quantum entangled cluster states, examined by measurement-based quantum computation, is studied. By defining a common framework of rules for the measurement of quantum entangled cluster states based on classical computations, the precise and detailed meaning of the computing power of the correlations in quantum cluster states is made clear. This study exposes an interesting connection between the violation of local realistic models and the computing power of quantum entangled cluster states.
The title complex is widely used as an efficient key component of Ziegler-Natta catalysts for the stereospecific polymerization of dienes to produce synthetic rubbers. However, the quantitative structure-activity relationship (QSAR) of this kind of complex is still not clear, mainly due to the difficulty of obtaining their geometric molecular structures through laboratory experiments. An alternative solution is quantum chemistry calculation, in which the conformational population must be determined. In this study, ten conformers of the title complex were obtained with the molecular dynamics conformational search function in Gabedit 2.4.8, and their geometry optimization and thermodynamics calculations were made with a Sparkle/PM7 approach in MOPAC 2012. Their Gibbs free energies at 1 atm and 298.15 K were calculated. The population of the conformers was further calculated according to the theory of the Boltzmann distribution, indicating that one of the ten conformers has a dominant population of 77.13%.
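The Boltzmann-population step above is a short, standard calculation: p_i = exp(-ΔG_i/RT) / Σ_j exp(-ΔG_j/RT). A minimal sketch (the free-energy values below are hypothetical, not the paper's computed ones):

```python
import math

R = 8.314462618e-3   # gas constant, kJ/(mol*K)
T = 298.15           # temperature, K

def boltzmann_populations(gibbs_kj_mol):
    """Conformer populations from Gibbs free energies via the Boltzmann
    distribution; energies are shifted by the minimum for numerical safety."""
    g_min = min(gibbs_kj_mol)
    weights = [math.exp(-(g - g_min) / (R * T)) for g in gibbs_kj_mol]
    z = sum(weights)
    return [w / z for w in weights]

# Hypothetical relative free energies (kJ/mol) for a few conformers.
pops = boltzmann_populations([0.0, 3.0, 5.0])
print([round(p, 4) for p in pops])
```

The lowest-energy conformer always dominates; a gap of a few kJ/mol at 298 K is enough to produce a population skew like the 77.13% reported above.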
In this paper, a comparative study of kernel-PCA based linear parameter varying (LPV) model approximation of sufficiently nonlinear and reasonably practical systems is carried out. The number of linear matrix inequalities (LMIs) to be solved in the LPV controller design process increases exponentially with the number of scheduling variables. Fifteen kernel functions are used to obtain approximate LPV models of highly coupled nonlinear systems. An error-to-norm ratio of the original and approximate LPV models is introduced as a measure of the accuracy of the approximate LPV model. Simulation examples demonstrate the effectiveness of kernel-PCA for LPV model approximation: with the identification of an accurate approximate LPV model, the computational complexity involved in LPV controller design decreases exponentially.
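The kernel-PCA step that compresses the scheduling variables can be sketched as follows. This is a generic RBF-kernel PCA (one of the fifteen kernels mentioned above); the data, kernel width, and component count are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Kernel PCA with an RBF kernel: build the Gram matrix, double-center
    it, and project onto its leading eigenvectors."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # double centering
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]  # leading eigenpairs
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.clip(vals, 0.0, None))

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))     # hypothetical scheduling-variable samples
Z = kernel_pca(X, n_components=2)
print(Z.shape)
```

Reducing the scheduling dimension from 3 to 2 here stands in for the exponential LMI saving: each dropped scheduling variable roughly halves the number of LMI vertices in the controller synthesis.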
In this paper, we propose Triangular Code (TC), a new class of fountain code with near-zero redundancy and linear encoding and decoding computational complexities of O(Lk log k), where k is the packet batch size and L is the packet data length. Different from previous works, where the optimal performance of codes has been shown under asymptotic assumptions, TC enjoys near-zero redundancy even under non-asymptotic settings for a small-to-moderate number of packets. These features make TC suitable for practical implementation in battery-constrained devices in IoT, D2D, and M2M network paradigms, to achieve scalable reliability and minimize latency thanks to its low decoding delay. TC is a non-linear code, which is encoded using simple shift and XOR addition operations, and decoded using the simple back-substitution algorithm. Although it is a non-linear code at the packet level, it remains a linear code when atomized at the bit level. We use this property to show that the back-substitution decoder of TC is equivalent to the Belief Propagation (BP) decoder of the LT code. Therefore, TC can benefit from the rich literature published on the LT code to design efficient codes for various applications. Despite the equivalence between the decoders of TC and the LT code, we show that, compared to the state-of-the-art optimized LT code, TC reduces the redundancy of the LT code by 68%-99% for k up to 1024.
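To illustrate the triangular-structure-plus-back-substitution idea in the abstract, here is a deliberately simplified toy (our own construction: it omits the shift operations and rateless behavior of the actual TC, keeping only a lower-triangular XOR code and its linear-time back-substitution decoder):

```python
def tc_encode(packets):
    """Toy triangular encoding: coded[i] is the XOR of packets 0..i, so the
    packet-level coefficient matrix is lower-triangular."""
    coded, acc = [], 0
    for p in packets:
        acc ^= p
        coded.append(acc)
    return coded

def tc_decode(coded):
    """Back-substitution: once coded[i-1] is resolved, packet i follows from
    a single XOR, giving linear-time decoding."""
    packets, prev = [], 0
    for c in coded:
        packets.append(c ^ prev)
        prev = c
    return packets

data = [0b1010, 0b0111, 0b1100, 0b0001]
assert tc_decode(tc_encode(data)) == data
```

The triangular structure is what makes back-substitution possible: each coded packet introduces exactly one new unknown, mirroring how the TC decoder peels packets one at a time like BP decoding of an LT code.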
Appearance-based dynamic Hand Gesture Recognition (HGR) remains a prominent area of research in Human-Computer Interaction (HCI). Numerous environmental and computational constraints limit its real-time deployment. In addition, the performance of a model decreases as the subject's distance from the camera increases. This study proposes a 3D separable Convolutional Neural Network (CNN), considering the model's computational complexity and recognition accuracy. The 20BN-Jester dataset was used to train the model for six gesture classes. After achieving the best offline recognition accuracy of 94.39%, the model was deployed in real time while considering the subject's attention, the instant of performing a gesture, and the subject's distance from the camera. Despite being discussed in numerous research articles, the distance factor remains unresolved in real-time deployment, which leads to degraded recognition results. In the proposed approach, the distance calculation substantially improves classification performance by reducing the impact of the subject's distance from the camera. Additionally, the feature extraction capability, degree of relevance, and statistical significance of the proposed model against other state-of-the-art models were validated using t-distributed Stochastic Neighbor Embedding (t-SNE), the Matthews Correlation Coefficient (MCC), and the McNemar test, respectively. We observed that the proposed model exhibits state-of-the-art outcomes and a comparatively high significance level.
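The complexity benefit of a separable 3D convolution can be made concrete with a parameter count. This sketch assumes a (2+1)D-style factorization (spatial then temporal), which is one common separable scheme; whether the paper factorizes exactly this way is our assumption, and the layer sizes are illustrative:

```python
def conv3d_params(cin, cout, kt, kh, kw):
    """Weight count of a standard 3D convolution (bias omitted)."""
    return cin * cout * kt * kh * kw

def separable_conv3d_params(cin, cout, kt, kh, kw):
    """Factorized (2+1)D-style variant: a spatial 1 x kh x kw convolution
    followed by a temporal kt x 1 x 1 convolution."""
    return cin * cout * kh * kw + cout * cout * kt

std = conv3d_params(32, 64, 3, 3, 3)
sep = separable_conv3d_params(32, 64, 3, 3, 3)
print(std, sep, round(std / sep, 2))  # → 55296 30720 1.8
```

The saving compounds across layers, which is why separable 3D CNNs are attractive for the real-time HGR deployment discussed above.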
In this advanced exploration, we focus on multiple-parameter estimation in bistatic Multiple-Input Multiple-Output (MIMO) radar systems, a crucial technique for target localization and imaging. Our research innovatively addresses the joint estimation of the Direction of Departure (DOD), Direction of Arrival (DOA), and Doppler frequency for incoherent targets. We propose a novel approach that significantly reduces computational complexity by utilizing the Temporal-Spatial Nested Sampling Model (TSNSM). Our methodology begins with a multi-linear mapping mechanism to efficiently eliminate unnecessary virtual Degrees of Freedom (DOFs) and reorganize the remaining ones. We then employ the Toeplitz matrix triple iteration reconstruction method, surpassing the traditional Temporal-Spatial Smoothing Window (TSSW) approach, to mitigate the single-snapshot effect and reduce computational demands. We further refine the high-dimensional ESPRIT algorithm for the joint estimation of DOD, DOA, and Doppler frequency, eliminating the need for additional parameter pairing. Moreover, we meticulously derive the Cramér-Rao Bound (CRB) for the TSNSM. This signal model allows for a second expansion of DOFs in the time and space domains, achieving high precision in target angle and Doppler frequency estimation with low computational complexity. Our adaptable algorithm is validated through simulations and is suitable for sparse-array MIMO radars with various structures, ensuring higher precision in parameter estimation with a lower complexity burden.
Image Super-Resolution (SR) research has achieved great success with powerful neural networks. Deeper networks with more parameters improve restoration quality but add computational complexity, which means more inference time is needed, hindering image SR from practical usage. Noting the spatial distribution of objects in images, a two-stage local-object SR system is proposed, which consists of two modules: the object detection module and the SR module. First, You Only Look Once (YOLO), which is efficient in generic object detection tasks, is selected to detect the input images and obtain the objects of interest; these are then fed into the SR module, which outputs the corresponding High-Resolution (HR) sub-images. The computational power consumption of image SR is optimized by reducing the resolution of the input images. In addition, we establish a dataset, TrafficSign500, for our experiment. Finally, the performance of the proposed system is evaluated with several State-Of-The-Art (SOTA) YOLOv5 and SISR models. Results show that our system can achieve a tremendous computation improvement in image SR.
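The source of the computation saving can be sketched with simple arithmetic, under the assumption (ours, a common first-order model) that SR network cost scales with the number of input pixels; the frame and crop sizes below are hypothetical:

```python
def sr_cost(h, w, flops_per_pixel=1.0):
    """First-order model: SR cost proportional to input pixel count."""
    return h * w * flops_per_pixel

full = sr_cost(1080, 1920)                       # super-resolving the whole frame
# Hypothetical YOLO detections: two traffic-sign crops from the same frame.
crops = [sr_cost(96, 96), sr_cost(128, 64)]      # super-resolving only the objects
saving = 1 - sum(crops) / full
print(round(saving, 4))
```

Because objects of interest typically cover a small fraction of the frame, detecting first and super-resolving only the crops removes most of the SR workload, which is the effect the system above exploits.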
Funding: This work has been supported by the National Natural Science Foundation of China under grant numbers 51590883 and 51331006, and by the State Key Project of Research and Development of China (No. 2017YFA0206302).
Funding: This work was partially supported by the National Natural Science Foundation of China (Nos. 11431002, 11771328, 11871051), the Young Elite Scientists Sponsorship Program by Tianjin, the Natural Science Foundation of Zhejiang Province (No. LD19A010002), and the Innovation Research Foundation of Tianjin University (No. 2017XRG-0015).
Funding: supported by the State 973 Program (2006CB701306) and the Key Laboratory of Management, Decision and Information Systems, CAS.
Funding: The National High Technology Research and Development Program of China (863 Program) (No. 2006AA01Z263) and the National Natural Science Foundation of China (No. 60496311).
Abstract: Based on the iterative bit-filling procedure, a computationally efficient bit and power allocation algorithm is presented. The algorithm improves on conventional bit-filling algorithms by maintaining only a subset of subcarriers for computation in each iteration, which reduces the complexity without any performance degradation. Moreover, a modified algorithm with even lower complexity is developed, and equal power allocation is introduced as an initial allocation to accelerate its convergence. Simulation results show that the modified algorithm achieves a considerable complexity reduction while causing only a minor drop in performance.
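The greedy bit-filling idea can be sketched as follows (an illustrative toy using an assumed SNR-gap model for the incremental power cost; it is not the paper's exact algorithm or its subset-maintenance refinement):

```python
import heapq

def bit_filling(gains, total_bits, gamma=1.0):
    """Greedy bit-filling: repeatedly add one bit to the subcarrier whose
    incremental power cost is smallest. `gamma` is an assumed SNR-gap
    constant; the cost to raise subcarrier n from b to b+1 bits is
    gamma * 2**b / gains[n] under a standard QAM bit-loading model."""
    bits = [0] * len(gains)
    power = [0.0] * len(gains)
    # Priority queue of (incremental cost for the next bit, subcarrier).
    heap = [(gamma * 1.0 / g, n) for n, g in enumerate(gains)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        dp, n = heapq.heappop(heap)
        bits[n] += 1
        power[n] += dp
        # Push the cost of the *next* bit on this subcarrier.
        heapq.heappush(heap, (gamma * 2 ** bits[n] / gains[n], n))
    return bits, sum(power)

# Three subcarriers with unequal channel gains, six bits to allocate.
bits, total_power = bit_filling([4.0, 1.0, 0.25], total_bits=6)
```

Each iteration touches only the cheapest candidate; keeping candidates in a priority queue mirrors the idea of restricting computation to a small subset of subcarriers per iteration.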
Funding: This work was supported by an EPSRC grant (No. EP/C520696/1).
Abstract: Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results were related to very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, as well as general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years, it has been possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed, and their effective applications are highlighted. Problems that have since been solved are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
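As a concrete example of the kind of algorithm these analyses cover, the toy experiment below runs the (1+1)-EA on OneMax, whose expected runtime is known to be Θ(n log n); the problem size, seeds, and number of runs are arbitrary choices for illustration:

```python
import random

def one_plus_one_ea_onemax(n, seed=None):
    """Run the (1+1)-EA on OneMax (maximize the number of ones) and
    return the number of iterations until the all-ones string is found."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        # Offspring: flip each bit independently with probability 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        # Elitist selection: accept the offspring if it is not worse.
        if sum(y) >= sum(x):
            x = y
        steps += 1
    return steps

runs = [one_plus_one_ea_onemax(32, seed=s) for s in range(10)]
avg = sum(runs) / len(runs)
```

Averaging over independent runs gives an empirical check against the theoretical e·n·ln(n) leading term for this landscape.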
Funding: Supported in part by the Science and Technology Research Program of the National Science Foundation of China (61671096), the Chongqing Research Program of Basic Science and Frontier Technology (cstc2017jcyjBX0005), the Chongqing Municipal Education Commission (KJQN201800642), and the Doctoral Student Training Program (BYJS2016009).
Abstract: For future wireless communication systems, Power Domain Non-Orthogonal Multiple Access (PD-NOMA) using an advanced receiver has been considered a promising radio access technology candidate. Power allocation plays an important role in the PD-NOMA system because it considerably affects the total throughput and Geometric Mean User Throughput (GMUT) performance. However, most existing studies have not fully accounted for the computational complexity of the power allocation process when the User Terminals (UTs) move in a slow fading channel environment. To resolve this problem, a power allocation method is proposed that considerably reduces the search space of a Full Search Power (FSP) allocation algorithm. The initial power reallocation coefficients are set to the former optimal values by the proposed lemma before searching for optimal power reallocation coefficients based on total throughput performance. The step size and correction granularity are adjusted within a much narrower power search range, while invalid power combinations may be reasonably discarded during the search. The simulation results show that the proposed power reallocation scheme can greatly reduce computational complexity while the total throughput and GMUT performance loss is no greater than 1.5% compared with the FSP algorithm.
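The warm-start narrowing idea can be illustrated with a toy two-user downlink PD-NOMA model. Everything here is an assumption for illustration: the proportional-fairness objective, channel gains, SNR, grid step, and window radius; the paper's lemma and its FSP algorithm are not reproduced.

```python
import math

def rates(alpha, g1, g2, snr=100.0):
    """Per-user rates in a toy two-user PD-NOMA downlink: the weak user
    (gain g1) gets power share alpha; the strong user applies SIC."""
    r1 = math.log2(1 + alpha * snr * g1 / ((1 - alpha) * snr * g1 + 1))
    r2 = math.log2(1 + (1 - alpha) * snr * g2)
    return r1, r2

def objective(alpha, g1, g2):
    r1, r2 = rates(alpha, g1, g2)
    return r1 * r2  # proportional-fairness style objective (assumed)

def full_search(g1, g2, step=1e-3):
    """Full grid search over the power share: the costly baseline."""
    grid = [i * step for i in range(1, int(1 / step))]
    return max(grid, key=lambda a: objective(a, g1, g2))

def narrowed_search(g1, g2, prev_alpha, radius=0.05, step=1e-3):
    """Warm-started search over a narrow window around the previous
    optimum, exploiting the slowly fading channel."""
    lo = max(step, prev_alpha - radius)
    hi = min(1 - step, prev_alpha + radius)
    grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    return max(grid, key=lambda a: objective(a, g1, g2))

a_full = full_search(0.2, 1.0)
# The channel changed slightly; restart the search near the old optimum.
a_fast = narrowed_search(0.21, 1.0, prev_alpha=a_full)
```

The narrowed search evaluates roughly a tenth of the grid points of the full search while starting from the previously optimal coefficient, which is the essence of the complexity reduction described above.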
Funding: Supported by the Interdisciplinary Research Center for Intelligent Secure Systems (IRC-ISS) Internal Fund Grant #INSS2202.
Abstract: The use of the Internet of Things (IoT) is expanding at an unprecedented scale in many critical applications due to the ability to interconnect and utilize a plethora of wide-ranging devices. In critical infrastructure domains like oil and gas supply, intelligent transportation, power grids, and autonomous agriculture, it is essential to guarantee the confidentiality, integrity, and authenticity of data collected and exchanged. However, the limited resources coupled with the heterogeneity of IoT devices make it inefficient or sometimes infeasible to achieve secure data transmission using traditional cryptographic techniques. Consequently, designing a lightweight secure data transmission scheme is becoming essential. In this article, we propose a lightweight secure data transmission (LSDT) scheme for IoT environments. LSDT consists of three phases and utilizes an effective combination of symmetric keys and the Elliptic Curve Menezes-Qu-Vanstone asymmetric key agreement protocol. We design the simulation environment and experiments to evaluate the performance of the LSDT scheme in terms of communication and computation costs. Security and performance analysis indicates that the LSDT scheme is secure, suitable for IoT applications, and performs better in comparison to other related security schemes.
Funding: Project (08Y29-7) supported by the Transportation Science and Research Program of Jiangsu Province, China; Project (201103051) supported by the Major Infrastructure Program of the Health Monitoring System Hardware Platform Based on Sensor Network Node, China; Project (61100111) supported by the National Natural Science Foundation of China; Project (BE2011169) supported by the Scientific and Technical Supporting Program of Jiangsu Province, China.
Abstract: Variable block-size motion estimation (ME) and disparity estimation (DE) are adopted in multi-view video coding (MVC) to achieve high coding efficiency. However, much higher computational complexity is also introduced into the coding system, which hinders the practical application of MVC. An efficient fast mode decision method using mode complexity is proposed to reduce the computational complexity. In the proposed method, mode complexity is first computed using the spatial, temporal, and inter-view correlation between the current macroblock (MB) and its neighboring MBs. Based on the observation that direct mode is highly likely to be the optimal mode, the mode complexity is always checked in advance to see whether it is below a predefined threshold, providing an efficient early termination opportunity. If this early termination condition is not met, the MBs are classified into three mode types according to the value of mode complexity, i.e., simple mode, medium mode and complex mode, to speed up the encoding process by reducing the number of variable block modes that need to be checked. Furthermore, for simple and medium mode regions, the rate distortion (RD) cost of mode 16×16 in the temporal prediction direction is compared with that of the disparity prediction direction to determine in advance whether the optimal prediction direction is the temporal one, thereby skipping unnecessary disparity estimation. Experimental results show that the proposed method is able to significantly reduce the computational load by 78.79% and the total bit rate by 0.07% on average, while incurring only a negligible loss of PSNR (about 0.04 dB on average), compared with the full mode decision (FMD) in the MVC reference software.
Funding: Supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. SJCX18_0103) and the Key Laboratory of Dynamic Cognitive System of Electromagnetic Spectrum Space (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology (No. KF20181915).
Abstract: Nested linear arrays make it possible to enhance localization resolution and achieve under-determined direction of arrival (DOA) estimation. In this paper, the traditional two-level nested linear array is improved to achieve more degrees of freedom (DOFs) and better angle estimation performance. Furthermore, a computationally efficient DOA estimation algorithm is proposed. The discrete Fourier transform (DFT) method is utilized to obtain coarse DOA estimates, and subsequently, fine DOA estimates are obtained by the spatial smoothing multiple signal classification (SS-MUSIC) algorithm. Compared to the SS-MUSIC algorithm, the proposed algorithm has the same estimation accuracy with lower computational complexity, because the coarse DOA estimates make it possible to shrink the range of the angle spectral search. In addition, the DFT method does not require the number of signals to be estimated in advance. Extensive simulation results testify to the effectiveness of the proposed algorithm.
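The coarse-then-fine strategy can be illustrated on a plain uniform linear array. This is a simplified sketch: one source, an ordinary half-wavelength ULA rather than the improved nested array, and standard MUSIC instead of SS-MUSIC; all sizes, the seed, and the 0.01° grid step are assumptions.

```python
import numpy as np

def music_spectrum(En, angles_deg, M):
    """MUSIC pseudo-spectrum of a half-wavelength ULA over an angle grid."""
    a = np.exp(1j * np.pi * np.outer(np.arange(M),
                                     np.sin(np.deg2rad(angles_deg))))
    proj = En.conj().T @ a                 # projection onto the noise subspace
    return 1.0 / np.einsum('zk,zk->k', proj.conj(), proj).real

# Simulate one source at 15 degrees on a 16-element ULA, 200 snapshots.
rng = np.random.default_rng(1)
M, N, true_deg = 16, 200, 15.0
a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(true_deg)))
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(a, s) + noise

# Coarse stage: the peak of the spatial DFT gives a rough DOA with no search.
K = 4 * M
spec = np.abs(np.fft.fft(X, K, axis=0)).sum(axis=1)
f = np.argmax(spec) / K
if f > 0.5:
    f -= 1.0                               # map the DFT bin to [-0.5, 0.5)
coarse_deg = np.rad2deg(np.arcsin(2 * f))

# Fine stage: MUSIC spectral search restricted to a narrow window.
R = X @ X.conj().T / N
_, V = np.linalg.eigh(R)
En = V[:, :-1]                             # noise subspace (one source assumed)
grid = np.arange(coarse_deg - 2.0, coarse_deg + 2.0, 0.01)
est_deg = grid[np.argmax(music_spectrum(En, grid, M))]
```

The fine search scans only a ±2° window instead of the full field of view, which is how the coarse DFT estimate shrinks the spectral search and lowers complexity.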
Abstract: This article studies the inherent computational power of quantum entangled cluster states as examined by measurement-based quantum computation. By defining a common framework of rules for measurements of entangled cluster states assisted by classical computation, a precise and detailed meaning is given to the computing power of the correlations in cluster states. The study exposes an interesting connection between the violation of local realistic models and the computing power of entangled cluster states.
Funding: Supported by the National Natural Science Foundation of China (No. 21476119).
Abstract: The title complex is widely used as an efficient key component of Ziegler-Natta catalysts for the stereospecific polymerization of dienes to produce synthetic rubbers. However, the quantitative structure-activity relationship (QSAR) of this kind of complex is still not clear, mainly due to the difficulty of obtaining their geometric molecular structures through laboratory experiments. An alternative solution is quantum chemistry calculation, in which the conformational population must be determined. In this study, ten conformers of the title complex were obtained with the molecular dynamics conformational search function in Gabedit 2.4.8, and their geometry optimization and thermodynamics calculations were performed with a Sparkle/PM7 approach in MOPAC 2012. Their Gibbs free energies at 1 atm and 298.15 K were calculated. The population of the conformers was then calculated according to the Boltzmann distribution, indicating that one of the ten conformers has a dominant population of 77.13%.
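The Boltzmann-population step works as follows (with made-up relative Gibbs free energies for four conformers, not the paper's computed values):

```python
import math

# Boltzmann populations of conformers from their relative Gibbs free
# energies (illustrative values in kcal/mol).
R = 1.987204e-3          # gas constant, kcal/(mol*K)
T = 298.15               # temperature, K
G = [0.0, 0.72, 1.35, 2.1]   # relative Gibbs free energies (assumed)

# p_i = exp(-G_i / RT) / sum_j exp(-G_j / RT)
weights = [math.exp(-g / (R * T)) for g in G]
Z = sum(weights)
populations = [w / Z for w in weights]
```

As in the study, the lowest-energy conformer dominates the population; small free-energy gaps of order RT (about 0.59 kcal/mol at 298.15 K) are enough to concentrate most of the population in one conformer.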
Abstract: In this paper, a comparative study of kernel-PCA based linear parameter varying (LPV) model approximation of sufficiently nonlinear and reasonably practical systems is carried out. The number of linear matrix inequalities (LMIs) to be solved in the LPV controller design process increases exponentially with the number of scheduling variables. Fifteen kernel functions are used to obtain approximate LPV models of highly coupled nonlinear systems. An error-to-norm ratio of the original and approximate LPV models is introduced as a measure of the accuracy of the approximate LPV model. Simulation examples confirm the effectiveness of kernel-PCA for LPV model approximation: with the identification of an accurate approximate LPV model, the computational complexity involved in LPV controller design decreases exponentially.
Abstract: In this paper, we propose Triangular Code (TC), a new class of fountain code with near-zero redundancy and linear encoding and decoding computational complexities of O(Lk log k), where k is the packet batch size and L is the packet data length. Different from previous works, where the optimal performance of codes has been shown under asymptotic assumptions, TC enjoys near-zero redundancy even under non-asymptotic settings for a small-to-moderate number of packets. These features make TC suitable for practical implementation in battery-constrained devices in IoT, D2D and M2M network paradigms to achieve scalable reliability and minimize latency due to its low decoding delay. TC is a non-linear code, which is encoded using simple shift and XOR addition operations, and decoded using the simple back-substitution algorithm. Although it is a non-linear code at the packet level, it remains a linear code when atomized at the bit level. We use this property to show that the back-substitution decoder of TC is equivalent to the Belief Propagation (BP) decoder of the LT code. Therefore, TC can benefit from the rich, prolific literature published on the LT code to design efficient codes for various applications. Despite the equivalence between the decoders of TC and the LT code, we show that, compared to the state-of-the-art optimized LT code, TC reduces the redundancy of the LT code by 68%-99% for k up to 1024.
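The shift-and-XOR encoding and back-substitution decoding can be illustrated with packets represented as Python integers (a toy with a fixed triangular shift pattern, not the paper's actual TC construction):

```python
def tc_encode(packets):
    """Toy triangular shift-and-XOR encoder: coded[i] combines packets
    0..i, with packet j shifted left by (i - j) bits, so the system of
    coded packets is triangular in the original packets."""
    coded = []
    for i in range(len(packets)):
        c = 0
        for j in range(i + 1):
            c ^= packets[j] << (i - j)
        coded.append(c)
    return coded

def tc_decode(coded):
    """Back-substitution: packet i is recovered from coded[i] once
    packets 0..i-1 are known, by XOR-cancelling their shifted copies."""
    packets = []
    for i, c in enumerate(coded):
        for j in range(i):
            c ^= packets[j] << (i - j)
        packets.append(c)
    return packets

data = [0b1011, 0b0110, 0b1110, 0b0001]
recovered = tc_decode(tc_encode(data))
```

Note that XOR with a bit shift is non-linear over packets treated as symbols, yet linear over the individual bits, which is the property the abstract uses to relate TC's back-substitution decoder to BP decoding of LT codes.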
Abstract: Appearance-based dynamic Hand Gesture Recognition (HGR) remains a prominent area of research in Human-Computer Interaction (HCI). Numerous environmental and computational constraints limit its real-time deployment. In addition, the performance of a model decreases as the subject's distance from the camera increases. This study proposes a 3D separable Convolutional Neural Network (CNN), considering the model's computational complexity and recognition accuracy. The 20BN-Jester dataset was used to train the model for six gesture classes. After achieving the best offline recognition accuracy of 94.39%, the model was deployed in real time while considering the subject's attention, the instant of performing a gesture, and the subject's distance from the camera. Despite being discussed in numerous research articles, the distance factor remains unresolved in real-time deployment, which leads to degraded recognition results. In the proposed approach, the distance calculation substantially improves classification performance by reducing the impact of the subject's distance from the camera. Additionally, the feature extraction capability, degree of relevance, and statistical significance of the proposed model against other state-of-the-art models were validated using t-distributed Stochastic Neighbor Embedding (t-SNE), Matthews Correlation Coefficient (MCC), and the McNemar test, respectively. We observed that the proposed model exhibits state-of-the-art outcomes and a comparatively high significance level.
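The complexity saving of a separable 3D convolution over a standard one can be seen from a simple parameter count (the layer sizes below are assumed for illustration; the paper's exact architecture is not reproduced):

```python
def conv3d_params(cin, cout, k):
    """Parameter count of a standard 3D convolution with a k*k*k kernel
    (biases omitted)."""
    return cin * cout * k ** 3

def separable_conv3d_params(cin, cout, k):
    """Depthwise-separable 3D convolution: one k*k*k kernel per input
    channel, followed by a 1x1x1 pointwise convolution (biases omitted)."""
    return cin * k ** 3 + cin * cout

# Example layer: 32 -> 64 channels with a 3x3x3 kernel (assumed sizes).
standard = conv3d_params(32, 64, 3)
separable = separable_conv3d_params(32, 64, 3)
ratio = standard / separable
```

For this layer the separable factorization needs roughly 19x fewer parameters, and the multiply-accumulate count scales the same way, which is what makes such models viable for real-time deployment.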
Funding: Supported in part by the National Natural Science Foundation of China (No. 62071476), in part by the China Postdoctoral Science Foundation (No. 2022M723879), and in part by the Science and Technology Innovation Program of Hunan Province, China (No. 2021RC3080).
Abstract: In this work, we focus on multiple-parameter estimation in bistatic Multiple-Input Multiple-Output (MIMO) radar systems, a crucial technique for target localization and imaging. Our research addresses the joint estimation of the Direction of Departure (DOD), Direction of Arrival (DOA), and Doppler frequency for incoherent targets. We propose a novel approach that significantly reduces computational complexity by utilizing the Temporal-Spatial Nested Sampling Model (TSNSM). Our methodology begins with a multi-linear mapping mechanism to efficiently eliminate unnecessary virtual Degrees of Freedom (DOFs) and reorganize the remaining ones. We then employ the Toeplitz matrix triple iteration reconstruction method, surpassing the traditional Temporal-Spatial Smoothing Window (TSSW) approach, to mitigate the single-snapshot effect and reduce computational demands. We further refine the high-dimensional ESPRIT algorithm for the joint estimation of DOD, DOA, and Doppler frequency, eliminating the need for additional parameter pairing. Moreover, we derive the Cramér-Rao Bound (CRB) for the TSNSM. This signal model allows for a second expansion of DOFs in the time and space domains, achieving high precision in target angle and Doppler frequency estimation with low computational complexity. Our adaptable algorithm is validated through simulations and is suitable for sparse-array MIMO radars with various structures, ensuring higher precision in parameter estimation with a lower complexity burden.
Funding: Supported by the National Natural Science Foundation of China (NSFC) under Grant No. 62001057, by the Beijing University of Posts and Telecommunications Basic Research Fund (2021RC26), and by the National Natural Science Foundation of China (NSFC) under Grant Nos. 61871048 and 61872253.
Abstract: Image Super-Resolution (SR) research has achieved great success with powerful neural networks. Deeper networks with more parameters improve restoration quality but add computational complexity, meaning more inference time is required, which hinders image SR from practical usage. Noting the spatial distribution of objects in images, a two-stage local-object SR system is proposed, which consists of two modules: an object detection module and an SR module. First, You Only Look Once (YOLO), which is efficient in generic object detection tasks, is selected to detect the input images and obtain the objects of interest, which are then fed into the SR module to output the corresponding High-Resolution (HR) sub-images. The computational power consumption of image SR is optimized by reducing the resolution of the input images. In addition, we establish a dataset, TrafficSign500, for our experiment. Finally, the performance of the proposed system is evaluated with several State-Of-The-Art (SOTA) YOLOv5 and SISR models. Results show that our system can achieve a tremendous computation improvement in image SR.