The prediction of intrinsically disordered proteins (IDPs) is an active research area in bioinformatics. Due to the high cost of experimental methods for evaluating disordered regions of protein sequences, it is becoming increasingly important to predict those regions through computational methods. In this paper, we developed a novel scheme that employs sequence complexity to calculate six features for each residue of a protein sequence: the Shannon entropy, the topological entropy, the sample entropy, and three amino acid preferences, namely Remark 465, Deleage/Roux, and B-factor (2STD). In particular, we introduced the sample entropy for calculating time-series complexity by mapping the amino acid sequence to a time series over 0-9. To our knowledge, the sample entropy has not previously been used for predicting IDPs and is therefore applied for the first time in our study. In addition, the scheme uses a properly sized sliding window over every protein sequence, which greatly improves the prediction performance. Finally, we applied seven machine learning algorithms with 10-fold cross-validation to the dataset R80 collected by Yang et al. and the dataset DIS1556 from the Database of Protein Disorder (DisProt) (https://www.disprot.org), which contains experimentally determined IDPs. The results showed that k-Nearest Neighbor was the most appropriate and achieved an overall prediction accuracy of 92%. Furthermore, our method uses only six features and hence has lower computational complexity.
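As a concrete illustration of the per-residue complexity features, the sketch below computes the Shannon entropy over a sliding window of an amino-acid sequence; the window size and the test sequence are assumed example values, and the other features (topological entropy, sample entropy on the 0-9 mapping, the preference scales) would be computed per window in the same fashion.

```python
import math
from collections import Counter

def shannon_entropy(fragment: str) -> float:
    """Shannon entropy (bits) of the amino-acid composition of a fragment."""
    counts = Counter(fragment)
    n = len(fragment)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def windowed_entropy(sequence: str, window: int = 21) -> list:
    """Per-residue Shannon entropy over a sliding window centred on each
    residue (the window size here is an assumed example value)."""
    half = window // 2
    features = []
    for i in range(len(sequence)):
        lo, hi = max(0, i - half), min(len(sequence), i + half + 1)
        features.append(shannon_entropy(sequence[lo:hi]))
    return features

if __name__ == "__main__":
    seq = "MKVLAAGGGGGGGGSSPQRSTVWYACDEF"   # hypothetical test sequence
    print([round(x, 3) for x in windowed_entropy(seq, window=9)])
```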
This paper proposes a low-cost yet high-accuracy direction of arrival (DOA) estimation method for the automotive frequency-modulated continuous-wave (FMCW) radar. The existing subspace-based DOA estimation algorithms suffer from either high computational costs or low accuracy. We aim to resolve this contradiction between complexity and accuracy by using randomized matrix approximation. Specifically, we apply an easily interpretable randomized low-rank approximation to the covariance matrix (CM) R∈C^(M×M) to approximately compute its subspaces. That is, we first approximate R through sketch matrices in the form of R≈QBQ^(H). Here the matrix Q∈C^(M×z) contains the orthonormal basis for the range of the sketch matrix C∈C^(M×z), which is extracted from R using randomized uniform column sampling, and B∈C^(z×z) is a weight matrix reducing the approximation error. Relying on such approximation, we are able to accelerate the subspace computation by orders of magnitude without compromising estimation accuracy. Furthermore, we derive a theoretical error bound for the suggested scheme to ensure the accuracy of the approximation. As validated by the simulation results, the DOA estimation accuracy of the proposed algorithm, efficient multiple signal classification (E-MUSIC), is high, closely tracks standard MUSIC, and outperforms the well-known algorithms with tremendously reduced time complexity. Thus, the devised method can realize high-resolution real-time target detection in the emerging multiple-input multiple-output (MIMO) automotive radar systems.
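A minimal numerical sketch of the sketching idea, assuming a sample covariance R of an M-element half-wavelength ULA: sample z columns uniformly, orthonormalize them to obtain Q, form the weight matrix B = Q^H R Q, and run a MUSIC-style spectrum on the approximate signal subspace. This illustrates the approach, not the paper's exact E-MUSIC implementation.

```python
import numpy as np

def randomized_subspace(R: np.ndarray, z: int, seed: int = 0):
    """Approximate R (M x M) by Q B Q^H using z uniformly sampled columns."""
    rng = np.random.default_rng(seed)
    M = R.shape[0]
    idx = rng.choice(M, size=z, replace=False)     # uniform column sampling
    C = R[:, idx]                                  # sketch matrix C (M x z)
    Q, _ = np.linalg.qr(C)                         # orthonormal basis of range(C)
    B = Q.conj().T @ R @ Q                         # weight matrix (z x z)
    return Q, B

def sketched_music_spectrum(R, K, z, angles_deg, d=0.5):
    """MUSIC pseudo-spectrum computed from the sketched subspace
    (an illustrative stand-in for E-MUSIC; K = number of targets)."""
    M = R.shape[0]
    Q, B = randomized_subspace(R, z)
    w, V = np.linalg.eigh(B)
    Us = Q @ V[:, np.argsort(w)[::-1][:K]]         # approximate signal subspace
    P = np.eye(M) - Us @ Us.conj().T               # noise-subspace projector
    spec = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))
        spec.append(1.0 / np.real(a.conj() @ P @ a))
    return np.array(spec)

# Small demo: two sources at -5 and 20 degrees, 32-element ULA, 200 snapshots.
rng = np.random.default_rng(1)
M, N = 32, 200
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(np.radians([-5, 20]))))
X = A @ (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) \
    + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R = X @ X.conj().T / N
grid = np.linspace(-90, 90, 721)
spec = sketched_music_spectrum(R, K=2, z=8, angles_deg=grid)
print(round(float(grid[np.argmax(spec)]), 1))      # global peak near one true DOA
```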
The use of the Internet of Things (IoT) is expanding at an unprecedented scale in many critical applications due to the ability to interconnect and utilize a wide range of devices. In critical infrastructure domains like oil and gas supply, intelligent transportation, power grids, and autonomous agriculture, it is essential to guarantee the confidentiality, integrity, and authenticity of the data collected and exchanged. However, the limited resources coupled with the heterogeneity of IoT devices make it inefficient, or sometimes infeasible, to achieve secure data transmission using traditional cryptographic techniques. Consequently, designing a lightweight secure data transmission scheme is becoming essential. In this article, we propose a lightweight secure data transmission (LSDT) scheme for IoT environments. LSDT consists of three phases and utilizes an effective combination of symmetric keys and the Elliptic Curve Menezes-Qu-Vanstone (ECMQV) asymmetric key agreement protocol. We design the simulation environment and experiments to evaluate the performance of the LSDT scheme in terms of communication and computation costs. Security and performance analysis indicates that the LSDT scheme is secure, suitable for IoT applications, and performs better than other related security schemes.
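The cryptography package used below provides no ECMQV primitive, so this hedged sketch substitutes plain ECDH for the key-agreement step; the phase labels, session label, and associated data are illustrative assumptions, not the LSDT specification.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(own_private, peer_public, label=b"lsdt-session"):
    """Derive a 256-bit session key from an EC key agreement
    (ECDH here, standing in for ECMQV, which this library does not provide)."""
    shared = own_private.exchange(ec.ECDH(), peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=label).derive(shared)

# Phase 1: both sides hold EC key pairs.
device = ec.generate_private_key(ec.SECP256R1())
gateway = ec.generate_private_key(ec.SECP256R1())

# Phase 2: each side derives the same symmetric session key.
k_dev = derive_key(device, gateway.public_key())
k_gw = derive_key(gateway, device.public_key())
assert k_dev == k_gw

# Phase 3: authenticated symmetric encryption of the sensor payload.
nonce = os.urandom(12)
ciphertext = AESGCM(k_dev).encrypt(nonce, b"temperature=21.7C", b"device-42")
print(AESGCM(k_gw).decrypt(nonce, ciphertext, b"device-42"))
```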
Variable block-size motion estimation (ME) and disparity estimation (DE) are adopted in multi-view video coding (MVC) to achieve high coding efficiency. However, much higher computational complexity is also introduced into the coding system, which hinders the practical application of MVC. An efficient fast mode decision method using mode complexity is proposed to reduce the computational complexity. In the proposed method, mode complexity is first computed by using the spatial, temporal and inter-view correlation between the current macroblock (MB) and its neighboring MBs. Based on the observation that direct mode is highly likely to be the optimal mode, the mode complexity is always checked in advance to see whether it is below a predefined threshold, providing an efficient early-termination opportunity. If this early-termination condition is not met, the MBs are classified into three mode types according to the value of mode complexity, i.e., simple mode, medium mode and complex mode, to speed up the encoding process by reducing the number of variable block modes that need to be checked. Furthermore, for the simple and medium mode regions, the rate distortion (RD) cost of mode 16×16 in the temporal prediction direction is compared with that of the disparity prediction direction, to determine in advance whether the optimal prediction direction is the temporal one, thereby skipping unnecessary disparity estimation. Experimental results show that the proposed method reduces the computational load by 78.79% and the total bit rate by 0.07% on average, while incurring only a negligible loss of PSNR (about 0.04 dB on average), compared with the full mode decision (FMD) in the MVC reference software.
Based on the iterative bit-filling procedure, a computationally efficient bit and power allocation algorithm is presented. The algorithm improves the conventional bit-filling algorithms by maintaining only a subset of subcarriers for computation in each iteration, which reduces the complexity without any performance degradation. Moreover, a modified algorithm with even lower complexity is developed, and equal power allocation is introduced as an initial allocation to accelerate its convergence. Simulation results show that the modified algorithm achieves a considerable complexity reduction while causing only a minor drop in performance.
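For reference, a sketch of the conventional greedy bit-filling baseline that the proposed algorithms improve on: each iteration adds one bit to the subcarrier requiring the least incremental power. The SNR-gap value and channel gains are assumed example numbers; the paper's contribution is restricting the candidate set to a subset of subcarriers per iteration, which is not reproduced here.

```python
import numpy as np

def bit_filling(gains, total_bits, noise=1.0, gamma=9.8):
    """Greedy bit-filling: repeatedly add one bit to the subcarrier that needs
    the least extra power. gains: channel power gains |H_k|^2; gamma: SNR gap.
    Returns (bits, power) per subcarrier."""
    n = len(gains)
    bits = np.zeros(n, dtype=int)
    power = np.zeros(n)

    def req_power(b, g):                      # power needed for b bits on gain g
        return gamma * noise * (2.0 ** b - 1.0) / g

    for _ in range(total_bits):
        delta = np.array([req_power(bits[k] + 1, gains[k]) - power[k]
                          for k in range(n)])
        k = int(np.argmin(delta))             # cheapest incremental bit
        bits[k] += 1
        power[k] = req_power(bits[k], gains[k])
    return bits, power

bits, power = bit_filling(gains=np.array([2.0, 0.5, 1.2, 3.1]), total_bits=8)
print(bits, power.round(2))
```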
Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results were related to very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, and general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years, it has been possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed, and their respective applications are highlighted. Solved problems that were previously open are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
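A minimal example of the kind of algorithm these analyses started from: the (1+1)-EA on OneMax, whose expected optimisation time is Θ(n log n). The problem size and seed below are assumed example values.

```python
import random

def one_plus_one_ea(n=50, max_evals=100_000, seed=1):
    """(1+1)-EA on OneMax: flip each bit independently with probability 1/n and
    accept the offspring if it is at least as good. Returns the number of
    evaluations needed to reach the optimum (or None if the budget runs out)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    for t in range(1, max_evals + 1):
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
        fy = sum(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:
            return t
    return None

print("evaluations to optimum:", one_plus_one_ea())
```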
For future wireless communication systems, Power-Domain Non-Orthogonal Multiple Access (PD-NOMA) using an advanced receiver has been considered a promising radio access technology candidate. Power allocation plays an important role in the PD-NOMA system because it considerably affects the total throughput and the Geometric Mean User Throughput (GMUT) performance. However, most existing studies have not fully accounted for the computational complexity of the power allocation process when the User Terminals (UTs) move in a slow fading channel environment. To resolve this problem, a power allocation method is proposed that considerably reduces the search space of a Full Search Power (FSP) allocation algorithm. The initial power reallocation coefficients are set to start from the former optimal values by the proposed lemma before searching for the optimal power reallocation coefficients based on total throughput performance. The step size and correction granularity are adjusted within a much narrower power search range, while invalid power combinations may reasonably be discarded during the search process. The simulation results show that the proposed power reallocation scheme can greatly reduce the computational complexity while the total throughput and GMUT performance loss is no greater than 1.5% compared with the FSP algorithm.
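A toy sketch of the reduced-search idea for a two-user downlink NOMA pair: instead of scanning the full coefficient range as in FSP, only a narrow window around the previous optimum is searched as the channel fades slowly. The geometric-mean objective, window width, and channel values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def user_rates(alpha, p, g1, g2, n0=1.0):
    """Two-user downlink PD-NOMA rates: user 1 (strong) applies SIC, user 2
    (weak) treats user 1's signal as interference. alpha is user 1's power fraction."""
    r1 = np.log2(1 + alpha * p * g1 / n0)
    r2 = np.log2(1 + (1 - alpha) * p * g2 / (alpha * p * g2 + n0))
    return r1, r2

def reduced_search(prev_alpha, p, g1, g2, width=0.05, step=0.005):
    """Search only a narrow window around the previous optimum instead of the
    full (0, 1) range, mimicking the shrunken FSP search space under slow fading.
    The geometric-mean throughput is used here as a toy objective."""
    lo = max(step, prev_alpha - width)
    hi = min(1.0 - step, prev_alpha + width)
    grid = np.arange(lo, hi + 1e-9, step)
    scores = [np.sqrt(np.prod(user_rates(a, p, g1, g2))) for a in grid]
    return float(grid[int(np.argmax(scores))])

alpha = 0.25                                           # previous optimal coefficient
for g1, g2 in [(8.0, 1.0), (7.6, 1.1), (7.2, 1.2)]:    # slowly varying channel gains
    alpha = reduced_search(alpha, p=10.0, g1=g1, g2=g2)
    print(round(alpha, 3))
```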
The title complex is widely used as an efficient key component of Ziegler-Natta catalysts for the stereospecific polymerization of dienes to produce synthetic rubbers. However, the quantitative structure-activity relationship (QSAR) of this kind of complex is still not clear, mainly due to the difficulty of obtaining their geometric molecular structures through laboratory experiments. An alternative is quantum chemistry calculation, in which the conformational population must be determined. In this study, ten conformers of the title complex were obtained with the molecular dynamics conformational search function in Gabedit 2.4.8, and their geometry optimization and thermodynamics calculations were performed with a Sparkle/PM7 approach in MOPAC 2012. Their Gibbs free energies at 1 atm and 298.15 K were calculated. The population of the conformers was then calculated according to the Boltzmann distribution, indicating that one of the ten conformers has a dominant population of 77.13%.
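The Boltzmann-population step can be reproduced in a few lines; the Gibbs free energies below are hypothetical placeholders, not the values computed in the study.

```python
import numpy as np

R = 8.314462618e-3        # gas constant in kJ/(mol*K)
T = 298.15                # temperature in K

def boltzmann_population(gibbs_kj_per_mol):
    """Population of each conformer from its Gibbs free energy via the
    Boltzmann distribution: p_i proportional to exp(-(G_i - G_min)/(RT))."""
    g = np.asarray(gibbs_kj_per_mol, dtype=float)
    w = np.exp(-(g - g.min()) / (R * T))
    return w / w.sum()

# Hypothetical relative Gibbs free energies (kJ/mol) of ten conformers.
energies = [0.0, 3.0, 4.1, 5.2, 6.0, 6.7, 7.5, 8.8, 9.4, 10.3]
print((boltzmann_population(energies) * 100).round(2))   # populations in percent
```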
A nested linear array makes it possible to enhance localization resolution and achieve under-determined direction of arrival (DOA) estimation. In this paper, the traditional two-level nested linear array is improved to achieve more degrees of freedom (DOFs) and better angle estimation performance. Furthermore, a computationally efficient DOA estimation algorithm is proposed. The discrete Fourier transform (DFT) method is utilized to obtain coarse DOA estimates, and subsequently, fine DOA estimates are obtained by the spatial smoothing multiple signal classification (SS-MUSIC) algorithm. Compared to the SS-MUSIC algorithm, the proposed algorithm has the same estimation accuracy with lower computational complexity, because the coarse DOA estimates allow the range of the angle spectral search to be narrowed. In addition, the DFT method does not require the number of signals to be estimated in advance. Extensive simulation results verify the effectiveness of the proposed algorithm.
First, an explicit representation A^(2)_(T,S)=(GA+E)^(-1)G of the outer inverse A^(2)_(T,S) for a matrix A∈C^(m×n) with the prescribed range T and null space S is derived, which is simpler than the representation A^(2)_(T,S)=(GA+E)^(-1)G-V(UV)^(-2)UG proposed by Ji in 2005. Next, a new algorithm for computing the outer inverse A^(2)_(T,S), based on the improved representation A^(2)_(T,S)=(GA+E)^(-1)G and carried out through elementary operations on an appropriate partitioned matrix (GA, I_n; I_n, 0), is proposed and investigated. Then, the computational complexity of the introduced algorithm is analyzed in detail. Finally, two numerical examples are given to illustrate that this method is correct.
In this paper, a real-time computation method for control problems in differential-algebraic systems is presented. The errors of the method are estimated, and the relation between the sampling stepsize and the controlled errors is analyzed. A stability analysis is carried out for a model problem, and the stability region is plotted, which gives the range of sampling stepsizes for which the stability of the control process is guaranteed.
In this article, the inherent computational power of quantum entangled cluster states, examined through measurement-based quantum computation, is studied. By defining a common framework of rules for the measurement of quantum entangled cluster states based on classical computation, the precise and detailed meaning of the computing power of the correlations in quantum cluster states is established. This study exposes an interesting connection between the violation of local realistic models and the computing power of quantum entangled cluster states.
In this paper, a comparative study of kernel-PCA based linear parameter varying (LPV) model approximation for sufficiently nonlinear and reasonably practical systems is carried out. The number of linear matrix inequalities (LMIs) to be solved in the LPV controller design process increases exponentially with the number of scheduling variables. Fifteen kernel functions are used to obtain approximate LPV models of highly coupled nonlinear systems. An error-to-norm ratio of the original and approximate LPV models is introduced as a measure of accuracy of the approximate LPV model. Simulation examples confirm the effectiveness of kernel-PCA for LPV model approximation: with the identification of an accurate approximate LPV model, the computational complexity involved in LPV controller design decreases exponentially.
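A hedged sketch of the error-to-norm idea using scikit-learn's KernelPCA: scheduling-variable trajectories are projected and reconstructed, and ||X − X_rec||_F/||X||_F is reported per kernel. The paper's ratio compares the original and approximate LPV models themselves; the data and kernel choices here are synthetic assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def error_to_norm_ratio(X, kernel="rbf", n_components=2, gamma=None):
    """Fit kernel-PCA on scheduling-variable trajectories X (samples x vars),
    reconstruct them from the reduced components, and return the
    Frobenius-norm error-to-norm ratio as an accuracy measure."""
    kpca = KernelPCA(n_components=n_components, kernel=kernel, gamma=gamma,
                     fit_inverse_transform=True)
    Z = kpca.fit_transform(X)
    X_rec = kpca.inverse_transform(Z)
    return np.linalg.norm(X - X_rec) / np.linalg.norm(X)

# Synthetic, coupled scheduling signals as a stand-in for a real system.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, size=500)
X = np.column_stack([np.sin(t), np.cos(t), np.sin(t) * np.cos(t)])
for k in ["linear", "rbf", "poly"]:
    print(k, round(error_to_norm_ratio(X, kernel=k), 4))
```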
In this paper, we propose Triangular Code (TC), a new class of fountain code with near-zero redundancy and linear encoding and decoding computational complexities of O(Lk log k), where k is the packet batch size and L is the packet data length. Different from previous works, where the optimal performance of codes has been shown under asymptotic assumptions, TC enjoys near-zero redundancy even under non-asymptotic settings for a small-to-moderate number of packets. These features make TC suitable for practical implementation in battery-constrained devices in IoT, D2D and M2M network paradigms to achieve scalable reliability and minimize latency owing to its low decoding delay. TC is a non-linear code, which is encoded using simple shift and XOR addition operations and decoded using the simple back-substitution algorithm. Although it is a non-linear code at the packet level, it remains a linear code when atomized at the bit level. We use this property to show that the back-substitution decoder of TC is equivalent to the Belief Propagation (BP) decoder of LT code. Therefore, TC can benefit from the rich literature published on LT code to design efficient codes for various applications. Despite the equivalency between the decoders of TC and LT code, we show that compared to the state-of-the-art optimized LT code, TC reduces the redundancy of LT code by 68%-99% for k up to 1024.
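To illustrate the shift-and-XOR encoding and back-substitution decoding at the bit level, here is a toy upper-triangular construction that produces exactly k coded packets; it is not the rateless TC construction of the paper, only a demonstration of the substitution mechanics on packets represented as integers.

```python
def encode(packets):
    """Toy upper-triangular shift-and-XOR encoding: coded packet i is packet i
    XORed with increasingly shifted copies of the later packets."""
    k = len(packets)
    coded = []
    for i in range(k):
        c = 0
        for j in range(i, k):
            c ^= packets[j] << (j - i)
        coded.append(c)
    return coded

def decode(coded):
    """Back-substitution: recover the last packet directly, then peel its shifted
    contribution out of the earlier coded packets, working backwards."""
    k = len(coded)
    packets = [0] * k
    for i in range(k - 1, -1, -1):
        c = coded[i]
        for j in range(i + 1, k):
            c ^= packets[j] << (j - i)
        packets[i] = c
    return packets

data = [0b1011, 0b0110, 0b1110, 0b0001]     # four 4-bit packets (toy example)
assert decode(encode(data)) == data
print([bin(c) for c in encode(data)])
```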
Image Super-Resolution (SR) research has achieved great success with powerful neural networks. Deeper networks with more parameters improve the restoration quality but add computational complexity, which means more inference time is required, hindering image SR from practical usage. Noting the spatial distribution of objects in images, a two-stage local-object SR system is proposed, which consists of two modules: the object detection module and the SR module. First, You Only Look Once (YOLO), which is efficient in generic object detection tasks, is selected to detect the input images and obtain objects of interest, which are then fed into the SR module to output the corresponding High-Resolution (HR) sub-images. The computational power consumption of image SR is optimized by reducing the resolution of the input images. In addition, we establish a dataset, TrafficSign500, for our experiment. Finally, the performance of the proposed system is evaluated on several State-Of-The-Art (SOTA) YOLOv5 and SISR models. Results show that our system can achieve a tremendous computation improvement in image SR.
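A structural sketch of the two-stage pipeline with stand-in components: a stub detector in place of YOLOv5 and bicubic upsampling in place of the SISR network, so the flow is runnable without model weights. The box format and scale factor are assumptions.

```python
import torch
import torch.nn.functional as F

def detect_objects(image: torch.Tensor):
    """Stand-in detector. In the paper this is a YOLOv5 model returning boxes
    for objects of interest (e.g. traffic signs); here a fixed box is returned
    so the pipeline runs without weights. Box format: (x, y, width, height)."""
    h, w = image.shape[-2:]
    return [(w // 4, h // 4, w // 2, h // 2)]

def super_resolve(patch: torch.Tensor, scale: int = 4) -> torch.Tensor:
    """Stand-in SR module. The paper feeds crops to a SISR network; bicubic
    upsampling is used here purely as a placeholder."""
    return F.interpolate(patch, scale_factor=scale, mode="bicubic",
                         align_corners=False)

def two_stage_sr(image: torch.Tensor, scale: int = 4):
    """Run SR only on detected regions, keeping the heavy SR computation off the
    full-resolution frame."""
    outputs = []
    for (x, y, bw, bh) in detect_objects(image):
        crop = image[..., y:y + bh, x:x + bw]
        outputs.append(super_resolve(crop, scale))
    return outputs

frame = torch.rand(1, 3, 240, 320)                # low-resolution input frame
hr_patches = two_stage_sr(frame)
print([tuple(p.shape) for p in hr_patches])
```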
Appearance-based dynamic Hand Gesture Recognition (HGR) remains a prominent area of research in Human-Computer Interaction (HCI). Numerous environmental and computational constraints limit its real-time deployment. In addition, the performance of a model decreases as the subject's distance from the camera increases. This study proposes a 3D separable Convolutional Neural Network (CNN), considering the model's computational complexity and recognition accuracy. The 20BN-Jester dataset was used to train the model for six gesture classes. After achieving the best offline recognition accuracy of 94.39%, the model was deployed in real time while considering the subject's attention, the instant of performing a gesture, and the subject's distance from the camera. Despite being discussed in numerous research articles, the distance factor remains unresolved in real-time deployment, which leads to degraded recognition results. In the proposed approach, the distance calculation substantially improves the classification performance by reducing the impact of the subject's distance from the camera. Additionally, the feature extraction capability, degree of relevance, and statistical significance of the proposed model against other state-of-the-art models were validated using t-distributed Stochastic Neighbor Embedding (t-SNE), the Matthews Correlation Coefficient (MCC), and the McNemar test, respectively. We observed that the proposed model exhibits state-of-the-art outcomes and a comparatively high significance level.
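A sketch of the separable 3D building block such a network can be assembled from (PyTorch); the channel counts and clip size are assumed example values, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    """Depthwise-separable 3D convolution: a per-channel spatio-temporal
    convolution followed by a 1x1x1 pointwise convolution. Compared with a full
    Conv3d it cuts parameters roughly by the kernel volume, which is the kind of
    saving a separable 3D CNN for gesture clips relies on."""
    def __init__(self, in_ch, out_ch, kernel=(3, 3, 3)):
        super().__init__()
        pad = tuple(k // 2 for k in kernel)
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel, padding=pad, groups=in_ch)
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                     # x: (batch, channels, frames, H, W)
        return self.act(self.pointwise(self.depthwise(x)))

clip = torch.rand(2, 3, 16, 112, 112)         # batch of two 16-frame RGB clips
block = SeparableConv3d(3, 32)
print(block(clip).shape)                      # torch.Size([2, 32, 16, 112, 112])
```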
In this paper, we focus on the estimation of multiple parameters in bistatic Multiple-Input Multiple-Output (MIMO) radar systems, a crucial technique for target localization and imaging. Our research innovatively addresses the joint estimation of the Direction of Departure (DOD), Direction of Arrival (DOA), and Doppler frequency for incoherent targets. We propose a novel approach that significantly reduces computational complexity by utilizing the Temporal-Spatial Nested Sampling Model (TSNSM). Our methodology begins with a multi-linear mapping mechanism to efficiently eliminate unnecessary virtual Degrees of Freedom (DOFs) and reorganize the remaining ones. We then employ the Toeplitz matrix triple iteration reconstruction method, surpassing the traditional Temporal-Spatial Smoothing Window (TSSW) approach, to mitigate the single-snapshot effect and reduce computational demands. We further refine the high-dimensional ESPRIT algorithm for the joint estimation of DOD, DOA, and Doppler frequency, eliminating the need for additional parameter pairing. Moreover, we meticulously derive the Cramér-Rao Bound (CRB) for the TSNSM. This signal model allows for a second expansion of DOFs in the time and space domains, achieving high precision in target angle and Doppler frequency estimation with low computational complexity. The proposed algorithm is validated through simulations and is suitable for sparse-array MIMO radars with various structures, ensuring higher precision in parameter estimation with a lower complexity burden.
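For orientation, a sketch of the textbook one-dimensional ESPRIT on a ULA with simulated incoherent sources; the paper's high-dimensional variant with joint DOD/DOA/Doppler estimation and pairing-free processing is not reproduced here.

```python
import numpy as np

def esprit_doa(X, n_targets, d=0.5):
    """Standard ESPRIT for a ULA (element spacing d in wavelengths).
    X: M x N snapshot matrix. Returns DOA estimates in degrees."""
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    w, V = np.linalg.eigh(R)
    Us = V[:, np.argsort(w)[::-1][:n_targets]]      # signal subspace
    Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]          # rotational invariance between subarrays
    phases = np.angle(np.linalg.eigvals(Phi))
    return np.degrees(np.arcsin(phases / (2 * np.pi * d)))

# Two incoherent sources at -10 and 25 degrees observed by an 8-element ULA.
rng = np.random.default_rng(0)
M, N = 8, 200
doas = np.radians([-10.0, 25.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
print(np.sort(esprit_doa(X, 2)).round(2))           # approximately [-10., 25.]
```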
The unique shortest vector problem (uSVP) plays an important role in lattice-based cryptography, and many cryptographic schemes base their security on it. For the confidence of those applications, it is essential to clarify the complexity of uSVP with different parameters. However, proving the NP-hardness of uSVP appears quite hard; to the state of the art, we are not even able to prove the NP-hardness of uSVP with constant parameters. In this work, we give a lower bound for the hardness of uSVP with constant parameters, i.e., we prove that uSVP is at least as hard as the gap shortest vector problem (GapSVP) with a gap of O(√n/log(n)), which is in NP ∩ coAM. Unlike previous works, our reduction works for parameters in a bigger range, especially when the constant hidden by the big-O in GapSVP is smaller than 1.
A central problem in the study of complexity is the measure of nonuniform complexity classes. BPP ⊆ P/poly was proved by Adleman, and EXPSPACE ⊄ P/poly by Kannan. We propose the definition of approximate acceptance, with which we discuss the nonuniform complexity of the K-sized complete subgraph problem. The method of model theory is used, and it is proved that the K-sized complete subgraph problem ∈ P/poly, co-NP ⊆ P/poly and NP ⊆ P/poly. This paper solves Karp and Lipton's open problem: "Is NP ⊆ P/poly?"
The theory of parameterized computation and complexity is a recently developed subarea in theoretical computer science. The theory is aimed at practically solving a large number of computational problems that are theoretically intractable. It is based on the observation that many intractable computational problems in practice are associated with a parameter that varies within a small or moderate range. Therefore, by taking advantage of the small parameters, many theoretically intractable problems can be solved effectively and practically. On the other hand, the theory of parameterized computation and complexity has also offered powerful techniques that enable us to derive strong computational lower bounds for many computational problems, thus explaining why certain theoretically tractable problems cannot be solved effectively and practically. The theory has found wide applications in areas such as database systems, programming languages, networks, VLSI design, parallel and distributed computing, computational biology, and robotics. This survey gives an overview of the fundamentals, algorithms, techniques, and applications developed in the research of parameterized computation and complexity. We also report the most recent advances and exciting developments, and discuss further research directions in the area.
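A canonical example of exploiting a small parameter, of the kind this survey covers: the bounded-search-tree algorithm deciding k-vertex-cover in O(2^k · m) time. This is a standard textbook algorithm, not one specific to the survey; the edge list is an assumed example.

```python
def vertex_cover(edges, k):
    """Bounded-search-tree FPT algorithm: decide whether the graph has a vertex
    cover of size at most k. For any uncovered edge (u, v), branch on putting
    u or v into the cover, giving O(2^k * m) time. Returns a cover or None."""
    uncovered = list(edges)
    if not uncovered:
        return set()
    if k == 0:
        return None
    u, v = uncovered[0]
    for chosen in (u, v):
        rest = [(a, b) for (a, b) in uncovered if chosen not in (a, b)]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {chosen}
    return None

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(vertex_cover(edges, k=2))      # e.g. {1, 4} covers every edge
print(vertex_cover(edges, k=1))      # None: no cover of size 1 exists
```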