In this paper, we use a cellular automaton model to simulate the earthquake process and draw some conclusions of general applicability. First, it is confirmed that the earthquake process has some ordering characteristics, and it is shown that both the existence of faults and their mutual arrangement can obviously influence the overall characteristics of the earthquake process. Then the characteristics of each stage of the model evolution are explained with self-organized critical state theory. Finally, earthquake sequences produced by the models are analysed in terms of algorithmic complexity, and the results show that AC values of algorithmic complexity can be used to study earthquake process and evolution.
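For readers unfamiliar with this class of models, the sketch below implements a minimal sandpile-style cellular automaton (the classical self-organized-criticality toy model) and records avalanche sizes as a synthetic earthquake sequence. It is only an illustration of the general idea; the authors' specific automaton, fault geometry, and AC-value analysis are not reproduced here, and all parameter values are placeholders.

```python
import numpy as np

def sandpile_earthquakes(n=50, steps=20000, threshold=4, seed=0):
    """Minimal sandpile-style automaton: drive a random cell, topple cells at
    or above the threshold onto their four neighbours (stress leaves the grid
    at the open boundary), and record each avalanche size as one event."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((n, n), dtype=int)
    events = []
    for _ in range(steps):
        i, j = rng.integers(n), rng.integers(n)
        grid[i, j] += 1
        size = 0
        unstable = [(i, j)] if grid[i, j] >= threshold else []
        while unstable:
            a, b = unstable.pop()
            if grid[a, b] < threshold:
                continue
            grid[a, b] -= threshold
            size += 1
            if grid[a, b] >= threshold:        # may need to topple again
                unstable.append((a, b))
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                x, y = a + da, b + db
                if 0 <= x < n and 0 <= y < n:
                    grid[x, y] += 1
                    if grid[x, y] >= threshold:
                        unstable.append((x, y))
        if size:
            events.append(size)                # avalanche size ~ "event" size
    return events

print(len(sandpile_earthquakes()), "synthetic events")
```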
Complex-valued neural networks (CVNNs) have shown excellent efficiency compared to their real counterparts in speech enhancement, image processing and signal processing. Researchers have made many efforts over the years to improve the learning algorithms and activation functions of CVNNs. Since CVNNs have proven to perform better when handling naturally complex-valued data and signals, this area of study will keep growing, and further effective improvements can be expected in the future. There is therefore a clear need for a comprehensive survey that systematically collects and categorizes the advances in CVNNs. In this paper, we discuss and summarize the recent advances in terms of learning algorithms, activation functions (the most challenging part of building a CVNN) and applications. Besides, we outline the structure and applications of complex-valued convolutional, residual and recurrent neural networks. Finally, we also present some challenges and future research directions to facilitate further exploration of the capabilities of CVNNs. Funding: partially supported by JSPS KAKENHI (JP22H03643, JP19K22891).
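As a concrete, deliberately minimal illustration of what a complex-valued layer looks like, the NumPy sketch below builds a dense layer with complex weights and a split-type activation, one common design choice in the CVNN literature. It is not taken from any specific paper surveyed here; the layer sizes and initialization are illustrative assumptions.

```python
import numpy as np

def split_tanh(z):
    # "Split"-type activation: apply a real activation separately to the
    # real and imaginary parts (one common choice in the CVNN literature).
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

class ComplexDense:
    """A single fully connected layer with complex-valued weights and bias."""
    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = (rng.standard_normal((n_out, n_in))
                  + 1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2 * n_in)
        self.b = np.zeros(n_out, dtype=complex)

    def __call__(self, z):
        return split_tanh(self.W @ z + self.b)

# Forward pass on a toy complex-valued input vector.
layer = ComplexDense(n_in=4, n_out=2)
z = np.array([1 + 2j, 0.5 - 1j, -0.3 + 0.1j, 2 + 0j])
print(layer(z))
```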
This paper presents a new tree sorting algorithm whose average time complexity is much better than that of sorting methods based on AVL-trees or other balanced trees. Experiments show that our algorithm is much faster than the sorting methods using AVL-trees or other balanced trees.
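For context, the baseline idea of tree sorting is shown below: insert every key into a binary search tree and read the keys back with an in-order traversal. This is only the generic baseline that AVL- and other balanced-tree sorters build on, not the new algorithm proposed in the paper (which the abstract does not specify).

```python
def tree_sort(keys):
    """Insert every key into an (unbalanced) binary search tree, then return
    the keys via an in-order traversal. It degrades to O(n^2) on sorted input,
    which is what balanced-tree variants avoid at the cost of rebalancing."""
    def insert(node, k):
        if node is None:
            return [k, None, None]            # [key, left subtree, right subtree]
        if k < node[0]:
            node[1] = insert(node[1], k)
        else:
            node[2] = insert(node[2], k)
        return node

    def inorder(node, out):
        if node is not None:
            inorder(node[1], out)
            out.append(node[0])
            inorder(node[2], out)
        return out

    root = None
    for k in keys:
        root = insert(root, k)
    return inorder(root, [])

print(tree_sort([5, 1, 4, 2, 3]))             # [1, 2, 3, 4, 5]
```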
Considering that the probability distribution of random variables in stochastic programming usually carries incomplete information, owing to the lack of perfect sample data in many real applications, this paper discusses a class of two-stage stochastic programming problems modelled with the maximum-minimum expectation compensation criterion (MaxEMin) under a probability distribution having linear partial information (LPI). In view of the nondifferentiability of this kind of stochastic programming model, an improved complex algorithm is designed and analyzed. This algorithm can effectively solve the nondifferentiable stochastic programming problem under LPI through variable polyhedron iteration. Calculations and discussion of numerical examples show the effectiveness of the proposed algorithm.
This paper provides an algorithm for distribution search and proves the algorithm's time complexity. The algorithm uses a mathematical formula to search n elements in a sequence of n elements in O(n) expected time, and experimental results show that distribution search is superior to binary search.
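The abstract does not give the search formula itself, so the sketch below uses the classical interpolation-search idea, estimating the position of a key from the value distribution instead of always bisecting, purely as a stand-in illustration of distribution-guided search.

```python
def distribution_search(a, key):
    """Search a sorted list by estimating the key's position from the value
    distribution (classical interpolation search) instead of always bisecting."""
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= key <= a[hi]:
        if a[hi] == a[lo]:
            pos = lo                           # all remaining values are equal
        else:
            # Position estimate assuming roughly uniformly distributed keys.
            pos = lo + (hi - lo) * (key - a[lo]) // (a[hi] - a[lo])
        if a[pos] == key:
            return pos
        if a[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1                                  # not found

a = [3, 8, 15, 21, 21, 34, 55, 89]
print(distribution_search(a, 34))              # 5
```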
In this paper, we establish the polynomial complexity of a primal-dual path-following interior point algorithm for solving semidefinite optimization (SDO) problems. The proposed algorithm is based on a new kernel function which differs from the existing kernel functions in that it has a double barrier term. With this function we define a new search direction and also a new proximity function for analyzing its complexity. We show that if q1 > q2 > 1, the algorithm has O((q1 + 1) n^((q1+1)/(2(q1-q2))) log(n/ε)) and O((q1 + 1)^2 ((3q1 - 2q2 + 1)/(2(q1 - q2))) √n log(n/ε)) complexity results for large- and small-update methods, respectively.
In this paper, we have proved that the lower bound on the number of real multiplications for computing a length-2^t real GFT(a,b) (a = ±1/2, b = 0, or b = ±1/2, a = 0) is 2^(t+1) - 2t - 2, and that for computing a length-2^t real GFT(a,b) (a = ±1/2, b = ±1/2) it is 2^(t+1) - 2. Practical algorithms which meet these lower bounds on multiplications are given.
A PL homotopy algorithm is modified to yield a polynomial-time result on its computational complexity. We prove that the cost of locating all zeros of a polynomial of degree n to an accuracy of ε (measured by the number of evaluations of the polynomial) grows no faster than O(max{n^4, n^3 log_2(n/ε)}). This work is in response to a question raised in a paper by S. Smale as to the efficiency of piecewise linear methods in solving equations. In comparison with the few results reported, the algorithm under discussion is the only one providing correct multiplicities and the only one employing vector labelling. Funding: supported in part by the Foundation of the Zhongshan University Advanced Research Centre and in part by the National Natural Science Foundation of China.
Computational Social Choice is an interdisciplinary research area involving Economics, Political Science, and Social Science on the one side, and Mathematics and Computer Science (including Artificial Intelligence and Multiagent Systems) on the other side. Typical computational problems studied in this field include the vulnerability of voting procedures against attacks, or preference aggregation in multi-agent systems. Parameterized Algorithmics is a subfield of Theoretical Computer Science seeking to exploit meaningful problem-specific parameters in order to identify tractable special cases of problems that are computationally hard in general. In this paper, we propose nine of our favorite research challenges concerning the parameterized complexity of problems appearing in this context. This work is dedicated to Jianer Chen, one of the strongest problem solvers in the history of parameterized algorithmics, on the occasion of his 60th birthday. Funding: supported by the Deutsche Forschungsgemeinschaft, project PAWS (NI 369/10); the Studienstiftung des Deutschen Volkes; the DFG Cluster of Excellence "Multimodal Computing and Interaction"; DIAMANT (a mathematics cluster of the Netherlands Organization for Scientific Research, NWO); and the Alexander von Humboldt Foundation, Bonn, Germany.
The fading factor plays a significant role in the strong tracking idea. However, the traditional way of introducing the fading factor limits the accuracy and robustness of current strong-tracking-based nonlinear filtering algorithms such as the Cubature Kalman Filter (CKF), since it considers only the first-order Taylor expansion. To this end, a new fading factor scheme is suggested and introduced into the strong tracking CKF method. The new scheme expands the number of fading factors from one to two, with reselected introduction positions. The relationship between the two fading factors, as well as a general calculation method, can be derived from the Taylor expansion. The superiority of the newly suggested introduction method is demonstrated for different degrees of nonlinearity of the measurement function. An equivalent calculation method can also be established when the scheme is applied to the CKF. Theoretical analysis shows that the resulting strong tracking CKF can extract the third-order term information from the residual and thus achieve second-order accuracy. After optimizing the strong tracking algorithm flow, a Fast Strong Tracking CKF (FSTCKF) is finally established. Two simulation examples show that the novel FSTCKF improves the robustness of the traditional CKF while minimizing the algorithm's time complexity under various conditions. Funding: supported by the National Natural Science Foundation of China (No. 61573283).
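To make the role of the fading factor concrete, the sketch below shows the classical place where a single factor λ ≥ 1 enters a linear Kalman filter, inflating the predicted covariance so the filter weights fresh measurements more heavily. This is only a textbook-style illustration under simplifying assumptions; the paper's method uses a CKF, two fading factors, and different introduction positions, none of which are reproduced here.

```python
import numpy as np

def strong_tracking_step(x, P, F, H, Q, R, z, lam=1.0):
    """One predict/update step of a linear Kalman filter with a single fading
    factor lam >= 1 inflating the predicted covariance (the classical entry
    point of a fading factor in strong tracking filters)."""
    x_pred = F @ x
    P_pred = lam * (F @ P @ F.T) + Q           # fading factor enters here
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: a constant-position model with an abrupt jump that the inflated
# covariance helps the filter re-acquire quickly.
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
x, P = np.array([0.0]), np.array([[1.0]])
for z in [0.1, 0.0, 5.0, 5.1, 4.9]:
    x, P = strong_tracking_step(x, P, F, H, Q, R, np.array([z]), lam=1.5)
print(x)
```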
With soaring operating frequencies and shrinking feature sizes, VLSI circuits with RLC parasitic components behave more like analog circuits and should be analyzed carefully in physical design. However, the number of extracted RLC components is typically too large to be analyzed efficiently by present analog circuit simulators such as SPICE. In order to speed up the simulations without an error penalty, this paper proposes a novel methodology to compress the time-discretized circuits resulting from the numerical integration approximation at every time step. The main contribution of the methodology is the efficient structure-level compression of DC circuits containing many current sources, which is an important complement to present circuit analysis theory. The methodology consists of the following parts: 1) an approach to delete all intermediate nodes of RL branches; 2) an efficient, error-free approach of linear complexity to compress and back-solve parallel and serial branches, so that circuits of tree topology can be analyzed; 3) the Y-to-π transformation method, used to reduce and back-solve the intermediate nodes of ladder circuits error-free with linear complexity. Thus, the whole simulation method is very accurate and of linear complexity for analyzing circuits of chain topology. Based on this methodology, we propose several novel algorithms for efficiently solving RLC-model transient power/ground (P/G) networks. Among them, the linear-complexity EQU-ADI algorithm is proposed to solve RLC P/G networks with mesh-tree or mesh-chain topologies. Experimental results show that the proposed method is at least two orders of magnitude faster than SPICE while scaling linearly in both time and memory complexity to solve very large P/G networks. Funding: supported by the National Natural Science Foundation of China (Grant No. 60476014), the State "973" Key Basic Research Program (Grant No. 2005CB321604), and the UC Senate Research Fund.
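As a simplified illustration of the branch-compression idea, reduced here to plain resistive branches (the paper handles RLC branches with current sources at every time step), the sketch below collapses series and parallel branches error-free, which is what makes tree- and ladder-shaped sub-networks solvable in linear time.

```python
def series(r1, r2):
    # Two branches in series collapse to one equivalent branch.
    return r1 + r2

def parallel(r1, r2):
    # Two branches in parallel collapse to one equivalent branch.
    return r1 * r2 / (r1 + r2)

# A ladder of 1-ohm resistors reduced error-free from the far end toward the
# source: each step removes one intermediate node in constant time, so the
# whole chain is compressed (and can later be back-solved) in linear time.
r = 1.0
for _ in range(5):
    r = series(1.0, parallel(1.0, r))
print(r)   # equivalent input resistance of the ladder
```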
A distribution theory of the roots of a polynomial, and a parallel algorithm for finding roots of a complex polynomial based on that theory, are developed in this paper. With high parallelism, the algorithm is an improvement over the Wilf algorithm.
The Travelling Salesman Problem (TSP) is one of the most difficult problems studied by many scholars all over the world. This paper points out the disparity between the definition and classical solutions of TSP on the one hand and its practical applications on the other, and then presents a new definition of TSP together with an effective algorithm conforming to practical applications, thus making TSP practically more valuable.
The model of laminated wave turbulence puts forth a novel computational problem: the construction of fast algorithms for finding exact solutions of Diophantine equations in integers of order 10^(12) and more. The equations to be solved in integers are the resonance conditions for nonlinearly interacting waves, and their form is defined by the wave dispersion. It is established that for the most common dispersion, given as an arbitrary function of the wave-vector length, two different generic algorithms are necessary: (1) a one-class-case algorithm for waves interacting through scales, and (2) a two-class-case algorithm for waves interacting through phases. In our previous paper we described the one-class-case generic algorithm, and in the present paper we present the two-class-case generic algorithm.
We analyze a common feature of p-Kemeny AGGregation (p-KAGG) and p-One-Sided Crossing Minimization (p-OSCM) to provide new insights and findings of interest to both the graph drawing community and the social choice community. We obtain parameterized subexponential-time algorithms for p-KAGG, a problem in social choice theory, and for p-OSCM, a problem in graph drawing. These algorithms run in time O*(2^(O(√k log k))), where k is the parameter, and significantly improve on the previous best algorithms with running times O(1.403^k) and O(1.4656^k), respectively. We also study natural "above-guarantee" versions of these problems and show them to be fixed-parameter tractable. In fact, we show that the above-guarantee versions of these problems are equivalent to a weighted variant of p-Directed Feedback Arc Set. Our results for the above-guarantee version of p-KAGG reveal an interesting contrast. We show that when the number of "votes" in the input to p-KAGG is odd, the above-guarantee version can still be solved in time O*(2^(O(√k log k))), while if it is even then the problem cannot have a subexponential-time algorithm unless the exponential time hypothesis fails (equivalently, unless FPT = M[1]). Funding: supported by a German-Norwegian PPP grant and by the Indo-German Max Planck Center for Computer Science (IMPECS).