Funding: This work is supported by the Natural Science Foundation, China (Grant No. 61802002) and the Natural Science Foundation of Anhui Province, China (Grant No. 1708085MF162).
Abstract: The Quantum Approximate Optimization Algorithm (QAOA) is an algorithmic framework for finding approximate solutions to combinatorial optimization problems. It consists of interleaved unitary transformations induced by two operators labelled the mixing and problem Hamiltonians. To fit this framework, one needs to transform the original problem into a suitable form and embed it into these two Hamiltonians. In this paper, for the well-known NP-hard Traveling Salesman Problem (TSP), we encode its constraints into the mixing Hamiltonian rather than the conventional approach of adding penalty terms to the problem Hamiltonian. Moreover, we map edges (routes) connecting each pair of cities to qubits, which decreases the search space significantly in comparison to other approaches. As a result, our method can achieve a higher probability for the shortest round-trip route with only half the number of qubits consumed compared to IBM Q's approach. We argue the formalization approach presented in this paper would lead to a generalized framework for finding, in the context of QAOA, high-quality approximate solutions to NP optimization problems.
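The edge-to-qubit encoding described in this abstract can be illustrated classically. The sketch below, a toy Python illustration with made-up distances and hypothetical helper names (tour_cost, degrees_ok), assigns one binary variable per undirected edge and evaluates the tour length together with the degree-2 constraint that the mixing Hamiltonian is meant to preserve; it is not the authors' QAOA circuit.

```python
# Toy illustration (not the paper's implementation): map each undirected edge
# (i, j) of a 4-city instance to one binary variable x[(i, j)]. For 4 cities,
# "every city has degree exactly 2" selects exactly the round trips.
dist = {
    (0, 1): 2.0, (0, 2): 9.0, (0, 3): 3.0,
    (1, 2): 4.0, (1, 3): 8.0, (2, 3): 5.0,
}
edges = list(dist)

def tour_cost(x):
    """Total length of the selected edges (the problem-Hamiltonian value)."""
    return sum(dist[e] for e in edges if x[e] == 1)

def degrees_ok(x, n_cities=4):
    """Constraint the mixing Hamiltonian preserves: every city has degree 2."""
    deg = [0] * n_cities
    for (i, j) in edges:
        if x[(i, j)] == 1:
            deg[i] += 1
            deg[j] += 1
    return all(d == 2 for d in deg)

# Brute-force over the 2^|E| edge assignments to find the shortest round trip.
best = None
for bits in range(1 << len(edges)):
    x = {e: (bits >> k) & 1 for k, e in enumerate(edges)}
    if degrees_ok(x):
        cost = tour_cost(x)
        if best is None or cost < best[0]:
            best = (cost, x)
print(best)
```

With one variable per edge the encoding uses n(n-1)/2 binary variables for n cities, which is where the reduction in qubit count for this style of encoding comes from.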
Funding: Supported by the China Postdoctoral Science Foundation (2019M650981) and the Shandong Provincial Natural Science Foundation, China (ZR2018MG003).
Abstract: In the era of big data, correlation analysis is significant because it can quickly detect correlations between factors, and it has therefore received much attention. Owing to its good properties of generality and equitability, the maximal information coefficient (MIC) is a hotspot in correlation analysis research. However, if the original approximate algorithm for MIC is applied directly to mining correlations in big data, the computation time is very long. In this paper, the theoretical time complexity of the original approximate algorithm is analyzed in depth; it is n^2.4 when the parameters take their default values. The experiments show that the large number of candidate partitions of random relationships results in the long computation time. This analysis is a good preparation for the next step of designing new, fast algorithms.
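As a rough illustration of where the cost comes from, the sketch below (a simplified stand-in, not the original MIC approximation algorithm analyzed here) computes the mutual information of a single x-by-y grid placed over the data with numpy; MIC maximizes a normalized version of this score over many candidate grid partitions, and it is that large number of candidate partitions which drives the running time discussed above.

```python
import numpy as np

def grid_mutual_information(x, y, bins_x, bins_y):
    """Mutual information (bits) of one equal-frequency grid over (x, y).

    MIC searches over many such grids and keeps the best normalized score;
    this helper evaluates just a single candidate partition.
    """
    qx = np.quantile(x, np.linspace(0, 1, bins_x + 1))  # equal-frequency bin edges
    qy = np.quantile(y, np.linspace(0, 1, bins_y + 1))
    joint, _, _ = np.histogram2d(x, y, bins=[qx, qy])
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (px * py))
    return np.nansum(terms)                              # 0 * log 0 treated as 0

rng = np.random.default_rng(0)
x = rng.uniform(size=1000)
y = np.sin(4 * np.pi * x) + 0.1 * rng.normal(size=1000)  # noisy functional relationship
print(grid_mutual_information(x, y, bins_x=8, bins_y=8))
```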
Abstract: This paper proposes two kinds of approximate proximal point algorithms (APPA) for monotone variational inequalities, both of which can be viewed as extended versions of Solodov and Svaiter's APPA in the paper "Error bounds for proximal point subproblems and associated inexact proximal point algorithms" published in 2000. They are both prediction-correction methods which use the same inexactness restriction; the only difference is that they use different search directions in the correction steps. This paper also chooses an optimal step size in the two versions of the APPA to improve the profit at each iteration. Analysis also shows that the two APPAs are globally convergent under appropriate assumptions, and we can expect algorithm 2 to make more progress in every iteration than algorithm 1. Numerical experiments indicate that algorithm 2 is more efficient than algorithm 1 with the same correction step size.
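For readers unfamiliar with the prediction-correction template, the sketch below implements the classical extragradient method for a monotone variational inequality over the nonnegative orthant: a prediction step, then a correction step that reuses the operator value at the predicted point. It is a textbook baseline on assumed problem data, not Solodov and Svaiter's APPA or either of the two extensions proposed in this paper.

```python
import numpy as np

# Monotone affine operator F(x) = M x + q (skew-symmetric part + PSD part).
M = np.array([[0.0, 1.0], [-1.0, 0.0]]) + np.diag([1.0, 2.0])
q = np.array([-1.0, -2.0])

def F(x):
    return M @ x + q

def project(x):
    """Projection onto the feasible set, here the nonnegative orthant."""
    return np.maximum(x, 0.0)

def extragradient(x, step=0.2, iters=200):
    for _ in range(iters):
        x_pred = project(x - step * F(x))      # prediction step
        x = project(x - step * F(x_pred))      # correction step
    return x

x_star = extragradient(np.array([5.0, 5.0]))
# At a solution: x* >= 0, F(x*) >= 0 and x* . F(x*) = 0 (complementarity).
print(x_star, F(x_star))
```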
Funding: Supported both by the Teaching and Research Award Fund for Outstanding Young Teachers in Higher Educational Institutions of MOE, China, and by the Dawn Program Fund in Shanghai.
Abstract: In order to find roots of maximal monotone operators, this paper introduces and studies the modified approximate proximal point algorithm with an error sequence {e^k} such that ‖e^k‖ ≤ η_k ‖x^k − x̃^k‖, with Σ_{k=0}^∞ (η_k − 1) < +∞ and inf_{k≥0} η_k = μ ≥ 1. Here, the restrictions on {η_k} are very different from the ones given by He et al. (Science in China Ser. A, 2002, 32(11): 1026–1032), namely sup_{k≥0} η_k = ν < 1. Moreover, the characteristic conditions for the convergence of the modified approximate proximal point algorithm are presented by virtue of a new technique very different from the ones given by He et al.
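To make the error criterion concrete, the following sketch runs a proximal point iteration on the maximal monotone operator T = ∂‖·‖₁, whose resolvent is the soft-thresholding map, perturbs each exact resolvent by a simulated inexactness e^k, and checks the relative-error test ‖e^k‖ ≤ η_k‖x^k − x̃^k‖ stated above. The choice of η_k and the error model are assumptions for illustration only; none of the paper's convergence analysis is reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(v, lam):
    """Exact resolvent (I + lam * T)^(-1) for T = subdifferential of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

x, lam = np.array([4.0, -3.0, 2.5]), 0.5
for k in range(30):
    eta_k = 1.0 + 1.0 / (k + 1) ** 2        # inf eta_k >= 1 and sum(eta_k - 1) finite
    x_tilde = soft_threshold(x, lam)        # exact proximal step
    gap = np.linalg.norm(x - x_tilde)
    # Simulated inexactness: any e^k with ||e^k|| <= eta_k * ||x^k - x_tilde^k|| is admissible.
    e = rng.normal(size=x.shape)
    e = 0.1 * eta_k * gap * e / (np.linalg.norm(e) + 1e-12)
    assert np.linalg.norm(e) <= eta_k * gap + 1e-12
    x = x_tilde + e                         # inexact update
print(x)                                    # iterates settle near 0, a root of T
```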
Abstract: A novel approach that integrates occlusion culling within the view-dependent rendering framework is proposed. The algorithm uses the prioritized-layered projection (PLP) algorithm to cull occluded objects, and uses an approximate visibility technique to accurately and efficiently determine which objects will be visible in the near future and to prefetch those objects from disk before they are rendered. The view-dependent rendering technique provides the ability to change the level of detail over the surface seamlessly and smoothly in real time according to cell solidity values.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 11371001.
Abstract: This paper considers the priority facility location problem with penalties: the authors develop a primal-dual 3-approximation algorithm for this problem. Combining with the greedy augmentation procedure, the authors further improve the previous ratio 3 to 1.8526.
Funding: Supported by the National Natural Science Foundation of China (60403027).
Abstract: This paper presents two one-pass algorithms for dynamically computing frequency counts in a sliding window over a data stream, that is, computing the frequency counts exceeding a user-specified threshold ε. The first algorithm constructs sub-windows and periodically deletes expired sub-windows in the sliding window, and each sub-window maintains a summary data structure. The first algorithm outputs at most 1/ε + 1 elements for frequency queries over the most recent N elements. The second algorithm adopts a multi-level method to deal with the data stream. Once the sketch of the most recent N elements has been constructed, the second algorithm can provide answers to frequency queries over the most recent n (n ≤ N) elements. The second algorithm outputs at most 1/ε + 2 elements. The analytical and experimental results show that our algorithms are accurate and effective.
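A minimal way to see the sub-window idea is sketched below: the window of the most recent N items is split into fixed-size sub-windows, each holding its own count summary, and expired sub-windows are dropped as new ones are closed. The class name and parameters are invented for illustration, and exact Counters stand in for the paper's ε-bounded summaries, so only the bookkeeping is shown, not the space guarantee.

```python
from collections import Counter, deque

class SlidingWindowCounts:
    """Window of roughly the most recent N items, split into sub-windows of size B."""

    def __init__(self, window_size, sub_window_size):
        self.N = window_size
        self.B = sub_window_size
        self.subs = deque()              # one Counter per closed sub-window
        self.current = Counter()         # summary of the sub-window being filled
        self.current_len = 0

    def insert(self, item):
        self.current[item] += 1
        self.current_len += 1
        if self.current_len == self.B:   # close the current sub-window
            self.subs.append(self.current)
            self.current, self.current_len = Counter(), 0
        # Expire the oldest sub-window once more than N items are summarized.
        while len(self.subs) * self.B + self.current_len > self.N:
            self.subs.popleft()

    def frequent(self, eps):
        """Items whose count over the window exceeds eps * N."""
        total = sum(self.subs, self.current)
        return {x: c for x, c in total.items() if c > eps * self.N}

w = SlidingWindowCounts(window_size=100, sub_window_size=10)
for i in range(1000):
    w.insert("a" if i % 3 == 0 else f"x{i}")
print(w.frequent(eps=0.2))
```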
Funding: Supported by the National Natural Science Foundation of China (19971078).
Abstract: The multiple knapsack problem, denoted by MKP(B, S, m, n), can be defined as follows. A set B of n items and a set S of m knapsacks are given such that each item j has a profit p_j and weight w_j, and each knapsack i has a capacity C_i. The goal is to find a subset of items of maximum profit such that they have a feasible packing in the knapsacks. MKP(B, S, m, n) is strongly NP-complete and no polynomial-time approximation algorithm can have an approximation ratio better than 0.5. In the last ten years, semidefinite programming has been employed to solve some combinatorial problems successfully. This paper first presents a semidefinite relaxation algorithm (MKPS) for MKP(B, S, m, n). It is proved that MKPS has an approximation ratio better than 0.5 for a subclass of MKP(B, S, m, n) with n ≤ 100, m ≤ 5 and max_{1≤j≤n} w_j / min_{1≤i≤m} C_i ≤ 2/3.
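For contrast with the semidefinite relaxation, the sketch below is the simple greedy baseline for MKP(B, S, m, n): items are considered in non-increasing profit density and placed first-fit into the knapsacks. It is a classical heuristic on made-up data, not the MKPS algorithm analyzed in the paper.

```python
def greedy_mkp(items, capacities):
    """items: list of (profit, weight); capacities: list of knapsack capacities.

    Greedy by profit/weight density with first-fit placement; returns the packed
    profit and an assignment item_index -> knapsack_index.
    """
    remaining = list(capacities)
    order = sorted(range(len(items)),
                   key=lambda j: items[j][0] / items[j][1], reverse=True)
    assignment, profit = {}, 0
    for j in order:
        p, w = items[j]
        for i, cap in enumerate(remaining):
            if w <= cap:                 # first knapsack that still fits item j
                remaining[i] -= w
                assignment[j] = i
                profit += p
                break
    return profit, assignment

items = [(10, 5), (8, 4), (7, 3), (6, 6), (3, 2)]   # (profit, weight)
capacities = [7, 6]
print(greedy_mkp(items, capacities))
```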
Abstract: In this paper, we propose a model for the epidemic control problem, the goal of which is to minimize the total cost of quarantining, vaccination and cure under the constraint on the maximum number of infected people allowed. A (1+ε+ε^3, 1+ε+1/ε)-bicriteria approximation algorithm is given.
Abstract: This paper describes an extremely fast polynomial time algorithm, the NOVCA (Near Optimal Vertex Cover Algorithm), that produces an optimal or near optimal vertex cover for any given undirected graph G(V, E). NOVCA is based on the ideas of (1) including the vertex having maximum degree in the vertex cover and (2) rendering the degree of a vertex zero by including all its adjacent vertices. Three versions of the algorithm, NOVCA-I, NOVCA-II, and NOVCA-random, have been developed. Results identifying bounds on the size of the minimum vertex cover as well as the polynomial complexity of the algorithm are given with experimental verification. Future research efforts will be directed at tuning the algorithm and providing a proof of a better approximation ratio for NOVCA compared to available vertex cover algorithms.
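The two ideas quoted above, taking a maximum-degree vertex and zeroing out degrees by covering incident edges, are easy to see in a plain greedy form. The sketch below is such a maximum-degree greedy cover on an adjacency-list graph; it only follows the spirit of idea (1) and is not the NOVCA-I, NOVCA-II, or NOVCA-random procedures themselves.

```python
def greedy_max_degree_cover(adj):
    """adj: dict vertex -> set of neighbours (undirected). Returns a vertex cover."""
    adj = {v: set(ns) for v, ns in adj.items()}      # work on a copy
    cover = set()
    while any(adj.values()):                         # while an uncovered edge remains
        v = max(adj, key=lambda u: len(adj[u]))      # vertex of maximum remaining degree
        cover.add(v)
        for u in adj[v]:                             # delete all edges incident to v
            adj[u].discard(v)
        adj[v] = set()
    return cover

graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 4}, 3: {0}, 4: {2}}
cover = greedy_max_degree_cover(graph)
print(cover, all(u in cover or v in cover for u in graph for v in graph[u]))
```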
Abstract: In this paper, attention is paid to an algorithm for the common due date total weighted tardiness problem in single machine scheduling. An approximation algorithm is given. It performs well in the sense of worst-case behaviour, and its worst-case performance ratio is 2.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 60473090 and the National "11th Five-Year-Supporting-Plan" of China under Grant No. 2006BAH02A0407.
Abstract: The connected dominating set (CDS) problem, which consists of finding a smallest connected dominating set of a graph, is an NP-hard problem in unit disk graphs (UDGs). This paper focuses on the CDS problem in wireless networks. Investigation of some properties of independent sets (IS) in UDGs shows that geometric features of the node distribution, such as angle and area, can be used to design efficient heuristics for the approximation algorithms. Several constant-factor approximation algorithms are presented for the CDS problem in UDGs. Simulation results show that the proposed algorithms perform better than some known ones.
Abstract: A two-agent scheduling problem on parallel machines is considered in this paper. Our objective is to minimize the makespan for agent A, subject to an upper bound on the makespan for agent B. We provide a new approximation algorithm called CLPT. On the one hand, we compare the performance of the CLPT algorithm against the optimal solution and find that the solution obtained by the CLPT algorithm is very close to the optimal one. On the other hand, we design different experimental frameworks to compare the CLPT algorithm and the A-LS algorithm for a comprehensive performance evaluation. A large number of numerical simulation results show that the CLPT algorithm outperforms the A-LS algorithm.
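The name CLPT suggests a constrained variant of the classical Longest Processing Time rule, so as background the sketch below implements plain LPT list scheduling for makespan minimization on identical parallel machines. It is only the standard single-agent building block on assumed job data; the paper's CLPT algorithm, which additionally respects agent B's makespan bound, is not reproduced here.

```python
import heapq

def lpt_makespan(jobs, m):
    """Longest Processing Time rule: sort jobs in non-increasing order and always
    assign the next job to the currently least-loaded of the m machines."""
    loads = [0.0] * m
    heap = [(0.0, i) for i in range(m)]          # (current load, machine index)
    heapq.heapify(heap)
    for p in sorted(jobs, reverse=True):
        load, i = heapq.heappop(heap)
        loads[i] = load + p
        heapq.heappush(heap, (loads[i], i))
    return max(loads), loads

jobs = [7, 7, 6, 6, 5, 4, 4, 2]
print(lpt_makespan(jobs, m=3))                   # makespan and per-machine loads
```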
Funding: The National Natural Science Foundation of China (Nos. 11971447 and 11871442) and the Fundamental Research Funds for the Central Universities.
Abstract: Submodular optimization is widely used on large datasets. In order to speed up problem solving, it is essential to design low-adaptivity algorithms that can be accelerated in parallel. In general, the function values are given by a value oracle, but in practice the oracle queries may consume a lot of time; hence, how to strike a balance between adaptivity and query cost is important. In this paper, we focus on maximizing a normalized and strictly monotone set function with the diminishing-return ratio under a cardinality constraint, and propose two algorithms for this problem. We apply the adaptive sequencing technique to devise the first algorithm, whose approximation ratio is arbitrarily close to 1 − e^(−γ) in O(log n · log(log k/γ)) adaptive rounds, and which requires O(log n^2 · log(log k/γ)) queries. Then, by adding preprocessing and parameter estimation steps to the first algorithm, we obtain the second one. The second algorithm trades a small sacrifice in adaptive complexity for a significant improvement in query complexity: with the same approximation guarantee and adaptive complexity, the query complexity is substantially reduced. To the best of our knowledge, this is the first paper to design adaptive algorithms for maximizing a monotone function using the diminishing-return ratio.
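As a point of reference for the two parallel algorithms above, the sketch below is the standard sequential greedy for maximizing a monotone set function under a cardinality constraint, run here on a small made-up coverage function. The greedy needs k fully sequential (adaptive) rounds and on the order of k·n value-oracle queries, which is exactly the cost profile that low-adaptivity techniques such as adaptive sequencing aim to avoid.

```python
def coverage(sets, chosen):
    """Monotone submodular value oracle: number of elements covered by 'chosen'."""
    covered = set()
    for j in chosen:
        covered |= sets[j]
    return len(covered)

def greedy_cardinality(sets, k):
    """Classic greedy: k sequential rounds, each scanning all remaining candidates."""
    chosen, queries = [], 0
    for _ in range(k):
        best_j, best_gain = None, 0
        for j in range(len(sets)):
            if j in chosen:
                continue
            queries += 1
            gain = coverage(sets, chosen + [j]) - coverage(sets, chosen)
            if gain > best_gain:
                best_j, best_gain = j, gain
        if best_j is None:                       # no positive marginal gain left
            break
        chosen.append(best_j)
    return chosen, coverage(sets, chosen), queries

ground_sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}, {8}]
print(greedy_cardinality(ground_sets, k=3))      # chosen sets, value, oracle queries
```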
Funding: The National Natural Science Foundation of China (No. 11971146), the Natural Science Foundation of Hebei Province of China (Nos. A2019205089 and A2019205092), the Hebei Province Foundation for Returnees (No. CL201714), and the Graduate Innovation Grant Program of Hebei Normal University (No. CXZZSS2022053).
Abstract: In this paper, we consider parallel-machine customer order scheduling with delivery times and submodular rejection penalties. In this problem, we are given m dedicated machines in parallel and n customer orders. Each order has a delivery time and consists of m product types, and each product type should be manufactured on a dedicated machine. An order is either rejected, in which case a rejection penalty has to be paid, or accepted and manufactured on the m dedicated machines. The objective is to find a solution that minimizes the sum of the maximum delivery completion time of the accepted orders and the penalty of the rejected orders, which is determined by a submodular function. We design an LP rounding algorithm with an approximation ratio of n + 1 for this problem.
Funding: The Higher Educational Science and Technology Program of Shandong Province (No. J17KA171) and the Natural Science Foundation of Shandong Province (No. ZR2020MA029).
Abstract: As a classic NP-hard problem in machine learning and computational geometry, the k-means problem aims to partition the given dataset into k clusters so as to minimize the sum of squared Euclidean distances. Different from the k-means problem and most of its variants, the fuzzy k-means problem belongs to the class of soft clustering problems, where each given data point has a relationship to every center point. Compared with the fuzzy k-means problem, the fuzzy k-means problem with penalties allows some data points not to be clustered, at the cost of paying penalties. In this paper, we propose an O(αk ln k)-approximation algorithm based on a seeding algorithm for the fuzzy k-means problem with penalties, where α involves the ratio of the maximal penalty value to the minimal one. Furthermore, we implement numerical experiments to show the effectiveness of our algorithm.
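The seeding idea such algorithms build on can be illustrated with the familiar D²-weighting of k-means++-style initializations: each new center is sampled with probability proportional to the squared distance to the nearest center chosen so far. The sketch below shows only this classical seeding step on synthetic data; the fuzzy memberships and the penalty option, where the O(αk ln k) analysis actually lives, are not reproduced.

```python
import numpy as np

def d2_seeding(points, k, rng):
    """k-means++-style seeding: sample each new center with probability
    proportional to the squared distance to the nearest chosen center."""
    n = len(points)
    centers = [points[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(points[rng.choice(n, p=d2 / d2.sum())])
    return np.array(centers)

rng = np.random.default_rng(0)
# Three well-separated synthetic clusters in the plane.
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.3, size=(50, 2)),
    rng.normal(loc=(0, 5), scale=0.3, size=(50, 2)),
])
print(d2_seeding(points, k=3, rng=rng))
```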
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 6202780103 and 62033001), the Innovation Key Project of Guangxi Province (No. AA22068059), the Key Research and Development Program of Guilin (No. 2020010332), the Natural Science Foundation of Henan Province (No. 222300420504), and the Academic Degrees and Graduate Education Reform Project of Henan Province (No. 2021SJGLX262Y).
Abstract: When estimating the direction of arrival (DOA) of wideband signals from multiple sources, the performance of sparse Bayesian methods is influenced by the frequency bands occupied by signals in different directions. This is particularly true when multiple signal frequency bands overlap. Message passing algorithms (MPA) with a Dirichlet process (DP) prior can be employed in a sparse Bayesian learning (SBL) framework with high precision. However, existing methods suffer from either high complexity or low precision. To address this, we propose a low-complexity DOA estimation algorithm based on a factor graph. This approach introduces two strong constraints via a stretching transformation of the factor graph. The first constraint separates the observation from the DP prior, enabling the application of the unitary approximate message passing (UAMP) algorithm for simplified inference and mitigation of divergence issues. The second constraint compensates for the deviation in estimation angle caused by the grid mismatch problem. Compared to state-of-the-art algorithms, our proposed method offers higher estimation accuracy and lower complexity.
Funding: Financial support from the National Science Foundation of China (22078190 and 12002196) and the National Key Research and Development Program of China (2020YFB1505802).
Abstract: The Li transient concentration distribution in spherical active material particles can affect the maximum power density and the safe operating regime of electric vehicles (EVs). On the one hand, the quasi-exact/exact solution obtained in the time/frequency domain is time-consuming and serves only as a reference value for approximate solutions; on the other hand, the calculation errors and application range of approximate solutions rely not only on the approximation algorithms but also on the discharge modes. In order to track the transient dynamics of Li solid-phase diffusion in spherical active particles within a tolerable error range and over a wide applicable range, it is necessary to choose optimal approximate algorithms in terms of discharge modes and the nature of the active material particles. In this study, approximation methods, such as the diffusion length method, the polynomial profile approximation method, the Padé approximation method, the pseudo-steady-state method, the eigenfunction-based Galerkin collocation method, and the separation of variables method for solving Li solid-phase diffusion in spherical active particles, are compared from calculation fundamentals to algorithm implementation. Furthermore, these approximate solutions are quantitatively compared to the quasi-exact/exact solution in the time/frequency domain under typical discharge modes, i.e., start-up, slow-down, and speed-up. The results, obtained from the viewpoint of time-frequency analysis, offer a theoretical foundation on how to track the Li transient concentration profile in spherical active particles with high precision and over a wide application range. In turn, optimal solutions of the Li solid diffusion equations for spherical active particles can improve the reliability of predicting the safe operating regime and estimating the maximum power for automotive batteries.
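As a concrete example of one family of approximate algorithms compared above, the sketch below integrates a common two-parameter form of the polynomial profile approximation for solid-phase diffusion in a spherical particle under a constant surface flux: the average concentration follows dc̄/dt = −3j/R and the surface concentration is approximated by c_s = c̄ − jR/(5D). The parameter values and the sign convention (positive j for lithium leaving the particle) are assumptions for illustration, not data or results from the paper.

```python
# Two-parameter polynomial (parabolic) profile approximation for a spherical
# particle under constant surface flux; illustrative parameter values only.
R = 5e-6        # particle radius, m              (assumed)
D = 1e-14       # solid-phase diffusivity, m^2/s  (assumed)
c0 = 25000.0    # initial concentration, mol/m^3  (assumed)
j = 1e-5        # surface molar flux, mol/(m^2 s); positive = Li leaving the particle

dt, t_end = 0.1, 600.0
c_avg = c0
for _ in range(int(t_end / dt)):
    c_avg += dt * (-3.0 * j / R)          # d(c_avg)/dt = -3 j / R
c_surf = c_avg - j * R / (5.0 * D)        # parabolic-profile surface correction
print(f"average concentration after {t_end:.0f} s: {c_avg:.1f} mol/m^3")
print(f"approximate surface concentration:       {c_surf:.1f} mol/m^3")
```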
Abstract: In this paper, a two-stage semi-hybrid flowshop problem which appears in graphics processing is studied. In this problem, there are two machines M1 and M2 and a set of independent jobs J = {J1, J2, …, Jn}. Each Ji consists of two tasks Ai and Bi, and task Ai must be completed before task Bi can start. Furthermore, task Ai can be processed on M1 for ai time units or on M2 for ai′ time units, while task Bi can only be processed on M2 for bi time units. Jobs and machines are available at time zero and no preemption is allowed. The objective is to minimize the maximum job completion time. It is shown that this problem is NP-hard, and a pseudo-polynomial time optimal algorithm is presented. A polynomial time approximation algorithm with worst-case ratio 2 is also presented.