One of the open problems in the field of forward uncertainty quantification (UQ) is the ability to form accurate assessments of uncertainty having only incomplete information about the distribution of random inputs. Another challenge is to efficiently make use of limited training data for UQ predictions of complex engineering problems, particularly with high-dimensional random parameters. We address these challenges by combining data-driven polynomial chaos expansions with a recently developed preconditioned sparse approximation approach for UQ problems. The first task in this two-step process is to employ the procedure developed in [1] to construct an "arbitrary" polynomial chaos expansion basis using a finite number of statistical moments of the random inputs. The second step is a novel procedure to effect sparse approximation via l1 minimization in order to quantify the forward uncertainty. To enhance the performance of the preconditioned l1 minimization problem, we sample from the so-called induced distribution, instead of using Monte Carlo (MC) sampling from the original, unknown probability measure. We demonstrate on test problems that induced sampling is a competitive and often better choice compared with sampling from asymptotically optimal measures (such as the equilibrium measure) when we have incomplete information about the distribution. We demonstrate the capacity of the proposed induced sampling algorithm via sparse representation with limited data on test functions, and on a Kirchhoff plate bending problem with random Young's modulus.
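The basis-construction step can be illustrated with the classical moment/Gram-matrix route: coefficients of polynomials orthonormal under the unknown measure fall out of a Cholesky factorization of the Hankel matrix of raw moments. This is a minimal numpy sketch of the idea behind an "arbitrary" chaos basis, not necessarily the exact procedure of [1]:

```python
import numpy as np

def apc_basis(moments, degree):
    """Coefficient matrix C (column j holds p_j(x) = sum_i C[i, j] x^i)
    of polynomials orthonormal w.r.t. the measure with the given raw
    moments. Requires moments m_0, ..., m_{2*degree}."""
    n = degree + 1
    # Hankel moment (Gram) matrix M[i, j] = E[x^(i+j)]
    M = np.array([[moments[i + j] for j in range(n)] for i in range(n)])
    L = np.linalg.cholesky(M)      # M = L L^T
    return np.linalg.inv(L).T      # then C^T M C = I

# Uniform distribution on [-1, 1]: m_k = 1/(k+1) for even k, 0 for odd k
moments = [1 / (k + 1) if k % 2 == 0 else 0.0 for k in range(5)]
C = apc_basis(moments, degree=2)
```

For the uniform measure this reproduces the normalized Legendre polynomials, e.g. p_1(x) = sqrt(3) x, which gives a quick sanity check of the construction.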
The l1 norm is the tightest convex relaxation of the l0 norm and has been successfully applied to recovering sparse signals. However, for problems with fewer samples than required for accurate l1 recovery, one needs to apply nonconvex penalties such as the lp norm. As one method for solving lp minimization problems, iteratively reweighted l1 minimization updates the weight for each component based on the value of that same component at the previous iteration. It assigns large weights to components that are small in magnitude and small weights to components that are large in magnitude. The set of weights is not fixed, which makes the analysis of this method difficult. In this paper, we consider a weighted l1 penalty in which the set of weights is fixed and the weights are assigned according to the rank of the components sorted by magnitude: the smallest weight is assigned to the largest component in magnitude. This new penalty is called the nonconvex sorted l1 penalty. We then propose two methods for solving nonconvex sorted l1 minimization problems, iteratively reweighted l1 minimization and iterative sorted thresholding, and prove that both methods converge to a local minimizer of the nonconvex sorted l1 minimization problem. We also show that the two methods are generalizations of iterative support detection and iterative hard thresholding, respectively. Numerical experiments demonstrate the better performance of assigning weights by sort compared to assigning them by value.
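The penalty and one step of iterative sorted thresholding can be sketched in a few lines. This is a schematic reading of the definitions above, with an illustrative gradient-step interface, not the paper's exact algorithm:

```python
import numpy as np

def sorted_l1(x, w):
    """Nonconvex sorted l1 penalty: the weights w are assigned by rank,
    smallest weight to the largest-magnitude component."""
    mags = np.sort(np.abs(x))[::-1]      # magnitudes, descending
    return float(np.dot(np.sort(w), mags))

def sorted_thresholding_step(x, grad, step, w):
    """One sketch of an iterative sorted thresholding step: a gradient
    step followed by rank-dependent soft thresholding."""
    z = x - step * grad
    order = np.argsort(-np.abs(z))       # rank components by magnitude
    thresh = np.empty_like(z)
    thresh[order] = np.sort(w) * step    # smallest threshold to largest entry
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

x = np.array([3.0, 1.0, 2.0])
w = np.array([1.0, 0.5, 0.1])
print(sorted_l1(x, w))                   # 0.1*3 + 0.5*2 + 1.0*1 = 2.3
```

Note how the fixed weight set travels with the rank, not with the component index, which is exactly what makes the penalty well defined independently of the iterate.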
Considerable attempts have been made to remove the crosstalk noise in simultaneous source data using the popular K-means Singular Value Decomposition (KSVD) algorithm. Several hybrids of this method have been designed and successfully deployed, but the complex nature of blending noise makes it difficult to handle. One of the challenges of the KSVD approach is obtaining an exact decomposition for each data patch, which is believed to result in a better output. In this work, we propose a learnable architecture capable of data training while retaining the KSVD essence to deblend simultaneous source data.
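For context, the exact per-atom KSVD update that the abstract alludes to is a rank-1 SVD of the residual restricted to the atom's support. A minimal numpy sketch of generic KSVD (not the proposed learnable architecture):

```python
import numpy as np

def ksvd_atom_update(Y, D, X, k):
    """One exact KSVD atom update: refit atom k of dictionary D and its
    coefficient row X[k] by a rank-1 SVD of the restricted residual."""
    support = np.nonzero(X[k])[0]
    if support.size == 0:
        return D, X
    # Residual with atom k's contribution added back, on its support only
    E = Y[:, support] - D @ X[:, support] + np.outer(D[:, k], X[k, support])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D, X = D.copy(), X.copy()
    D[:, k] = U[:, 0]                  # best rank-1 fit: new unit-norm atom
    X[k, support] = s[0] * Vt[0]       # ... and its coefficients
    return D, X

rng = np.random.default_rng(0)
Y = rng.standard_normal((8, 20))
D = rng.standard_normal((8, 5)); D /= np.linalg.norm(D, axis=0)
X = rng.standard_normal((5, 20)) * (rng.random((5, 20)) < 0.3)
before = np.linalg.norm(Y - D @ X)
D2, X2 = ksvd_atom_update(Y, D, X, k=0)
after = np.linalg.norm(Y - D2 @ X2)    # exact update cannot increase the error
```

Because the rank-1 SVD is the best fit to the restricted residual, each exact atom update is monotone in the reconstruction error, which is the property a learnable substitute must try to retain.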
More competent learning models are demanded for data processing due to the increasingly greater amounts of data available in applications. The data that we encounter often have certain embedded sparsity structures; that is, if they are represented in an appropriate basis, their energies concentrate on a small number of basis functions. This paper is devoted to a numerical study of the adaptive approximation of solutions of nonlinear partial differential equations, whose solutions may have singularities, by deep neural networks (DNNs) with a sparse regularization involving multiple parameters. Noting that DNNs have an intrinsic multi-scale structure favorable for the adaptive representation of functions, we employ a penalty with multiple parameters to develop DNNs with a multi-scale sparse regularization (SDNN) for effectively representing functions having certain singularities. We then apply the proposed SDNN to the numerical solution of the Burgers equation and the Schrödinger equation. Numerical examples confirm that solutions generated by the proposed SDNN are sparse and accurate.
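The multi-parameter sparse regularization can be illustrated through its proximal map: each weight group (e.g. each layer or scale of the network) gets its own l1 parameter, so coarse and fine scales can be penalized differently. A minimal numpy sketch in which the groups and parameter values are purely illustrative, not the paper's:

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||v||_1 (component-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def multiscale_prox(groups, lams):
    """Prox of a multi-parameter l1 penalty sum_j lams[j] * ||groups[j]||_1,
    applied group by group (e.g. one parameter per layer/scale)."""
    return [soft_threshold(g, lam) for g, lam in zip(groups, lams)]

layers = [np.array([0.05, -2.0]), np.array([0.5, -0.3, 1.0])]
sparse_layers = multiscale_prox(layers, lams=[0.1, 0.4])
# first layer -> [0.0, -1.9]; second layer -> [0.1, 0.0, 0.6]
```

A larger parameter on one scale zeroes out more of that scale's weights, which is the mechanism that lets the regularized network adapt its resolution near singularities.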
The method of data-driven tight frames has been shown to be very useful in image restoration problems. In this paper we extend this important technique by incorporating L_(1) data fidelity into the original data-driven model for removing impulsive noise, a very common and basic type of noise in image data. The model contains three variables and can be solved through an efficient iterative alternating minimization algorithm in a patch implementation, where the tight frame is dynamically updated. It constructs a tight frame system from the input corrupted image adaptively, and then removes impulsive noise using the derived system. We also show that the sequence generated by our algorithm converges globally to a stationary point of the optimization model. Numerical experiments and comparisons demonstrate that our approach performs well for various kinds of images. This benefits from its data-driven nature: the tight frames learned from the input images capture richer image structures adaptively.
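The tight-frame mechanics underlying such models — analysis, thresholding, synthesis with perfect reconstruction — can be illustrated with a fixed orthonormal DCT filter bank. In the paper the frame is learned from the corrupted image instead; this fixed-frame sketch only shows the reconstruction property:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; its rows act as the frame's filters."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    W = np.sqrt(2.0 / n) * np.cos(np.pi * (j + 0.5) * k / n)
    W[0] *= 1.0 / np.sqrt(2.0)
    return W

def threshold_denoise(patches, W, lam):
    """Analysis, hard thresholding, synthesis: W.T @ T_lam(W @ p).
    W.T @ W = I is the (tight-frame) perfect-reconstruction condition."""
    coeffs = W @ patches
    coeffs[np.abs(coeffs) < lam] = 0.0
    return W.T @ coeffs

W = dct_matrix(8)          # one fixed frame for 8-sample patch columns
```

With lam = 0 the map is the identity, and increasing lam suppresses small transform coefficients — the data-driven model replaces the fixed W by one adapted to the image while keeping exactly this structure.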
In quantitative susceptibility mapping (QSM), background field removal is an essential data acquisition step because it has a significant effect on the restoration quality by generating a harmonic incompatibility in the measured local field data. Even though the sparsity-based first-generation harmonic incompatibility removal (1GHIRE) model has achieved a performance gain over traditional approaches, it must be further improved because of a basis mismatch underlying the numerical solution of Poisson's equation for background removal. In this paper, we propose the second-generation harmonic incompatibility removal (2GHIRE) model to reduce the basis mismatch, inspired by the balanced approach in tight frame based image restoration. Experimental results show the superiority of the proposed 2GHIRE model in both restoration quality and computational efficiency.
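For background on the step where the basis mismatch arises, a minimal spectral solver for Poisson's equation on a periodic grid looks as follows. This illustrates only the generic discretized solve, not the 1GHIRE/2GHIRE models themselves:

```python
import numpy as np

def poisson_fft(f, h):
    """Solve Laplacian(u) = f on a periodic grid via FFT (zero-mean u)."""
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    denom = -(kx ** 2 + ky ** 2)
    denom[0, 0] = 1.0                 # avoid division by zero at the mean mode
    u_hat = np.fft.fft2(f) / denom
    u_hat[0, 0] = 0.0                 # fix the zero-mean gauge
    return np.real(np.fft.ifft2(u_hat))

n = 64
h = 2.0 * np.pi / n
x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
u_true = np.sin(X) * np.sin(Y)
f = -2.0 * np.sin(X) * np.sin(Y)      # analytic Laplacian of u_true
u = poisson_fft(f, h)                 # spectral solve recovers u_true
```

The choice of transform basis in such a solve is precisely where a mismatch with the measured data model can enter, which motivates the balanced-approach correction in the abstract.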
This paper presents a new method for the estimation of the injection state and power factor of distributed energy resources (DERs) using voltage magnitude measurements only. A physics-based linear model is used to develop estimation heuristics for net injections of real and reactive power at a set of buses under study, allowing a distribution engineer to form a robust estimate for the operating state and the power factor of the DER at those buses. The method demonstrates and exploits a mathematical distinction between the voltage sensitivity signatures of real and reactive power injections for a fixed power system model. Case studies on various test feeders for a model of the distribution circuit and statistical analyses are presented to demonstrate the validity of the estimation method. The results of this paper can be used to improve the limited information about inverter parameters and operating state during renewable planning, which helps mitigate the uncertainty inherent in their integration.
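The core estimation idea — injections inferred from voltage-magnitude sensitivities through a linear model — can be sketched as an ordinary least-squares fit. The sensitivity matrices below are synthetic stand-ins, not derived from a real feeder model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_bus = 12, 3                 # |V| measurements, studied buses

# Assumed linearized model: d|V| ~= S_p @ p + S_q @ q  (synthetic sensitivities)
S_p = rng.standard_normal((n_meas, n_bus))
S_q = 0.5 * rng.standard_normal((n_meas, n_bus))

p_true = np.array([0.8, 0.0, 0.5])    # real-power injections (p.u.)
q_true = np.array([0.1, 0.0, 0.2])    # reactive-power injections (p.u.)
dv = S_p @ p_true + S_q @ q_true      # noiseless |V| deviations

# Stack [S_p | S_q] and estimate both injection vectors at once
A = np.hstack([S_p, S_q])
est, *_ = np.linalg.lstsq(A, dv, rcond=None)
p_est, q_est = est[:n_bus], est[n_bus:]
pf0 = p_est[0] / np.hypot(p_est[0], q_est[0])   # power-factor estimate, bus 0
```

Separating p from q in this stacked system is possible exactly because their voltage sensitivity signatures differ — the distinction the paper demonstrates and exploits.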
In this paper, a novel approach for quantifying the parametric uncertainty associated with a stochastic problem output is presented. As with Monte Carlo and stochastic collocation methods, only point-wise evaluations of the stochastic output response surface are required, allowing the use of legacy deterministic codes and precluding the need for any dedicated stochastic code to solve the uncertain problem of interest. The new approach differs from these standard methods in that it is based on ideas directly linked to the recently developed compressed sensing theory. The technique allows the retrieval of the modes that contribute most significantly to the approximation of the solution using a minimal amount of information. The generation of this information, via many solver calls, is almost always the bottleneck of an uncertainty quantification procedure. If the stochastic model output has a reasonably compressible representation in the retained approximation basis, the proposed method makes the best use of the available information and retrieves the dominant modes. Uncertainty quantification of the solution of both a 2-D and an 8-D stochastic shallow water problem is used to demonstrate the significant performance improvement of the new method, which requires up to several orders of magnitude fewer solver calls than the usual sparse grid-based polynomial chaos (Smolyak scheme) to achieve comparable approximation accuracy.
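The compressed-sensing step — recovering a sparse coefficient vector from few solver evaluations — reduces to basis pursuit, min ||c||_1 s.t. A c = y, which can be posed as a linear program. A generic sketch of this recovery step (not the paper's full pipeline):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||c||_1 s.t. A c = y, via the LP split c = c_plus - c_minus."""
    m, n = A.shape
    cost = np.ones(2 * n)                 # sum of the nonnegative parts
    A_eq = np.hstack([A, -A])
    res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=(0, None))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(42)
m, n = 30, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement/design matrix
c0 = np.zeros(n)
c0[[3, 17, 41]] = [1.5, -2.0, 0.7]             # 3-sparse coefficient vector
y = A @ c0                                      # few noiseless evaluations
c_hat = basis_pursuit(A, y)                     # recovers the dominant modes
```

Here 30 evaluations recover a 3-sparse vector in a 60-dimensional basis — the same "fewer solver calls than unknowns" regime the abstract exploits.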
To improve the performance of sound source localization based on distributed microphone arrays in noisy and reverberant environments, a sound source localization method is proposed. This method exploits the inherent spatial sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. A two-step discrete cosine transform (DCT)-based feature extraction is utilized to cover both the short-time and long-time properties of the signal and to reduce the dimensions of the sparse model. Moreover, an online dictionary learning (DL) method is used to dynamically adjust the dictionary to match changes in the audio signals, so that the sparse solution better represents the location estimates. In addition, we propose an improved approximate l0 norm minimization algorithm to enhance reconstruction performance for sparse signals at low signal-to-noise ratio (SNR). The effectiveness of the proposed scheme is demonstrated by simulation results in which the locations of multiple sources are obtained under noisy and reverberant conditions.
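A standard approximate l0 scheme in the spirit mentioned above is smoothed l0 (SL0): maximize a Gaussian surrogate of sparsity while projecting back onto the measurement constraint. A generic sketch, not the paper's specific improved algorithm:

```python
import numpy as np

def sl0(A, y, sigma_min=1e-4, sigma_decay=0.7, mu=2.0, inner=10):
    """Approximate l0 minimization (SL0-style): ascend the smooth surrogate
    sum_i exp(-x_i^2 / (2 sigma^2)) while staying on the set A x = y,
    for a decreasing sequence of smoothing widths sigma."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-l2 feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            delta = x * np.exp(-x ** 2 / (2.0 * sigma ** 2))
            x = x - mu * delta            # ascent step on the surrogate
            x = x - A_pinv @ (A @ x - y)  # project back onto A x = y
        sigma *= sigma_decay
    return x

rng = np.random.default_rng(7)
m, n = 20, 50
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[[5, 22, 40]] = [1.0, -0.8, 0.6]        # sparse "source location" vector
x_hat = sl0(A, A @ x0)
```

Gradually shrinking sigma tightens the surrogate toward the true l0 count, which is what gives these methods their robustness at low SNR relative to plain l1 recovery.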
Funding: Supported by the NSF of China (No. 11671265); partially supported by NSF DMS-1848508; partially supported by the NSF of China (grant numbers 11688101, 11571351, and 11731006); the Science Challenge Project (No. TZ2018001); the Youth Innovation Promotion Association (CAS); the National Science Foundation under Grant No. DMS-1439786; and the Simons Foundation Grant No. 50736.
Funding: This work is partially supported by the European Research Council; the National Natural Science Foundation of China (No. 11201079); the Fundamental Research Funds for the Central Universities of China (Nos. 20520133238 and 20520131169); and the National Science Foundation of the United States (Nos. DMS-0748839 and DMS-1317602).
Funding: Supported by the State Key Research and Development Program of China (No. 2018YFC0310104) and the National Natural Science Foundation of China (Nos. 41974163, 4213080).
Funding: Y. Xu is supported in part by the US National Science Foundation under grant DMS-1912958. T. Zeng is supported in part by the National Natural Science Foundation of China under grants 12071160 and U1811464; by the Natural Science Foundation of Guangdong Province under grant 2018A0303130067; by the Opening Project of the Guangdong Province Key Laboratory of Computational Science at Sun Yat-sen University under grant 2021022; and by the Opening Project of the Guangdong Key Laboratory of Big Data Analysis and Processing under grant 202101.
Funding: Supported by NSF of China grants 11531013 and 11871035.
Funding: The research of the first author is supported in part by the NSFC Youth Program 11901338. The research of the second author is supported by the Hong Kong Research Grants Council (HKRGC) GRF 16306317 and 16309219. The research of the third author is supported by the NSFC Youth Program 11901436 and the Fundamental Research Program of the Science and Technology Commission of Shanghai Municipality (20JC1413500). The research of the fourth author is supported by NSFC grant 11831002. The research of the fifth author is supported by the National Natural Science Foundation of China Youth Program grant 11801088 and the Shanghai Sailing Program (18YF1401600).
Funding: This material is based upon work supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under Solar Energy Technologies Office (SETO) Agreement Number 34226. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
Funding: Supported by the French National Agency for Research (ANR) under projects ASRMEI JC08#375619 and CORMORED ANR-08-BLAN-0115, and by GdR MoMaS.
Funding: Supported by the Doctoral Program of Higher Education of China (20133207120007); the National Natural Science Foundation of China (61405094); the Open Research Fund of the Jiangsu Key Laboratory of Meteorological Observation and Information Processing (KDXS1408); and the Science and Technology Support Project of Jiangsu Province-Industry (BE2014139).