Abstract: To address the parameter uncertainty analysis of the SWAT (Soil and Water Assessment Tool) model in complex settings, the uncertainty quantification platform UQ-PyL (Uncertainty Quantification Python Laboratory) is introduced, and a coupling module between UQ-PyL and SWAT is developed, so that the various algorithms in UQ-PyL can be applied conveniently and quickly to the parameter uncertainty analysis of SWAT. To validate the effectiveness of UQ-PyL for this task, SWAT models are built for four watersheds under different climatic conditions in China, and the parameter uncertainty analysis results of UQ-PyL and SWAT-CUP are comprehensively compared. The results show that the sensitive parameters screened by the multiple sensitivity analysis methods in UQ-PyL are more reasonable than those screened by the single method in SWAT-CUP; the parameters calibrated with UQ-PyL perform well in all four watersheds, with Nash-Sutcliffe efficiency coefficients of the optimized simulations all above 0.55 and convergence within 550 model runs; and in the simulations of the four watersheds, UQ-PyL offers both a more computationally efficient algorithm (ASMO) and a more accurate one (SCE). In summary, UQ-PyL coupled with the SWAT model enables SWAT users to carry out more efficient parameter uncertainty analysis on different systems.
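The multi-method sensitivity screening described above can be illustrated with a minimal Morris-style elementary-effects sketch in plain Python. The toy model and the parameter names (`cn2`, `alpha_bf`, `sol_awc`, loosely echoing common SWAT parameter names) are illustrative assumptions only, not UQ-PyL's actual API.

```python
import random

random.seed(42)

# Hypothetical toy runoff model standing in for a SWAT run; the three
# parameters and their relative influence are illustrative assumptions.
def toy_model(cn2, alpha_bf, sol_awc):
    return 5.0 * cn2 + 0.5 * alpha_bf ** 2 + 0.01 * sol_awc

PARAMS = ["cn2", "alpha_bf", "sol_awc"]
DELTA = 0.1  # perturbation step in the unit hypercube

# Morris-style mean absolute elementary effects over random base points.
effects = {p: 0.0 for p in PARAMS}
n_base = 50
for _ in range(n_base):
    base = [random.random() for _ in PARAMS]
    y0 = toy_model(*base)
    for i, p in enumerate(PARAMS):
        pert = list(base)
        pert[i] += DELTA
        effects[p] += abs(toy_model(*pert) - y0) / DELTA
for p in PARAMS:
    effects[p] /= n_base

# Rank parameters by mean absolute elementary effect.
ranking = sorted(PARAMS, key=lambda p: -effects[p])
```

For this toy model the screening is deterministic: `cn2` dominates, `sol_awc` is negligible, which is the kind of ranking a sensitivity stage uses to decide which parameters to calibrate.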
Abstract: The stress intensity factor is a key parameter for predicting crack initiation and propagation in structures under load. The semi-analytical scaled boundary finite element method (SBFEM) combines the advantages of the finite element and boundary element methods: no local mesh refinement is needed at crack tips or in regions with singular stresses, and stress intensity factors can be extracted directly. Within the SBFEM framework for computing stress intensity factors, random parameters are introduced for Monte Carlo simulation (MCS), and a novel MCS-based uncertainty quantification analysis is proposed. Unlike direct MCS, singular value decomposition is used to construct a low-order subspace that reduces the degrees of freedom of the system, radial basis functions are used to approximate that subspace, and new structural responses are obtained as linear combinations within the subspace, enabling fast MCS-based uncertainty quantification. Under different load conditions, the effects of structural shape parameters and material property parameters on the stress intensity factor are considered; the improved MCS is used to compute the statistical characteristics of the stress intensity factor and to quantify the influence of the uncertain parameters on the structure. Finally, several numerical examples verify the accuracy and effectiveness of the algorithm.
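The SVD-plus-RBF acceleration of MCS can be sketched as follows. The toy response field, the training parameter values, and the Gaussian kernel width are all illustrative assumptions standing in for an actual SBFEM solve.

```python
import numpy as np

# Toy "expensive" model standing in for an SBFEM solve: a response field
# over a grid that depends on one random parameter p (illustrative only).
x = np.linspace(0.0, np.pi, 50)
def full_model(p):
    return p * np.sin(x) + p ** 2 * np.cos(x)

# 1) Snapshots from a handful of full solves.
p_train = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
S = np.column_stack([full_model(p) for p in p_train])

# 2) SVD yields a low-order subspace; two modes capture this model exactly.
U, sing, _ = np.linalg.svd(S, full_matrices=False)
Uk = U[:, :2]
C = Uk.T @ S                      # modal coefficients per snapshot

# 3) Gaussian RBF interpolation of the coefficients over the parameter.
eps = 1.0
def rbf(a, b):
    return np.exp(-((a[:, None] - b[None, :]) / eps) ** 2)
W = np.linalg.solve(rbf(p_train, p_train), C.T)   # one weight column per mode

def surrogate(p):
    c = rbf(np.atleast_1d(p), p_train) @ W        # interpolated coefficients
    return (Uk @ c.T).ravel()                     # linear combination of modes

# 4) Cheap Monte Carlo over the surrogate instead of the full model.
rng = np.random.default_rng(0)
samples = rng.uniform(0.6, 2.4, size=1000)
mean_field = np.mean([surrogate(p) for p in samples], axis=0)
```

At any training parameter the surrogate reproduces the full solve, and each Monte Carlo sample costs only a small matrix-vector product, which is the source of the speed-up over direct MCS.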
Funding: Project supported by the National Research Foundation of Korea (Nos. NRF-2020R1C1C1011970 and NRF-2018R1A5A7023490).
Abstract: This paper presents a new computational method for forward uncertainty quantification (UQ) analyses on large-scale structural systems in the presence of arbitrary and dependent random inputs. The method consists of a generalized polynomial chaos expansion (GPCE) for statistical moment and reliability analyses associated with the stochastic output, and a static reanalysis method to generate the input-output data set. In the reanalysis, we employ substructuring to isolate the local regions of a structure that vary due to the random inputs. This avoids repeated computations of invariant substructures while generating the input-output data set. Combining substructuring with static condensation further improves the computational efficiency of the reanalysis without losing accuracy. Consequently, the GPCE with the static reanalysis method achieves significant computational savings, mitigating the curse of dimensionality to some degree for UQ under high-dimensional inputs. The numerical results obtained from a simple structure indicate that the proposed method for UQ produces accurate solutions more efficiently than the GPCE using full finite element analyses (FEAs). We also demonstrate the efficiency and scalability of the proposed method by executing UQ for a large-scale wing-box structure under ten-dimensional (all-dependent) random inputs.
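The static condensation step can be illustrated on a three-DOF spring chain: the interior DOF is eliminated, the condensed system is solved, and the result matches the full solve. This is a minimal sketch of Guyan reduction, not the paper's implementation.

```python
import numpy as np

# Stiffness matrix and load for a three-DOF spring chain (unit springs,
# one end fixed); values are illustrative.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
f = np.array([0.0, 0.0, 1.0])

m = [0, 2]   # retained (boundary/master) DOFs
s = [1]      # condensed-out interior DOF
Kmm = K[np.ix_(m, m)]
Kms = K[np.ix_(m, s)]
Kss = K[np.ix_(s, s)]

# Condensed system: (Kmm - Kms Kss^-1 Ksm) u_m = f_m - Kms Kss^-1 f_s
Kc = Kmm - Kms @ np.linalg.solve(Kss, Kms.T)
fc = f[m] - Kms @ np.linalg.solve(Kss, f[s])
u_m = np.linalg.solve(Kc, fc)

u_full = np.linalg.solve(K, f)   # reference: full solve
```

For static problems the condensation is exact, which is why it can speed up the reanalysis without losing accuracy; only the retained DOFs (here the region that varies with the random inputs) need to be re-solved per sample.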
Funding: Project supported by the National Natural Science Foundation of China (No. 12201229).
Abstract: Physics-informed deep learning has recently emerged as an effective tool for leveraging both observational data and available physical laws. Physics-informed neural networks (PINNs) and deep operator networks (DeepONets) are two such models: the former encodes the physical laws via automatic differentiation, while the latter learns the hidden physics from data. Generally, noisy and limited observational data, as well as over-parameterization in neural networks (NNs), result in uncertainty in the predictions of deep learning models. In the paper "MENG, X., YANG, L., MAO, Z., FERRANDIS, J. D., and KARNIADAKIS, G. E. Learning functional priors and posteriors from data and physics. Journal of Computational Physics, 457, 111073 (2022)", a Bayesian framework based on generative adversarial networks (GANs) was proposed as a unified model to quantify uncertainties in the predictions of both PINNs and DeepONets. The approach proposed in that work has two stages: (i) prior learning and (ii) posterior estimation. In the first stage, GANs are used to learn a functional prior, either from a prescribed function distribution, e.g., a Gaussian process, or from historical data and available physics. In the second stage, the Hamiltonian Monte Carlo (HMC) method is used to estimate the posterior in the latent space of the GANs. However, vanilla HMC does not support mini-batch training, which limits its application to problems with big data. In the present work, we propose to use normalizing flow (NF) models in the context of variational inference (VI), which naturally enables mini-batch training, as an alternative to HMC for posterior estimation in the latent space of GANs. A series of numerical experiments, including a nonlinear differential equation problem and a 100-dimensional (100D) Darcy problem, are conducted to demonstrate that NFs with full-/mini-batch training achieve accuracy similar to that of the "gold standard" HMC. Moreover, the mini-batch training of NFs makes them a promising tool for quantifying uncertainty in solving high-dimensional partial differential equation (PDE) problems with big data.
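The reparameterized, mini-batch variational idea behind the NF posterior estimator can be sketched in one dimension with the simplest possible flow, a single affine map. The Gaussian target, learning rate, and batch size below are illustrative assumptions; real NFs stack many such invertible maps.

```python
import math
import random

random.seed(0)

# Target (unnormalized) log-density: a Gaussian N(3, 1).
def log_p(z):
    return -0.5 * (z - 3.0) ** 2

# Affine "flow": z = mu + exp(log_s) * eps with eps ~ N(0, 1).
# ELBO(mu, log_s) = E[log p(z)] + log_s + const (entropy of the flow).
mu, log_s = 0.0, 0.0
lr, batch = 0.05, 32

for step in range(2000):
    g_mu, g_ls = 0.0, 0.0
    for _ in range(batch):               # mini-batch Monte Carlo gradient
        eps = random.gauss(0.0, 1.0)
        z = mu + math.exp(log_s) * eps   # reparameterization trick
        g_mu += -(z - 3.0)               # d log p / d mu
        g_ls += -(z - 3.0) * eps * math.exp(log_s)
    g_mu /= batch
    g_ls = g_ls / batch + 1.0            # +1 from the entropy term log_s
    mu += lr * g_mu                      # gradient ascent on the ELBO
    log_s += lr * g_ls
```

Because every gradient step uses only a small batch of samples, the scheme scales to big-data settings where a full HMC pass would be expensive; after training, the flow parameters recover the target's mean and scale.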
Funding: Supported by the Advanced Research of National Defense Foundation of China (426010501).
Abstract: As an alternative or complement to classical probability theory, the ability of evidence theory to support uncertainty quantification (UQ) analyses has been the subject of intense research in recent years. Two state-of-the-art numerical methods, the vertex method and the sampling method, are commonly used to calculate the resulting uncertainty under evidence theory. The vertex method is very effective for monotonic systems, but not for non-monotonic ones, due to its high computational errors. The sampling method is applicable to both, but it always incurs a high computational cost in UQ analyses, which makes it inefficient for most complex engineering systems. In this work, a computational intelligence approach is developed to reduce the computational cost and improve the practical utility of evidence theory in UQ analyses. The method is demonstrated on two challenging problems proposed by Sandia National Laboratories. Simulation results show that the proposed method outperforms both the vertex method and the sampling method in computational efficiency without any loss of accuracy. In particular, when the numbers of uncertain parameters and focal elements are large and the system model is non-monotonic, the computational cost is five times less than that of the sampling method.
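The vertex method that the abstract contrasts against can be sketched for a small monotonic system: each joint focal cell is propagated by evaluating the response at its corner points, and belief/plausibility follow from the resulting output intervals. The focal elements, masses, and threshold below are illustrative assumptions.

```python
from itertools import product

# Focal elements (interval, basic probability assignment) for two inputs.
x_focal = [((1.0, 2.0), 0.6), ((2.0, 3.0), 0.4)]
y_focal = [((0.0, 1.0), 0.5), ((1.0, 2.0), 0.5)]

def g(x, y):          # monotonic system response
    return x + y

# Vertex method: for a monotonic g, the image of each joint focal cell
# is bounded by the response at the cell's corner points.
out_focal = []
for (xi, mx), (yi, my) in product(x_focal, y_focal):
    corners = [g(x, y) for x in xi for y in yi]
    out_focal.append(((min(corners), max(corners)), mx * my))

# Belief/plausibility that the response does not exceed a threshold T:
# belief sums cells entirely inside the event, plausibility sums cells
# that merely intersect it.
T = 3.5
bel = sum(m for (lo, hi), m in out_focal if hi <= T)
pl = sum(m for (lo, hi), m in out_focal if lo <= T)
```

For a non-monotonic `g` the corner evaluations no longer bound the cell image, which is exactly the failure mode of the vertex method that motivates sampling-based and, in this work, computational-intelligence alternatives.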
Abstract: Invoice document digitization is crucial for efficient management in industry. Scanned invoice images are often noisy for various reasons, which degrades OCR (optical character recognition) accuracy. In this paper, letter data obtained from invoice images are denoised using a modified autoencoder-based deep learning method. A stacked denoising autoencoder (SDAE) is implemented with two hidden layers each in the encoder and decoder networks. To capture the most salient features of the training samples, an undercomplete autoencoder is designed with non-linear encoder and decoder functions. The autoencoder is regularized for the denoising application using a combined loss function that considers both the mean square error and the binary cross-entropy. A dataset of 59,119 letter images, containing both English alphabets (upper and lower case) and the digits 0 to 9, is prepared from many scanned invoice images and Windows TrueType (.ttf) files and used to train the neural network. Performance is analyzed in terms of the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and universal image quality index (UQI), and compared with other filtering techniques such as the non-local means, anisotropic diffusion, Gaussian, and mean filters. The denoising performance of the proposed SDAE is compared with that of an existing SDAE with a single loss function in terms of SNR and PSNR values. The results show the superior performance of the proposed SDAE method.
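The combined MSE-plus-BCE loss used to regularize the SDAE can be sketched as a plain weighted sum over pixel values in [0, 1]. The weighting `alpha` and the probability clamp `eps` are assumptions, since the paper's exact combination is not specified here.

```python
import math

# Combined denoising loss: weighted sum of mean square error and binary
# cross-entropy; alpha balances the two terms (assumed 0.5 here).
def combined_loss(y_true, y_pred, alpha=0.5, eps=1e-7):
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    bce = -sum(
        t * math.log(min(max(p, eps), 1 - eps))
        + (1 - t) * math.log(1 - min(max(p, eps), 1 - eps))
        for t, p in zip(y_true, y_pred)
    ) / n
    return alpha * mse + (1 - alpha) * bce

# A prediction closer to the clean pixels should score a lower loss.
good = combined_loss([1.0, 0.0], [0.9, 0.1])
bad = combined_loss([1.0, 0.0], [0.6, 0.4])
```

The MSE term penalizes overall intensity error while the BCE term sharpens near-binary letter pixels toward 0 or 1, which is a plausible reading of why the two are combined for text-image denoising.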