The original intention of visual question answering (VQA) models is to infer the answer from the information in the visual image that is relevant to the question text, but many VQA models often yield answers that are biased by prior knowledge, especially language priors. This paper proposes a mitigation model called language priors mitigation-VQA (LPM-VQA) for the language prior problem in VQA models, which divides language priors into positive and negative language priors. Different network branches are used to capture and process the different priors in order to mitigate language priors. A dynamically changing language prior feedback objective function is designed using the intermediate results of some modules in the VQA model. The weight of the loss value for each answer is set dynamically according to the strength of its language priors, balancing its proportion in the total VQA loss and further mitigating the language priors. The model does not depend on the baseline VQA architecture and can be configured like a plug-in to improve the performance of most existing VQA models. Experimental results show that the proposed model is general and effective, achieving state-of-the-art accuracy on the VQA-CP v2 dataset.
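The abstract does not give the exact form of the language-prior feedback objective. The sketch below illustrates one plausible reading, in which each sample's cross-entropy term is down-weighted in proportion to how confidently a question-only (language-prior) branch already predicts the ground-truth answer. The function name, tensor layout, and the specific weighting rule are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def prior_weighted_vqa_loss(fused_logits, question_only_logits, answer_targets):
    """Hypothetical dynamically weighted VQA loss.

    fused_logits:         [B, A] scores from the full (vision + language) branch
    question_only_logits: [B, A] scores from a question-only branch, used as a
                          proxy for the strength of the language prior
    answer_targets:       [B] ground-truth answer indices
    """
    # Strength of the language prior for each ground-truth answer.
    prior_probs = F.softmax(question_only_logits, dim=1)
    prior_strength = prior_probs.gather(1, answer_targets.unsqueeze(1)).squeeze(1)

    # Down-weight answers the language prior already predicts confidently,
    # so the total loss is not dominated by "easy", prior-driven answers.
    weights = 1.0 - prior_strength.detach()

    per_sample_ce = F.cross_entropy(fused_logits, answer_targets, reduction="none")
    return (weights * per_sample_ce).mean()

# Toy usage
if __name__ == "__main__":
    B, A = 4, 10
    loss = prior_weighted_vqa_loss(torch.randn(B, A), torch.randn(B, A),
                                   torch.randint(0, A, (B,)))
    print(float(loss))
```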
The delayed S-shaped software reliability growth model (SRGM) is one of the non-homogeneous Poisson process (NHPP) models which have been proposed for software reliability assessment. The model is distinctive because it has a mean value function that reflects the delay in failure reporting: there is a delay between failure detection and reporting time. The model captures error detection, isolation, and removal processes, and is thus appropriate for software reliability analysis. Predictive analysis in software testing is useful in modifying, debugging, and determining when to terminate the software development testing process. However, Bayesian predictive analyses on the delayed S-shaped model have not been extensively explored. This paper uses the delayed S-shaped SRGM to address four issues in one-sample prediction associated with the software development testing process. A Bayesian approach based on non-informative priors was used to derive explicit solutions for the four issues, and the developed methodologies were illustrated using real data.
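For reference, the delayed S-shaped SRGM has the standard mean value function m(t) = a(1 - (1 + bt)e^(-bt)), where a is the expected total number of failures and b the detection rate. The sketch below evaluates it and fits (a, b) to grouped cumulative failure counts by maximum likelihood as a frequentist baseline; the paper's Bayesian predictive treatment is not reproduced, and the data values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def mean_value(t, a, b):
    """Delayed S-shaped SRGM: expected cumulative failures by time t."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

def neg_log_likelihood(params, times, counts):
    """NHPP likelihood for cumulative failure counts observed at given times."""
    a, b = np.exp(params)                              # enforce positivity
    m = mean_value(times, a, b)
    increments = np.diff(np.concatenate(([0.0], m)))   # expected failures per interval
    dn = np.diff(np.concatenate(([0], counts)))        # observed failures per interval
    # Poisson increments: dn_i ~ Poisson(m(t_i) - m(t_{i-1})), constants dropped
    return -np.sum(dn * np.log(increments + 1e-12) - increments)

# Illustrative weekly cumulative failure counts
times = np.arange(1, 11, dtype=float)
counts = np.array([2, 6, 13, 21, 30, 37, 42, 45, 47, 48])

res = minimize(neg_log_likelihood, x0=np.log([50.0, 0.5]),
               args=(times, counts), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"a = {a_hat:.1f}, b = {b_hat:.2f}")
```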
AIM To develop a framework to incorporate background domain knowledge into classification rule learning for knowledge discovery in biomedicine. METHODS Bayesian rule learning (BRL) is a rule-based classifier that uses a greedy best-first search over a space of Bayesian belief networks (BN) to find the optimal BN to explain the input dataset, and then infers classification rules from this BN. BRL uses a Bayesian score to evaluate the quality of BNs. In this paper, we extend the Bayesian score to include informative structure priors, which encode our prior domain knowledge about the dataset. We call this extension of BRL BRL_p. The structure prior has a λ hyperparameter that allows the user to tune the degree to which prior knowledge is incorporated into the model learning process. We studied the effect of λ on model learning using a simulated dataset and a real-world lung cancer prognostic biomarker dataset, by measuring the degree of incorporation of our specified prior knowledge. We also monitored its effect on the model's predictive performance. Finally, we compared BRL_p to other state-of-the-art classifiers commonly used in biomedicine. RESULTS We evaluated the degree of incorporation of prior knowledge into BRL_p with simulated data by measuring the Graph Edit Distance between the true data-generating model and the model learned by BRL_p. We specified the true model using informative structure priors. We observed that by increasing the value of λ we were able to increase the influence of the specified structure priors on model learning. A large value of λ caused BRL_p to return the true model. This also led to a gain in predictive performance measured by the area under the receiver operating characteristic curve (AUC). We then obtained a publicly available real-world lung cancer prognostic biomarker dataset and specified a known biomarker from the literature [the epidermal growth factor receptor (EGFR) gene]. We again observed that larger values of λ led to increased incorporation of EGFR into the final BRL_p model. This relevant background knowledge also led to a gain in AUC. CONCLUSION BRL_p enables tunable structure priors to be incorporated during Bayesian classification rule learning, integrating data and knowledge as demonstrated using lung cancer biomarker data.
The Lomax distribution is an important member of the distribution family. In this paper, we systematically develop an objective Bayesian analysis of data from a Lomax distribution. Noninformative priors, including probability matching priors, the maximal data information (MDI) prior, Jeffreys prior, and reference priors, are derived. The propriety of the posterior under each prior is subsequently validated. It is revealed that the MDI prior and one of the reference priors yield improper posteriors, and that the other reference prior is a second-order probability matching prior. A simulation study is conducted to assess the frequentist performance of the proposed Bayesian approach. Finally, this approach, along with the bootstrap method, is applied to a real data set.
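As a concrete reference point, the Lomax density is f(x | α, λ) = (α/λ)(1 + x/λ)^(-(α+1)) for x > 0, with shape α and scale λ. The sketch below evaluates an unnormalized posterior on a grid under the illustrative noninformative prior π(α, λ) ∝ 1/(αλ); this is a generic choice for demonstration, not one of the specific priors derived in the paper.

```python
import numpy as np

def lomax_logpdf(x, alpha, lam):
    """Log-density of the Lomax distribution with shape alpha and scale lam."""
    return np.log(alpha / lam) - (alpha + 1.0) * np.log1p(x / lam)

def log_posterior(alpha, lam, data):
    """Unnormalized log-posterior under the illustrative prior 1/(alpha*lam)."""
    return np.sum(lomax_logpdf(data, alpha, lam)) - np.log(alpha) - np.log(lam)

rng = np.random.default_rng(0)
data = rng.pareto(2.5, size=200) * 3.0        # Lomax(shape=2.5, scale=3.0) samples

alphas = np.linspace(0.5, 6.0, 200)
lams = np.linspace(0.5, 10.0, 200)
logpost = np.array([[log_posterior(a, l, data) for l in lams] for a in alphas])
i, j = np.unravel_index(np.argmax(logpost), logpost.shape)
print(f"posterior mode: alpha = {alphas[i]:.2f}, lambda = {lams[j]:.2f}")
```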
Sparse-view 3D reconstruction has attracted increasing attention with the development of neural implicit 3D representations. Existing methods usually make use of 2D views only, requiring a dense set of input views for accurate 3D reconstruction. In this paper, we show that accurate 3D reconstruction can be achieved by incorporating geometric priors into neural implicit 3D reconstruction. Our method adopts the signed distance function as the 3D representation and learns a generalizable 3D surface reconstruction model from sparse views. Specifically, we build a more effective and sparse feature volume from the input views by using corresponding depth maps, which can be provided by depth sensors or predicted directly from the input views. We recover better geometric details by imposing both depth and surface normal constraints in addition to the color loss when training the neural implicit 3D representation. Experiments demonstrate that our method outperforms state-of-the-art approaches and achieves good generalizability.
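The abstract describes supervising the implicit surface with depth and normal constraints in addition to a color loss. The following is a minimal sketch of such a combined objective; the loss weights, tensor shapes, and specific distance measures are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def geometry_constrained_loss(pred_rgb, gt_rgb, pred_depth, prior_depth,
                              pred_normal, prior_normal, w_d=0.1, w_n=0.05):
    """Color loss plus depth and surface-normal constraints (illustrative weights).

    All tensors are per-ray quantities: rgb [N, 3], depth [N], normal [N, 3].
    """
    color_loss = torch.mean((pred_rgb - gt_rgb) ** 2)
    depth_loss = torch.mean(torch.abs(pred_depth - prior_depth))
    # Penalize angular deviation from the prior normals (assumed unit length).
    normal_loss = torch.mean(1.0 - torch.sum(pred_normal * prior_normal, dim=-1))
    return color_loss + w_d * depth_loss + w_n * normal_loss

# Toy usage with random per-ray predictions
N = 1024
unit = lambda t: torch.nn.functional.normalize(t, dim=-1)
loss = geometry_constrained_loss(torch.rand(N, 3), torch.rand(N, 3),
                                 torch.rand(N), torch.rand(N),
                                 unit(torch.randn(N, 3)), unit(torch.randn(N, 3)))
print(float(loss))
```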
We present GeoGlue, a novel method for accurate feature matching in high-resolution UAV imagery, which is normally challenging due to complicated scenes. Current feature detection methods are performed without the guidance of geometric priors (e.g., geometric lines), so insufficient attention is given to salient geometric features, which are indispensable for accurate matching owing to their stable existence across views. In this work, geometric lines are first detected by a CNN-based geometry detector (GD), which is pre-trained in a self-supervised manner on automatically generated images. Geometric lines are then naturally vectorized based on GD, so that non-significant features can be disregarded, as judged by their disordered geometric morphology. A graph attention network (GAT) is utilized for final feature matching, spanning the image pair with geometric priors informed by GD. Comprehensive experiments show that GeoGlue outperforms other state-of-the-art methods in feature-matching accuracy and performance stability, achieving pose estimation with maximum rotation and translation errors under 1% in challenging scenes from the benchmark datasets Tanks & Temples and ETH3D. This study also proposes the first self-supervised deep-learning model for curved line detection, generating geometric priors for matching so that more attention is put on prominent features, improving the visual effect of 3D reconstruction.
In situ NMR measurements of the diffusion coefficients, including an estimate of signal strength, of a lithium ion conductor are performed in this study using a diffusion-weighting pulse sequence. A cascade bilinear model is proposed to estimate the diffusion sensitivity factors of the pulsed-field gradient using prior information from the electrochemical performance and an Arrhenius constraint. The model postulates that the active lithium nuclei participating in the electrochemical reaction are related to the NMR signal intensity when the discharge rate or temperature condition varies. According to our simulations and experiments, the electrochemical data and the NMR signal strength fit the proposed model closely. Furthermore, the diffusion time is constrained by temperature based on the Arrhenius equation for the temperature dependence of reaction rates. An experimental calculation for Li₄Ti₅O₁₂ (LTO)/carbon nanotubes (CNTs) with the electrolyte evaluated at 20 °C is presented, in which the b factor is estimated from the discharge rate.
We consider sparsity selection for the Cholesky factor L of the inverse covariance matrix in high-dimensional Gaussian DAG models. The sparsity is induced over the space of L via non-local priors, namely the product moment (pMOM) prior [Johnson, V., & Rossell, D. (2012). Bayesian model selection in high-dimensional settings. Journal of the American Statistical Association, 107(498), 649-660. https://doi.org/10.1080/01621459.2012.682536] and the hierarchical hyper-pMOM prior [Cao, X., Khare, K., & Ghosh, M. (2020). High-dimensional posterior consistency for hierarchical non-local priors in regression. Bayesian Analysis, 15(1), 241-262. https://doi.org/10.1214/19-BA1154]. We establish model selection consistency for the Cholesky factor under more relaxed conditions than those in the literature and implement an efficient MCMC algorithm for selecting the sparsity pattern of each column of L in parallel. We demonstrate the validity of our theoretical results via numerical simulations, and use further simulations to demonstrate that our sparsity selection approach is competitive with existing methods.
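The defining feature of a pMOM prior is that it is "non-local": its density vanishes at zero, so coefficients near the null value are penalized more aggressively than under a Gaussian prior. For the first-order (r = 1) case the density is π(β | τ, σ²) = (β²/(τσ²)) · N(β; 0, τσ²). The sketch below simply evaluates it to show this behaviour; the τ and σ² values are arbitrary, and the hierarchical hyper-pMOM extension is not reproduced.

```python
import numpy as np

def pmom_density(beta, tau=1.0, sigma2=1.0):
    """Product moment (pMOM) non-local prior density for r = 1.

    Equals (beta^2 / (tau*sigma2)) times a N(0, tau*sigma2) density,
    so it is exactly zero at beta = 0.
    """
    v = tau * sigma2
    normal = np.exp(-beta**2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)
    return (beta**2 / v) * normal

beta = np.linspace(-4, 4, 9)
print(np.round(pmom_density(beta), 4))        # zero exactly at beta = 0
```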
Plug-and-play priors are popular for solving ill-posed imaging inverse problems. Recent efforts indicate that the convergence guarantees of imaging algorithms using plug-and-play priors rely on the assumption of bounded denoisers. However, the boundedness of existing plugged Gaussian denoisers has not been proven explicitly. To bridge this gap, we present a novel provably bounded denoiser, termed BMDual, which combines a trainable denoiser using dual tight frames with the well-known block-matching and 3D filtering (BM3D) denoiser. We incorporate the multiple dual frames utilized by BMDual into a novel regularization model induced by a solver. The proposed regularization model is applied to compressed sensing magnetic resonance imaging (CSMRI). We theoretically establish the bound of the BMDual denoiser and the bounded gradient of the CSMRI data-fidelity function, and further demonstrate that the proposed CSMRI algorithm converges. Experimental results also demonstrate that the proposed algorithm has good convergence behavior and show its effectiveness.
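For orientation, plug-and-play methods replace the proximal operator of an explicit regularizer with an off-the-shelf denoiser inside an iterative solver. The sketch below shows the generic plug-and-play proximal-gradient iteration for a CSMRI-style problem y = M F x + noise, with a simple Gaussian-smoothing denoiser standing in for the paper's BMDual denoiser; the phantom, sampling mask, and step size are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_pgm(y, mask, denoise, step=1.0, iters=50):
    """Plug-and-play proximal gradient for masked-Fourier recovery.

    y:       undersampled k-space data (same shape as image, zeros off-mask)
    mask:    boolean sampling mask in k-space
    denoise: callable playing the role of the proximal operator / prior
    """
    x = np.real(np.fft.ifft2(y))                       # zero-filled start
    for _ in range(iters):
        residual = mask * (np.fft.fft2(x) - y)
        # Data-consistency correction (a scaled gradient of the data-fidelity term).
        correction = np.real(np.fft.ifft2(residual))
        x = denoise(x - step * correction)             # plugged denoiser as prior
    return x

rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0      # toy phantom
mask = rng.random((64, 64)) < 0.4                      # 40% random sampling
y = mask * np.fft.fft2(img)

recon = pnp_pgm(y, mask, denoise=lambda z: gaussian_filter(z, sigma=1.0))
print("relative reconstruction error:",
      np.linalg.norm(recon - img) / np.linalg.norm(img))
```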
A crowdsourcing experiment in which viewers (the "crowd") of a British Broadcasting Corporation (BBC) television show submitted estimates of the number of coins in a tumbler was shown in an antecedent paper (Part 1) to follow a log-normal distribution Λ(m, s²). The coin-estimation experiment is an archetype of a broad class of image analysis and object counting problems suitable for solution by crowdsourcing. The objective of the current paper (Part 2) is to determine the location and scale parameters (m, s) of Λ(m, s²) by both Bayesian and maximum likelihood (ML) methods and to compare the results. One outcome of the analysis is the resolution, by means of Jeffreys' rule, of questions regarding the appropriate Bayesian prior. It is shown that Bayesian and ML analyses lead to the same expression for the location parameter, but different expressions for the scale parameter, which become identical in the limit of an infinite sample size. A second outcome of the analysis concerns use of the sample mean as the measure of the information of the crowd in applications where the distribution of responses is not sought or known. In the coin-estimation experiment, the sample mean was found to differ widely from the mean number of coins calculated from Λ(m, s²). This discordance raises critical questions concerning whether, and under what conditions, the sample mean provides a reliable measure of the information of the crowd. This paper resolves that problem by use of the principle of maximum entropy (PME). The PME yields a set of equations for finding the most probable distribution consistent with given prior information and only that information. If there is no solution to the PME equations for a specified sample mean and sample variance, then the sample mean is an unreliable statistic, since no measure can be assigned to its uncertainty. Parts 1 and 2 together demonstrate that the information content of crowdsourcing resides in the distribution of responses (very often log-normal in form), which can be obtained empirically or by appropriate modeling.
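For a log-normal sample, the ML estimates of the location and scale parameters are simply the mean and (population-style) standard deviation of the log responses, and the mean of the fitted distribution is exp(m + s²/2), which can differ substantially from the raw sample mean when s is large. The short sketch below illustrates this on synthetic "crowd" estimates; the actual BBC data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
estimates = rng.lognormal(mean=7.0, sigma=0.8, size=2000)   # synthetic crowd guesses

logs = np.log(estimates)
m_hat = logs.mean()                    # ML estimate of the location parameter m
s_hat = logs.std(ddof=0)               # ML estimate of the scale parameter s

fitted_mean = np.exp(m_hat + 0.5 * s_hat**2)   # mean of the fitted log-normal
print(f"m = {m_hat:.3f}, s = {s_hat:.3f}")
print(f"raw sample mean   = {estimates.mean():.1f}")
print(f"log-normal mean   = {fitted_mean:.1f}")
print(f"median, exp(m)    = {np.exp(m_hat):.1f}")
```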
The Goel-Okumoto software reliability model, also known as the exponential non-homogeneous Poisson process (NHPP) model, is one of the earliest software reliability models to be proposed. From the literature, it is evident that most of the work on the Goel-Okumoto software reliability model concerns parameter estimation using the MLE method and model fit. It is widely known that predictive analysis is very useful for modifying, debugging, and determining when to terminate the software development testing process. However, there is a conspicuous absence of literature on both classical and Bayesian predictive analyses for the model. This paper presents some results on predictive analyses for the Goel-Okumoto software reliability model. Driven by the requirement of highly reliable software used in computers embedded in automotive, mechanical, and safety control systems, industrial and quality process control, real-time sensor networks, aircraft, and nuclear reactors, among others, we address four issues in single-sample prediction associated closely with the software development process. We adopt Bayesian methods based on non-informative priors to develop explicit solutions to these problems. An example with real data in the form of times between software failures is used to illustrate the developed methodologies.
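For context, the Goel-Okumoto model has mean value function m(t) = a(1 - e^(-bt)) and intensity λ(t) = ab·e^(-bt); given exact failure times t_1 < ... < t_n observed over (0, T], the NHPP log-likelihood is Σ log λ(t_i) - m(T). The sketch below fits (a, b) by maximum likelihood on illustrative failure times; the paper's Bayesian single-sample predictive results are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, T):
    """Goel-Okumoto NHPP: -log L(a, b) for exact failure times t in (0, T]."""
    a, b = np.exp(params)                       # keep a, b positive
    intensity = a * b * np.exp(-b * t)          # lambda(t) = a*b*exp(-b*t)
    m_T = a * (1.0 - np.exp(-b * T))            # expected failures by time T
    return -(np.sum(np.log(intensity)) - m_T)

# Illustrative failure times (hours) over a test window of T = 250 hours
t = np.array([9, 21, 32, 36, 43, 45, 50, 58, 63, 70, 71, 77, 78, 87,
              91, 92, 95, 98, 104, 105, 116, 149, 156, 247], dtype=float)
T = 250.0

res = minimize(neg_loglik, x0=np.log([30.0, 0.01]), args=(t, T), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"a = {a_hat:.1f} expected total failures, b = {b_hat:.4f} per hour")
```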
Bayesian quantile regression has recently drawn attention in widespread applications. Yu and Moyeed (2001) proposed an asymmetric Laplace distribution to provide a likelihood-based mechanism for Bayesian inference in quantile regression models. In this work, the primary objective is to evaluate the performance of Bayesian quantile regression compared with simple regression and quantile regression through simulation, with an application to a crime dataset from the 50 US states for assessing the effect of potential risk factors on the violent crime rate. This paper also explores improper priors and conducts sensitivity analysis on the parameter estimates. The data analysis reveals that the percentage of the population that are single parents always has a significant positive influence on the occurrence of violent crimes, and Bayesian quantile regression provides a more comprehensive statistical description of this association.
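The link the abstract relies on is that maximizing an asymmetric Laplace likelihood with skewness τ is equivalent to minimizing the quantile check loss ρ_τ(u) = u(τ - 1{u<0}). The sketch below fits a τ-th conditional quantile by minimizing that loss on simulated heteroscedastic data; a full Bayesian treatment would instead place priors on the coefficients and sample the posterior, which is not shown here.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile check loss, the negative log-kernel of the asymmetric Laplace."""
    return np.sum(u * (tau - (u < 0).astype(float)))

def fit_quantile_regression(X, y, tau):
    """Minimize the check loss of residuals y - X @ beta for quantile tau."""
    beta0 = np.zeros(X.shape[1])
    res = minimize(lambda b: check_loss(y - X @ b, tau), beta0, method="Nelder-Mead")
    return res.x

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.normal(0, 1 + 0.3 * x)      # heteroscedastic noise
X = np.column_stack([np.ones(n), x])

for tau in (0.25, 0.5, 0.75):
    beta = fit_quantile_regression(X, y, tau)
    print(f"tau={tau}: intercept={beta[0]:.2f}, slope={beta[1]:.2f}")
```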
The smoothness prior approach to spectral smoothing is investigated using Fourier frequency filter analysis. We show that the regularization parameter in penalized least squares can continuously control the bandwidth of the low-pass filter. In addition, owing to its property of interpolating missing values automatically and smoothly, a spectral baseline correction algorithm based on this approach is proposed. The algorithm comprises spectral peak detection and baseline estimation. First, the spectral peak regions are detected and identified according to the second derivatives. Then, a generalized smoothness prior approach combined with the identification information estimates the baseline in the peak regions. Results with both simulated and real spectra show that this method yields accurate baseline-corrected signals.
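The penalized least squares the abstract refers to is the classic Whittaker-style smoother: minimize Σ w_i (y_i - z_i)² + λ ||D z||², where D is a difference operator, with solution z = (W + λ DᵀD)⁻¹ W y. Setting a point's weight to zero makes the smoother interpolate across it, which is what allows peak regions to be bridged when estimating the baseline. A minimal sketch with an assumed toy spectrum and an assumed, already-detected peak region:

```python
import numpy as np

def smoothness_prior_smooth(y, lam, weights=None, order=2):
    """Penalized least squares smoother: solve (W + lam * D'D) z = W y."""
    n = len(y)
    D = np.diff(np.eye(n), n=order, axis=0)   # order-th difference operator
    w = np.ones(n) if weights is None else np.asarray(weights, dtype=float)
    W = np.diag(w)
    return np.linalg.solve(W + lam * D.T @ D, w * y)

# Toy spectrum: slowly varying baseline + one Gaussian peak + noise
x = np.linspace(0, 100, 500)
baseline = 0.02 * x + 2.0
spectrum = baseline + 5.0 * np.exp(-0.5 * ((x - 60) / 2.0) ** 2)
spectrum += np.random.default_rng(0).normal(0, 0.05, x.size)

# Zero-weight the (assumed already detected) peak region so the smoother
# interpolates the baseline underneath it.
weights = np.ones_like(x)
weights[(x > 50) & (x < 70)] = 0.0
est_baseline = smoothness_prior_smooth(spectrum, lam=1e5, weights=weights)
corrected = spectrum - est_baseline
print("max baseline error:", np.abs(est_baseline - baseline).max())
```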
The Goel-Okumoto software reliability model is one of the earliest attempts to use a non-homogeneous Poisson process to model failure times observed during a software test interval. The model is known as the exponential NHPP model because it describes an exponential software failure curve. Parameter estimation, model fit, and predictive analyses based on one sample have been conducted for the Goel-Okumoto software reliability model. However, predictive analyses based on two samples have not been conducted for the model. In two-sample prediction, the parameters and characteristics of the first sample are used to analyze and make predictions for the second sample. This helps in saving time and resources during the software development process. This paper presents some results on predictive analyses for the Goel-Okumoto software reliability model based on two samples. We address three issues in two-sample prediction associated closely with the software development testing process. Bayesian methods based on non-informative priors are adopted to develop solutions to these issues. The developed methodologies are illustrated with two sets of software failure data simulated from the Goel-Okumoto software reliability model.
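A convenient way to simulate failure data from the Goel-Okumoto model, as the abstract does to illustrate its two-sample methodology, exploits the fact that the total number of failures over an unbounded horizon is Poisson(a) and that, given this count, the failure times are i.i.d. Exponential(b). The sketch below generates two such samples with assumed parameter values.

```python
import numpy as np

def simulate_goel_okumoto(a, b, rng):
    """Simulate one realization of failure times from the Goel-Okumoto NHPP.

    Over an unbounded horizon the total failure count is Poisson(a) and,
    conditionally on that count, failure times are i.i.d. Exponential(b).
    """
    n = rng.poisson(a)
    times = rng.exponential(scale=1.0 / b, size=n)
    return np.sort(times)

rng = np.random.default_rng(42)
a, b = 40.0, 0.02                      # assumed illustrative parameters

sample1 = simulate_goel_okumoto(a, b, rng)   # "first" sample (used for fitting)
sample2 = simulate_goel_okumoto(a, b, rng)   # "second" sample (to be predicted)
print(len(sample1), "failures in sample 1;", len(sample2), "in sample 2")
print("first five failure times of sample 1:", np.round(sample1[:5], 1))
```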
Yin [1] developed a new Bayesian measure of evidence for testing a point null hypothesis that agrees with the frequentist p-value, thereby resolving Lindley's paradox. Yin and Li [2] extended the methodology of Yin [1] to the Behrens-Fisher problem by assigning Jeffreys' independent prior to the nuisance parameters. In this paper, we show both analytically and through the results of simulation studies that the methodology of Yin [1] simultaneously solves the Behrens-Fisher problem and Lindley's paradox when a Gamma prior is assigned to the nuisance parameters.
The likelihood function plays a central role in statistical analysis in relation to information, from both frequentist and Bayesian perspectives. Several new large-sample properties of the likelihood in relation to information are developed here. The Arrow-Pratt absolute risk aversion measure is shown to be related to the Cramér-Rao information bound. The derivative of the log-likelihood function is seen to provide a measure of information-related stability for the Bayesian posterior density. In addition, information-similar prior densities can be defined, reflecting the central role of the likelihood in the Bayesian learning paradigm.
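For reference, the standard definitions the abstract relates are given below; the paper's precise connection between them is not reproduced here, only the textbook quantities.

```latex
% Arrow-Pratt absolute risk aversion of a twice-differentiable function u:
A(w) \;=\; -\,\frac{u''(w)}{u'(w)} .

% Fisher information of a model with log-likelihood \ell(\theta) per observation:
I(\theta) \;=\; \mathbb{E}_\theta\!\left[-\,\frac{\partial^2 \ell(\theta)}{\partial \theta^2}\right]
          \;=\; \mathbb{E}_\theta\!\left[\left(\frac{\partial \ell(\theta)}{\partial \theta}\right)^{\!2}\right].

% Cramér-Rao bound for an unbiased estimator \hat\theta from n i.i.d. observations:
\operatorname{Var}(\hat\theta) \;\ge\; \frac{1}{n\, I(\theta)} .
```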
You are what you eat (diet) and where you eat (trophic level) in the food web. The relative abundance of pairs of stable isotopes of the organic elements carbon (e.g., the isotope ratio of ¹³C vs ¹²C), nitrogen, and sulfur, among others, in the tissues of a consumer reflects a weighted average of the isotope ratios in the sources it consumes, after some corrections for the processes of digestion and assimilation. We extended a Bayesian mixing model to infer the trophic positions of consumer organisms in a food web in addition to the degree to which distinct resource pools (diet sources) support consumers. The novel features of this work include: 1) trophic level estimation (vertical position in the food web) and 2) a Bayesian exposition of a biologically realistic model [1] including stable isotope ratios and concentrations of carbon, nitrogen, and sulfur, isotopic fractionations, elemental assimilation efficiencies, and extensive use of prior information. We discuss issues of parameter identifiability in the most complex and realistic model. We apply our model to simulated data and to bottlenose dolphins (Tursiops truncatus) feeding on several numerically abundant fish species, which in turn feed on other fish and primary-producing plants and algae present in St. George Sound, FL, USA. Finally, we discuss extensions from other work that apply to this model and three important general ecological applications. Online supplementary materials include data, OpenBUGS scripts, and simulation details.
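The underlying bookkeeping can be illustrated with the standard deterministic mixing and trophic-enrichment equations: a consumer's isotope signature is roughly the diet-proportion-weighted average of its sources' signatures plus a trophic discrimination factor, and trophic level is commonly estimated as TL = λ_base + (δ¹⁵N_consumer − δ¹⁵N_base) / Δ¹⁵N with Δ¹⁵N ≈ 3.4‰. The sketch below applies these simple versions with made-up values; the paper's full Bayesian model additionally handles elemental concentrations, assimilation efficiencies, and priors.

```python
import numpy as np

def consumer_signature(source_deltas, diet_props, discrimination):
    """Weighted-average mixing with a per-isotope trophic discrimination factor."""
    return np.asarray(source_deltas).T @ np.asarray(diet_props) + discrimination

def trophic_level(d15n_consumer, d15n_base, base_level=2.0, enrichment=3.4):
    """Standard trophic-level estimate from nitrogen isotope enrichment."""
    return base_level + (d15n_consumer - d15n_base) / enrichment

# Hypothetical delta values (rows: sources; columns: d13C, d15N), in per mil
source_deltas = np.array([[-18.0, 9.0],     # source fish A
                          [-14.0, 11.0]])   # source fish B
diet_props = np.array([0.7, 0.3])           # assumed diet proportions
discrimination = np.array([1.0, 3.4])       # d13C, d15N trophic enrichment

consumer = consumer_signature(source_deltas, diet_props, discrimination)
print("predicted consumer (d13C, d15N):", consumer)
print("trophic level:", round(trophic_level(consumer[1], d15n_base=6.0), 2))
```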
When the event of interest never occurs for a proportion of subjects during the study period, survival models with a cure fraction are more appropriate for analyzing this type of data. Considering the non-linear relationship between the response variable and covariates, we propose a class of generalized transformation models motivated by the transformed proportional time cure model of Zeng et al. [1], in which fractional polynomials are used instead of a simple linear combination of the covariates. Statistical properties of the proposed models are investigated, including identifiability of the parameters, asymptotic consistency, and asymptotic normality of the estimated regression coefficients. A simulation study is carried out to examine the performance of the power selection procedure. The generalized transformation cure rate models are applied to the First National Health and Nutrition Examination Survey Epidemiologic Follow-up Study (NHANES1) to examine the relationship between the survival time of patients and several risk factors.
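Fractional polynomials replace a linear covariate term with one or two power transforms conventionally drawn from the set {-2, -1, -0.5, 0, 0.5, 1, 2, 3}, with power 0 interpreted as log x and a repeated power handled as x^p and x^p·log x. The sketch below generates an FP2 design column pair for an assumed choice of powers; the power selection procedure the abstract evaluates would search over all candidate pairs.

```python
import numpy as np
from itertools import combinations_with_replacement

POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)     # conventional fractional-polynomial set

def fp_term(x, p):
    """Single fractional-polynomial transform; power 0 means log(x)."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if p == 0 else x ** p

def fp2_design(x, p1, p2):
    """Two-term fractional polynomial; a repeated power uses x^p and x^p * log(x)."""
    first = fp_term(x, p1)
    second = first * np.log(np.asarray(x, dtype=float)) if p1 == p2 else fp_term(x, p2)
    return np.column_stack([first, second])

x = np.linspace(0.5, 10.0, 8)                 # covariate must be positive
print(fp2_design(x, p1=-0.5, p2=2).shape)     # one assumed power pair
print(len(list(combinations_with_replacement(POWERS, 2))), "candidate FP2 pairs")
```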
In any side-channel attack, it is desirable to exploit all the available leakage data to compute the distinguisher's values. The profiling phase is essential to obtain an accurate leakage model, yet it may not be exhaustive. As a result, information theoretic distinguishers may encounter previously unseen data, a phenomenon yielding empty bins. A strict application of the maximum likelihood method yields a distinguisher that is not even sound. Ignoring empty bins re-establishes soundness, but seriously limits performance in terms of success rate. The purpose of this paper is to remedy this situation. We propose six different techniques to improve the performance of information theoretic distinguishers. We study them thoroughly by applying them to timing attacks, both with synthetic and real leakages. Namely, we compare them in terms of success rate, and show that their performance depends on the amount of profiling and can be explained by a bias-variance analysis. The result of our work is that there exist use cases, especially when measurements are noisy, where our novel information theoretic distinguishers (typically the soft-drop distinguisher) perform best compared to known side-channel distinguishers, despite the empty-bin situation.
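The empty-bin problem can be seen directly in a profiled, histogram-based maximum-likelihood distinguisher: if an attack-phase leakage value falls in a bin never observed during profiling for some key hypothesis, the estimated probability is zero and the log-likelihood of that hypothesis collapses to minus infinity. The toy sketch below reproduces that failure and one naive patch (additive smoothing); the paper's six techniques, including the soft-drop distinguisher, are not reproduced, and the leakage model here is entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def profile(key, n):
    """Histogram-based leakage model for one key hypothesis (toy timing leakage)."""
    leak = rng.normal(loc=key % 8, scale=1.5, size=n).astype(int) % 16
    hist = np.bincount(leak, minlength=16).astype(float)
    return hist / hist.sum()

def log_likelihood(model, attack_leaks, eps=0.0):
    """Strict ML when eps=0 (empty bins give -inf); additive smoothing when eps>0."""
    p = (model + eps) / (model + eps).sum()
    with np.errstate(divide="ignore"):
        return np.sum(np.log(p[attack_leaks]))

models = {k: profile(k, n=200) for k in range(4)}            # sparse profiling
attack_leaks = rng.normal(loc=2, scale=1.5, size=50).astype(int) % 16

for k, m in models.items():
    strict = log_likelihood(m, attack_leaks)                  # may be -inf
    smoothed = log_likelihood(m, attack_leaks, eps=1e-3)
    print(f"key {k}: strict ML = {strict:.1f}, smoothed = {smoothed:.1f}")
```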
Optical flow estimation in human facial video, which provides 2D correspondences between adjacent frames, is a fundamental pre-processing step for many applications, such as facial expression capture and recognition. However, it is quite challenging because human facial images contain large areas of similar textures, rich expressions, and large rotations. These characteristics also result in the scarcity of large, annotated real-world datasets. We propose a robust and accurate method to learn facial optical flow in a self-supervised manner. Specifically, we utilize various shape priors, including face depth, landmarks, and parsing, to guide the self-supervised learning task via a differentiable non-rigid registration framework. Extensive experiments demonstrate that our method achieves remarkable improvements for facial optical flow estimation in the presence of significant expressions and large rotations.