Journal Articles
24 articles found
1. Deep Multi-Module Based Language Priors Mitigation Model for Visual Question Answering
Authors: 于守健, 金学勤, 吴国文, 石秀金, 张红. Journal of Donghua University (English Edition), CAS, 2023, No. 6, pp. 684-694 (11 pages).
The original intention of visual question answering (VQA) models is to infer the answer from the information in the visual image that is relevant to the question text, but many VQA models yield answers biased by prior knowledge, especially language priors. This paper proposes a mitigation model called language priors mitigation-VQA (LPM-VQA) for the language priors problem in VQA models, which divides language priors into positive and negative ones. Different network branches capture and process the different priors in order to mitigate them. A dynamically changing language-prior feedback objective function is designed using the intermediate results of some modules in the VQA model. The weight of the loss value for each answer is set dynamically according to the strength of its language priors, balancing its proportion in the total VQA loss and further mitigating the priors. The model does not depend on the baseline VQA architecture and can be configured like a plug-in to improve the performance of most existing VQA models. Experimental results show that the proposed model is general and effective, achieving state-of-the-art accuracy on the VQA-CP v2 dataset.
Keywords: visual question answering (VQA); language priors; natural language processing; multimodal fusion; computer vision
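The abstract describes the dynamic loss reweighting only at a high level; as a loose illustration (the weighting function below is a hypothetical stand-in, not the authors' formula), a per-answer loss can be down-weighted as its estimated language-prior strength grows:

```python
def prior_weighted_loss(losses, prior_strengths):
    """Illustrative reweighting: each answer's loss is scaled by a weight that
    shrinks as its language-prior strength (a value in [0, 1)) grows, so
    answers the model could guess from the question text alone contribute
    less to the total loss. Weights are normalized to keep the scale stable."""
    assert len(losses) == len(prior_strengths)
    weights = [1.0 - s for s in prior_strengths]
    total_w = sum(weights)
    return sum(w * l for w, l in zip(weights, losses)) / total_w
```

With all prior strengths equal, this reduces to the ordinary mean loss; a strongly prior-biased answer contributes proportionally less.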
2. One-Sample Bayesian Predictive Analyses for a Nonhomogeneous Poisson Process with Delayed S-Shaped Intensity Function Using Non-Informative Priors
Authors: Otieno Collins, Orawo Luke Akong'o, Matiri George Munene. Open Journal of Statistics, 2023, No. 5, pp. 717-733 (17 pages).
The delayed S-shaped software reliability growth model (SRGM) is one of the non-homogeneous Poisson process (NHPP) models proposed for software reliability assessment. The model is distinctive because its mean value function reflects the delay between failure detection and reporting. It captures the error detection, isolation, and removal processes, and is therefore appropriate for software reliability analysis. Predictive analysis in software testing is useful for modifying, debugging, and deciding when to terminate the testing process; however, Bayesian predictive analyses of the delayed S-shaped model have not been extensively explored. This paper uses the delayed S-shaped SRGM to address four one-sample prediction issues associated with the software development testing process. A Bayesian approach based on non-informative priors is used to derive explicit solutions to the four issues, and the developed methodologies are illustrated using real data.
Keywords: failure intensity; non-informative priors; software reliability model; Bayesian approach; non-homogeneous Poisson process
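The delayed S-shaped model referred to above is conventionally given the mean value function m(t) = a(1 - (1 + bt)e^(-bt)), with a the expected total number of faults and b the fault-detection rate; its intensity is the derivative, lambda(t) = a b^2 t e^(-bt). A minimal sketch (parameter values here are illustrative):

```python
import math

def mean_failures(t, a, b):
    """Delayed S-shaped mean value function m(t) = a * (1 - (1 + b*t) * exp(-b*t))."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

def intensity(t, a, b):
    """Failure intensity lambda(t) = m'(t) = a * b**2 * t * exp(-b*t)."""
    return a * b * b * t * math.exp(-b * t)
```

The expected number of failures in an interval (t1, t2] is m(t2) - m(t1), which is the basic quantity one-sample prediction manipulates.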
3. Tunable structure priors for Bayesian rule learning for knowledge integrated biomarker discovery
Authors: Jeya Balaji Balasubramanian, Vanathi Gopalakrishnan. World Journal of Clinical Oncology, CAS, 2018, No. 5, pp. 98-109 (12 pages).
AIM: To develop a framework to incorporate background domain knowledge into classification rule learning for knowledge discovery in biomedicine.
METHODS: Bayesian rule learning (BRL) is a rule-based classifier that uses a greedy best-first search over a space of Bayesian belief networks (BNs) to find the optimal BN to explain the input dataset, and then infers classification rules from this BN. BRL uses a Bayesian score to evaluate the quality of BNs. In this paper, we extended the Bayesian score to include informative structure priors, which encode our prior domain knowledge about the dataset. We call this extension of BRL BRL_p. The structure prior has a λ hyperparameter that lets the user tune the degree to which prior knowledge is incorporated during model learning. We studied the effect of λ on model learning using a simulated dataset and a real-world lung cancer prognostic biomarker dataset, measuring the degree of incorporation of our specified prior knowledge and monitoring the effect on predictive performance. Finally, we compared BRL_p to other state-of-the-art classifiers commonly used in biomedicine.
RESULTS: We evaluated the degree of incorporation of prior knowledge into BRL_p on simulated data by measuring the graph edit distance between the true data-generating model and the model learned by BRL_p, specifying the true model via informative structure priors. Increasing λ increased the influence of the specified structure priors on model learning, and a large value of λ caused BRL_p to recover the true model, which also led to a gain in predictive performance measured by area under the receiver operating characteristic curve (AUC). We then obtained a publicly available real-world lung cancer prognostic biomarker dataset and specified a known biomarker from the literature [the epidermal growth factor receptor (EGFR) gene]. Again, larger values of λ led to increased incorporation of EGFR into the final BRL_p model, and this relevant background knowledge also led to a gain in AUC.
CONCLUSION: BRL_p enables tunable structure priors to be incorporated during Bayesian classification rule learning, integrating data and knowledge as demonstrated using lung cancer biomarker data.
Keywords: supervised machine learning; rule-based models; Bayesian methods; background knowledge; informative priors; biomarker discovery
4. Bayesian analysis for the Lomax model using noninformative priors [Cited by 1]
Authors: Daojiang He, Dongchu Sun, Qing Zhu. Statistical Theory and Related Fields, CSCD, 2023, No. 1, pp. 61-68 (8 pages).
The Lomax distribution is an important member of the distribution family. In this paper, we systematically develop an objective Bayesian analysis of data from a Lomax distribution. Noninformative priors, including probability matching priors, the maximal data information (MDI) prior, the Jeffreys prior, and reference priors, are derived, and the propriety of the posterior under each prior is subsequently validated. It is shown that the MDI prior and one of the reference priors yield improper posteriors, and that the other reference prior is a second-order probability matching prior. A simulation study assesses the frequentist performance of the proposed Bayesian approach. Finally, this approach, along with the bootstrap method, is applied to a real data set.
Keywords: Lomax model; probability matching priors; MDI prior; Jeffreys prior; reference priors; posterior propriety
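For reference, the Lomax (Pareto type II) density that all the priors above are built on is f(x) = (α/λ)(1 + x/λ)^-(α+1) for x ≥ 0, with shape α and scale λ; a small sketch of the density and the log-likelihood that both the ML and Bayesian analyses use (parameterization as commonly written; check against the paper before reuse):

```python
import math

def lomax_pdf(x, alpha, lam):
    """Lomax density f(x) = (alpha/lam) * (1 + x/lam)**-(alpha+1), x >= 0."""
    return (alpha / lam) * (1.0 + x / lam) ** (-(alpha + 1.0))

def lomax_loglik(data, alpha, lam):
    """Log-likelihood of a Lomax(alpha, lam) sample."""
    n = len(data)
    return n * (math.log(alpha) - math.log(lam)) - (alpha + 1.0) * sum(
        math.log(1.0 + x / lam) for x in data)
```

The corresponding CDF is F(x) = 1 - (1 + x/λ)^-α, which is convenient for checking the density numerically.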
5. Neural 3D reconstruction from sparse views using geometric priors
Authors: Tai-Jiang Mu, Hao-Xiang Chen, Jun-Xiong Cai, Ning Guo. Computational Visual Media, SCIE EI CSCD, 2023, No. 4, pp. 687-697 (11 pages).
Sparse-view 3D reconstruction has attracted increasing attention with the development of neural implicit 3D representations. Existing methods usually make use of 2D views only, requiring a dense set of input views for accurate 3D reconstruction. In this paper, we show that accurate 3D reconstruction can be achieved by incorporating geometric priors into neural implicit 3D reconstruction. Our method adopts the signed distance function as the 3D representation and learns a generalizable 3D surface reconstruction model from sparse views. Specifically, we build a more effective and sparse feature volume from the input views using corresponding depth maps, which can be provided by depth sensors or predicted directly from the input views. We recover better geometric detail by imposing both depth and surface normal constraints, in addition to the color loss, when training the neural implicit 3D representation. Experiments demonstrate that our method outperforms state-of-the-art approaches and achieves good generalizability.
Keywords: sparse views; 3D reconstruction; volume rendering; geometric priors; neural implicit 3D representation
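A signed distance function, the 3D representation adopted above, maps each point to its distance from the surface, negative inside and positive outside; the analytic sphere case below illustrates the convention (a neural SDF replaces this closed form with a learned network):

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside, zero on
    the surface, positive outside -- the convention neural SDF models learn."""
    return math.dist(p, center) - radius
```

The reconstructed surface is the zero level set {p : sdf(p) = 0}, which is what depth and normal constraints supervise during training.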
6. GeoGlue: feature matching with self-supervised geometric priors for high-resolution UAV images
Authors: Weijia Bei, Xiangtao Fan, Hongdeng Jian, Xiaoping Du, Dongmei Yan. International Journal of Digital Earth, SCIE EI, 2023, No. 1, pp. 1246-1275 (30 pages).
We present GeoGlue, a novel method for accurate feature matching in high-resolution UAV imagery, which is normally challenging due to the complicated scenes. Current feature detection methods operate without the guidance of geometric priors (e.g., geometric lines), paying too little attention to salient geometric features that are indispensable for accurate matching because of their stable existence across views. In this work, geometric lines are first detected by a CNN-based geometry detector (GD) pre-trained in a self-supervised manner on automatically generated images. The detected lines are then naturally vectorized based on GD, so that non-significant features can be discarded according to their disordered geometric morphology. A graph attention network (GAT) is used for final feature matching, spanning the image pair with geometric priors informed by GD. Comprehensive experiments show that GeoGlue outperforms other state-of-the-art methods in feature-matching accuracy and performance stability, achieving pose estimation with maximum rotation and translation errors under 1% in challenging scenes from the benchmark datasets Tanks & Temples and ETH3D. This study also proposes the first self-supervised deep-learning model for curved-line detection, generating geometric priors for matching so that more attention is put on prominent features, improving the visual quality of 3D reconstruction.
Keywords: feature matching; geometric priors; self-supervised learning; graph attention network; 3D reconstruction; digital earth
7. In situ NMR diffusion coefficients assessment of lithium ion conductor using electrochemical priors and Arrhenius constraint: A computational study [Cited by 1]
Authors: Liang Deng, Wen-Hui Yang, Xing Lyu, Shu-Feng Wei, Zheng Wang, Hui-Xian Wang. Chinese Chemical Letters, SCIE CAS CSCD, 2017, No. 2, pp. 362-366 (5 pages).
In situ NMR measurements of the diffusion coefficients of a lithium-ion conductor, including an estimate of signal strength, are performed in this study using a diffusion-weighting pulse sequence. A cascade bilinear model is proposed to estimate the diffusion sensitivity factors of the pulsed-field gradient using prior information from the electrochemical performance and an Arrhenius constraint. The model postulates that the active lithium nuclei participating in the electrochemical reaction are related to the NMR signal intensity when the discharge rate or temperature is varied. According to our simulations and experiments, the electrochemical data and the NMR signal strength fit the proposed model well. Furthermore, the diffusion time is constrained by temperature through the Arrhenius equation for the temperature dependence of reaction rates. An experimental calculation for Li4Ti5O12 (LTO)/carbon nanotube (CNT) electrodes with electrolyte, evaluated at 20 °C, is presented, in which the b factor is estimated from the discharge rate.
Keywords: lithium-ion conductor; diffusion coefficient; nuclear magnetic resonance; pulsed-field gradient; electrochemical priors
8. On the non-local priors for sparsity selection in high-dimensional Gaussian DAG models
Authors: Xuan Cao, Fang Yang. Statistical Theory and Related Fields, 2021, No. 4, pp. 332-345 (14 pages).
We consider sparsity selection for the Cholesky factor L of the inverse covariance matrix in high-dimensional Gaussian DAG models. Sparsity is induced over the space of L via non-local priors, namely the product moment (pMOM) prior [Johnson, V., & Rossell, D. (2012). Bayesian model selection in high-dimensional settings. Journal of the American Statistical Association, 107(498), 649-660. https://doi.org/10.1080/01621459.2012.682536] and the hierarchical hyper-pMOM prior [Cao, X., Khare, K., & Ghosh, M. (2020). High-dimensional posterior consistency for hierarchical non-local priors in regression. Bayesian Analysis, 15(1), 241-262. https://doi.org/10.1214/19-BA1154]. We establish model selection consistency for the Cholesky factor under more relaxed conditions than those in the literature and implement an efficient MCMC algorithm that selects the sparsity pattern of each column of L in parallel. We demonstrate the validity of our theoretical results via numerical simulations, and use further simulations to show that our sparsity selection approach is competitive with existing methods.
Keywords: Bayesian DAG models; non-local priors; high-dimensional data; posterior consistency; graph selection
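What makes a pMOM prior "non-local" is that its density vanishes at zero, penalizing negligible effects. For the first-order (r = 1) case the density can be written π(β) = (β²/v)·N(β; 0, v), where v is the prior variance; a sketch of just this density (the hierarchical hyper-pMOM additionally places a prior on v, which is not shown here):

```python
import math

def pmom_density(beta, v):
    """First-order pMOM (product moment) non-local prior density:
    pi(beta) = (beta**2 / v) * Normal(beta; 0, v).
    Dividing by v normalizes it, since E[beta**2] = v under N(0, v)."""
    normal = math.exp(-beta * beta / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)
    return (beta * beta / v) * normal
```

Note π(0) = 0 exactly, in contrast to a Gaussian (local) prior, which is maximal at zero.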
9. Regularization by Multiple Dual Frames for Compressed Sensing Magnetic Resonance Imaging With Convergence Analysis [Cited by 1]
Authors: Baoshun Shi, Kexun Liu. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2023, No. 11, pp. 2136-2153 (18 pages).
Plug-and-play priors are popular for solving ill-posed imaging inverse problems. Recent work indicates that the convergence guarantees of imaging algorithms using plug-and-play priors rely on the assumption of bounded denoisers; however, the boundedness of existing plugged Gaussian denoisers has not been proven explicitly. To bridge this gap, we detail a novel, provably bounded denoiser termed BMDual, which combines a trainable denoiser using dual tight frames with the well-known block-matching and 3D filtering (BM3D) denoiser. We incorporate the multiple dual frames utilized by BMDual into a novel regularization model induced by a solver. The proposed regularization model is applied to compressed sensing magnetic resonance imaging (CSMRI). We theoretically establish the bound of the BMDual denoiser and the bounded gradient of the CSMRI data-fidelity function, and further prove that the proposed CSMRI algorithm converges. Experimental results demonstrate that the proposed algorithm has good convergence behavior and show its effectiveness.
Keywords: bounded denoiser; compressed sensing magnetic resonance imaging (CSMRI); dual frames; plug-and-play priors; regularization
10. Extraction of Information from Crowdsourcing: Experimental Test Employing Bayesian, Maximum Likelihood, and Maximum Entropy Methods [Cited by 2]
Authors: M. P. Silverman. Open Journal of Statistics, 2019, No. 5, pp. 571-600 (30 pages).
A crowdsourcing experiment in which viewers (the "crowd") of a British Broadcasting Corporation (BBC) television show submitted estimates of the number of coins in a tumbler was shown in an antecedent paper (Part 1) to follow a log-normal distribution Λ(m, s²). The coin-estimation experiment is an archetype of a broad class of image-analysis and object-counting problems suitable for solution by crowdsourcing. The objective of the current paper (Part 2) is to determine the location and scale parameters (m, s) of Λ(m, s²) by both Bayesian and maximum likelihood (ML) methods and to compare the results. One outcome of the analysis is the resolution, by means of Jeffreys' rule, of questions regarding the appropriate Bayesian prior. It is shown that Bayesian and ML analyses lead to the same expression for the location parameter but different expressions for the scale parameter, which become identical in the limit of infinite sample size. A second outcome concerns use of the sample mean as the measure of the crowd's information in applications where the distribution of responses is not sought or known. In the coin-estimation experiment, the sample mean differed widely from the mean number of coins calculated from Λ(m, s²). This discordance raises critical questions about whether, and under what conditions, the sample mean provides a reliable measure of the crowd's information. This paper resolves that problem by use of the principle of maximum entropy (PME), which yields a set of equations for finding the most probable distribution consistent with the given prior information and only that information. If there is no solution to the PME equations for a specified sample mean and sample variance, then the sample mean is an unreliable statistic, since no measure can be assigned to its uncertainty. Parts 1 and 2 together demonstrate that the information content of crowdsourcing resides in the distribution of responses (very often log-normal in form), which can be obtained empirically or by appropriate modeling.
Keywords: crowdsourcing; Bayesian priors; maximum likelihood; principle of maximum entropy; parameter estimation; log-normal distribution
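For a log-normal sample, the ML location estimate is the mean of the log data, and the distribution mean is exp(m + s²/2), which for large s can differ widely from the raw sample mean of a finite sample, the discordance the abstract describes. A minimal sketch:

```python
import math
import random

def lognormal_ml(samples):
    """ML estimates for Lognormal(m, s2): m = mean(log x),
    s2 = mean((log x - m)**2)."""
    logs = [math.log(x) for x in samples]
    m = sum(logs) / len(logs)
    s2 = sum((v - m) ** 2 for v in logs) / len(logs)
    return m, s2

def lognormal_mean(m, s2):
    """Mean of Lognormal(m, s2): exp(m + s2/2) -- the distribution-based
    crowd estimate, as opposed to the raw sample mean."""
    return math.exp(m + s2 / 2.0)
```

For example, fitting (m, s²) to the submitted estimates and reporting exp(m + s²/2) uses the whole response distribution rather than a single summary statistic.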
11. One-Sample Bayesian Predictive Analyses for an Exponential Non-Homogeneous Poisson Process in Software Reliability [Cited by 1]
Authors: Albert Orwa Akuno, Luke Akong'o Orawo, Ali Salim Islam. Open Journal of Statistics, 2014, No. 5, pp. 402-411 (10 pages).
The Goel-Okumoto software reliability model, also known as the exponential non-homogeneous Poisson process, is one of the earliest software reliability models to be proposed. From the literature, it is evident that most study of the Goel-Okumoto model has concerned parameter estimation by maximum likelihood and model fit. Predictive analysis is widely known to be useful for modifying, debugging, and deciding when to terminate the software development testing process, yet there is a conspicuous absence of literature on both classical and Bayesian predictive analyses of the model. This paper presents results on predictive analyses for the Goel-Okumoto model. Driven by the requirement for highly reliable software in computers embedded in automotive, mechanical, and safety control systems, industrial and quality process control, real-time sensor networks, aircraft, and nuclear reactors, among others, we address four single-sample prediction issues closely associated with the software development process. We adopt Bayesian methods based on non-informative priors to develop explicit solutions to these problems. An example with real data, in the form of times between software failures, illustrates the developed methodologies.
Keywords: non-homogeneous Poisson process; non-informative priors; software reliability models; Bayesian approach
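The exponential NHPP (Goel-Okumoto) model has mean value function m(t) = a(1 - e^(-bt)), and for any NHPP the probability of observing no failure in (t, t + x] is exp(-(m(t+x) - m(t))), the basic quantity that one-sample prediction works with. A minimal sketch:

```python
import math

def go_mean_failures(t, a, b):
    """Goel-Okumoto (exponential NHPP) mean value function m(t) = a*(1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def go_reliability(x, t, a, b):
    """Probability of no failure in (t, t + x]: R(x | t) = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(go_mean_failures(t + x, a, b) - go_mean_failures(t, a, b)))
```

Here a is the expected total number of faults and b the per-fault detection rate; the Bayesian analyses above place non-informative priors on (a, b).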
12. A Bayesian Quantile Regression Analysis of Potential Risk Factors for Violent Crimes in USA [Cited by 1]
Authors: Ming Wang, Lijun Zhang. Open Journal of Statistics, 2012, No. 5, pp. 526-533 (8 pages).
Bayesian quantile regression has recently drawn attention in widespread applications. Yu and Moyeed (2001) proposed using the asymmetric Laplace distribution to provide a likelihood-based mechanism for Bayesian inference in quantile regression models. In this work, the primary objective is to evaluate the performance of Bayesian quantile regression, compared with simple regression and quantile regression, through simulation and through application to a crime dataset from 50 US states, assessing the effect of potential risk factors on the violent crime rate. The paper also explores improper priors and conducts sensitivity analysis on the parameter estimates. The data analysis reveals that the percentage of the population who are single parents always has a significant positive influence on the occurrence of violent crimes, and that Bayesian quantile regression provides a more comprehensive statistical description of this association.
Keywords: Bayesian quantile regression; asymmetric Laplace distribution; improper priors; sensitivity; ordinary least squares
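Yu and Moyeed's construction rests on the check loss ρ_τ(u) = u(τ - 1[u < 0]): the asymmetric Laplace likelihood is proportional to exp(-Σ ρ_τ(yᵢ - xᵢβ)/σ), so maximizing it minimizes the usual quantile-regression objective. A toy location-only sketch (the brute-force grid search is purely illustrative):

```python
def check_loss(u, tau):
    """Quantile-regression check loss rho_tau(u) = u * (tau - 1[u < 0])."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def sample_quantile(y, tau, grid_steps=2001):
    """Minimize sum(rho_tau(y_i - q)) over a grid of q values; the minimizer
    is (approximately) the tau-th sample quantile."""
    lo, hi = min(y), max(y)
    best_q, best_loss = lo, float("inf")
    for i in range(grid_steps):
        q = lo + (hi - lo) * i / (grid_steps - 1)
        loss = sum(check_loss(yi - q, tau) for yi in y)
        if loss < best_loss:
            best_q, best_loss = q, loss
    return best_q
```

In the full regression setting, q is replaced by xᵢβ and the posterior over β is explored by MCMC rather than by a grid.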
13. Smoothness Prior Approach for Spectral Smoothing and Baseline Correction
Authors: Lu Li, He Chen, Siying Chen, Yinchao Zhang, Long Gao. Journal of Beijing Institute of Technology, EI CAS, 2017, No. 1, pp. 121-128 (8 pages).
The smoothness prior approach for spectral smoothing is investigated using Fourier frequency-filter analysis. We show that the regularization parameter in penalized least squares continuously controls the bandwidth of the low-pass filter. Moreover, because the approach automatically and smoothly interpolates missing values, a spectral baseline-correction algorithm based on it is proposed. The algorithm comprises spectral peak detection and baseline estimation. First, spectral peak regions are detected and identified from the second derivatives. Then, a generalized smoothness prior approach combined with the identification information estimates the baseline in the peak regions. Results on both simulated and real spectra show that the method yields accurate baseline-corrected signals.
Keywords: smoothness priors; low-pass filter; second derivative; spectral smoothing; baseline correction
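The penalized least squares problem behind the smoothness prior approach is min_z ||y - z||² + λ||D₂z||², where D₂ is the second-difference operator; the solution z = (I + λD₂ᵀD₂)⁻¹y behaves as a low-pass filter whose bandwidth shrinks as λ grows. A dense pure-Python sketch (real implementations exploit the banded structure of the system):

```python
def whittaker_smooth(y, lam):
    """Solve (I + lam * D2^T D2) z = y by dense Gaussian elimination,
    where D2 is the (n-2) x n second-difference matrix."""
    n = len(y)
    # Build A = I + lam * D2^T D2, one second-difference row at a time.
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 2):
        row = [0.0] * n
        row[k], row[k + 1], row[k + 2] = 1.0, -2.0, 1.0
        for i in range(n):
            for j in range(n):
                A[i][j] += lam * row[i] * row[j]
    b = list(map(float, y))
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (b[r] - sum(A[r][c] * z[c] for c in range(r + 1, n))) / A[r][r]
    return z
```

With λ = 0 the smoother returns the data unchanged; as λ grows, the output approaches the least-squares straight line, the limit of maximal smoothing under a second-difference penalty.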
14. Two-Sample Bayesian Predictive Analyses for an Exponential Non-Homogeneous Poisson Process in Software Reliability
Authors: Albert Orwa Akuno, Luke Akong'o Orawo, Ali Salim Islam. Open Journal of Statistics, 2014, No. 9, pp. 742-750 (9 pages).
The Goel-Okumoto software reliability model is one of the earliest attempts to use a non-homogeneous Poisson process to model failure times observed during a software test interval. It is known as the exponential NHPP model because it describes an exponential software failure curve. Parameter estimation, model fit, and one-sample predictive analyses have been conducted on the Goel-Okumoto model, but predictive analyses based on two samples have not. In two-sample prediction, the parameters and characteristics of the first sample are used to analyze and make predictions for the second sample, which saves time and resources during the software development process. This paper presents results on predictive analyses for the Goel-Okumoto model based on two samples, addressing three two-sample prediction issues closely associated with the software development testing process. Bayesian methods based on non-informative priors are adopted to develop solutions to these issues, and the developed methodologies are illustrated on two sets of software failure data simulated from the Goel-Okumoto model.
Keywords: non-homogeneous Poisson process; software reliability models; non-informative priors; Bayesian approach
15. Bayesian Analysis of the Behrens-Fisher Problem under a Gamma Prior
Authors: Nengak Emmanuel Goltong, Sani Ibrahim Doguwa. Open Journal of Statistics, 2018, No. 6, pp. 902-914 (13 pages).
Yin [1] developed a new Bayesian measure of evidence for testing a point null hypothesis that agrees with the frequentist p-value, thereby resolving Lindley's paradox. Yin and Li [2] extended the methodology of Yin [1] to the Behrens-Fisher problem by assigning Jeffreys' independent prior to the nuisance parameters. In this paper, we show, both analytically and through simulation studies, that the methodology of Yin [1] simultaneously solves the Behrens-Fisher problem and Lindley's paradox when a Gamma prior is assigned to the nuisance parameters.
Keywords: Behrens-Fisher problem; Lindley's paradox; Metropolis-Hastings algorithm; informative priors
16. Some Likelihood Based Properties in Large Samples: Utility and Risk Aversion, Second Order Prior Selection and Posterior Density Stability
Authors: Michael Brimacombe. Open Journal of Statistics, 2016, No. 6, pp. 1037-1049 (13 pages).
The likelihood function plays a central role in statistical analysis in relation to information, from both frequentist and Bayesian perspectives. Several new large-sample properties of the likelihood in relation to information are developed here. The Arrow-Pratt absolute risk aversion measure is shown to be related to the Cramér-Rao information bound. The derivative of the log-likelihood function is seen to provide a measure of information-related stability for the Bayesian posterior density. As well, information-similar prior densities can be defined, reflecting the central role of likelihood in the Bayesian learning paradigm.
Keywords: Arrow-Pratt theorem; expected utility; information-similar priors; likelihood function; prior stability; score function; risk aversion
17. Foodweb Trophic Level and Diet Inference Using an Extended Bayesian Stable Isotope Mixing Model
Authors: Erik Barry Erhardt, Rachel Marie Wilson. Open Journal of Ecology, 2022, No. 6, pp. 333-359 (27 pages).
You are what you eat (diet) and where you eat (trophic level) in the food web. The relative abundance of pairs of stable isotopes of the organic elements carbon (e.g., the isotope ratio of ¹³C vs. ¹²C), nitrogen, and sulfur, among others, in the tissues of a consumer reflects a weighted average of the isotope ratios in the sources it consumes, after some corrections for the processes of digestion and assimilation. We extended a Bayesian mixing model to infer the trophic positions of consumer organisms in a food web in addition to the degree to which distinct resource pools (diet sources) support consumers. The novel features of this work include: 1) trophic-level estimation (vertical position in the food web), and 2) a Bayesian exposition of a biologically realistic model [1] including stable isotope ratios and concentrations of carbon, nitrogen, and sulfur, isotopic fractionation, elemental assimilation efficiencies, and extensive use of prior information. We discuss issues of parameter identifiability in this most complex and realistic model. We apply our model to simulated data and to bottlenose dolphins (Tursiops truncatus) feeding on several numerically abundant fish species, which in turn feed on other fish and on primary-producing plants and algae present in St. George Sound, FL, USA. Finally, we discuss extensions from other work that apply to this model and three important general ecological applications. Online supplementary materials include data, OpenBUGS scripts, and simulation details.
Keywords: stable isotope; animal ecology; trophic level; animal diet; informative priors
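In its simplest two-source, one-isotope form (ignoring the fractionation, concentration, and efficiency corrections the paper models), the mixing relation is δ_mix = p·δ₁ + (1 - p)·δ₂, which solves directly for the diet proportion p:

```python
def two_source_proportions(d_mix, d1, d2):
    """Solve the basic linear mixing model d_mix = p*d1 + (1 - p)*d2 for the
    proportion p of source 1 in the consumer's diet (corrections ignored)."""
    p = (d_mix - d2) / (d1 - d2)
    return p, 1.0 - p
```

With more sources or isotopes the system becomes under- or over-determined, which is exactly why a Bayesian mixing model with priors is used instead of this algebraic solve.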
18. Transformation Models for Survival Data Analysis with Applications
Authors: Yang Liu, Qiusheng Chen, Xufeng Niu. Open Journal of Statistics, 2016, No. 1, pp. 133-155 (23 pages).
When the event of interest never occurs for a proportion of subjects during the study period, survival models with a cure fraction are more appropriate for analyzing this type of data. Considering the non-linear relationship between the response variable and covariates, we propose a class of generalized transformation models motivated by the transformed proportional time cure model of Zeng et al. [1], in which fractional polynomials are used instead of a simple linear combination of the covariates. Statistical properties of the proposed models are investigated, including identifiability of the parameters, asymptotic consistency, and asymptotic normality of the estimated regression coefficients. A simulation study examines the performance of the power-selection procedure. The generalized transformation cure rate models are applied to the First National Health and Nutrition Examination Survey Epidemiologic Follow-up Study (NHANES1) to examine the relationship between patient survival time and several risk factors.
Keywords: link functions; mixture cure rate models; noninformative improper priors; proportional hazards models; proportional odds models
19. Information Theoretic Distinguishers for Timing Attacks with Partial Profiles: Solving the Empty Bin Issue
Authors: Eloi De Chérisey, Sylvain Guilley, Olivier Rioul, Darshana Jayasinghe. Journal of Information Security, 2021, No. 1, pp. 1-33 (33 pages).
In any side-channel attack, it is desirable to exploit all the available leakage data to compute the distinguisher's values. The profiling phase is essential for obtaining an accurate leakage model, yet it may not be exhaustive. As a result, information-theoretic distinguishers may encounter previously unseen data, a phenomenon yielding empty bins. A strict application of the maximum likelihood method then yields a distinguisher that is not even sound; ignoring empty bins re-establishes soundness but seriously limits performance in terms of success rate. The purpose of this paper is to remedy this situation. We propose six different techniques to improve the performance of information-theoretic distinguishers and study them thoroughly by applying them to timing attacks, with both synthetic and real leakages. We compare them in terms of success rate and show that their performance depends on the amount of profiling and can be explained by a bias-variance analysis. The result of our work is that there exist use cases, especially when measurements are noisy, where our novel information-theoretic distinguishers (typically the soft-drop distinguisher) perform best compared to known side-channel distinguishers, despite the empty-bin situation.
Keywords: timing attacks; profiling attacks; Dirichlet priors; success rates
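A standard remedy for empty bins, consistent with the "Dirichlet priors" keyword though the paper's six techniques differ in detail, is the Dirichlet posterior-mean (additive) smoothing of the profiled histogram, which keeps log-likelihood distinguishers finite on unseen data:

```python
def dirichlet_smoothed_probs(counts, alpha=1.0):
    """Posterior-mean estimate of bin probabilities under a symmetric
    Dirichlet(alpha) prior: p_i = (c_i + alpha) / (N + alpha * K).
    Empty bins receive a small positive probability instead of zero."""
    n, k = sum(counts), len(counts)
    return [(c + alpha) / (n + alpha * k) for c in counts]
```

With alpha = 1 this is Laplace smoothing; smaller alpha trades bias for variance, mirroring the bias-variance analysis the abstract mentions.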
20. Facial optical flow estimation via neural non-rigid registration
Authors: Zhuang Peng, Boyi Jiang, Haofei Xu, Wanquan Feng, Juyong Zhang. Computational Visual Media, SCIE EI CSCD, 2023, No. 1, pp. 109-122 (14 pages).
Optical flow estimation in human facial video, which provides 2D correspondences between adjacent frames, is a fundamental pre-processing step for many applications, such as facial expression capture and recognition. It is quite challenging, however, as human facial images contain large areas of similar texture, rich expressions, and large rotations; these characteristics also explain the scarcity of large, annotated real-world datasets. We propose a robust and accurate method to learn facial optical flow in a self-supervised manner. Specifically, we utilize various shape priors, including face depth, landmarks, and parsing, to guide the self-supervised learning task via a differentiable non-rigid registration framework. Extensive experiments demonstrate that our method achieves remarkable improvements for facial optical flow estimation in the presence of significant expressions and large rotations.
Keywords: human face; optical flow; self-supervised; non-rigid registration; neural networks; facial priors