Journal Articles
88 articles found
1. Why Can Multiple Imputations and How (MICE) Algorithm Work?
Authors: Abdullah Z. Alruhaymi, Charles J. Kim. Open Journal of Statistics, 2021, Issue 5, pp. 759-777 (19 pages).
Multiple imputation compensates for missing data and produces multiple completed datasets from regression models; it is regarded as the solution to the old problem of univariate imputation. Univariate imputation fills in data only from the specific column in which a cell is missing, whereas multivariate imputation works simultaneously with all variables in all columns, whether missing or observed, and has emerged as a principal method for solving missing-data problems. Incomplete datasets analyzed without Multiple Imputation by Chained Equations (MICE) risk being misdiagnosed; the results obtained may be invalid and should not be counted on to yield reasonable conclusions. This article highlights why multiple imputation works and how MICE works, with a particular focus on a cyber-security dataset. Removing missing data from any dataset and replacing it is imperative for analyzing the data and building prediction models. A good imputation technique should therefore recover the missingness, which involves extracting the good features. However, the widely used univariate imputation method does not impute missingness reasonably if the values are too large and may thus lead to bias. We therefore aim to propose an alternative imputation method that is efficient and removes potential bias after handling the missingness.
Keywords: Multiple imputations; Imputations; Algorithms; MICE algorithm
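To make the chained-equations idea above concrete, here is a minimal, hedged sketch using scikit-learn's IterativeImputer, which follows a MICE-style round-robin of per-column regression models; the toy array and parameter choices are illustrative assumptions and are not taken from the paper's cyber-security dataset.

```python
# Illustrative MICE-style imputation sketch (assumed toy data).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (activates IterativeImputer)
from sklearn.impute import IterativeImputer

X = np.array([[7.0, 2.0, 3.0],
              [4.0, np.nan, 6.0],
              [10.0, 5.0, 9.0],
              [8.0, 8.0, np.nan]])

# Each column with missing cells is regressed on the other columns in a round-robin,
# which is the chained-equations idea behind MICE.
imputer = IterativeImputer(max_iter=10, sample_posterior=True, random_state=0)
X_completed = imputer.fit_transform(X)
print(np.round(X_completed, 2))
```

Re-running the imputer with different random_state values (keeping sample_posterior=True) yields several completed datasets, which is the "multiple" part of multiple imputation.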
2. Missing Data Imputations for Upper Air Temperature at 24 Standard Pressure Levels over Pakistan Collected from Aqua Satellite (Cited by: 4)
Authors: Muhammad Usman Saleem, Sajid Rashid Ahmed. Journal of Data Analysis and Information Processing, 2016, Issue 3, pp. 132-146 (15 pages).
This research was an effort to select the best imputation method for missing upper-air temperature data at 24 standard pressure levels. We implemented four imputation techniques: inverse distance weighting, bilinear, natural, and nearest-neighbour interpolation. The performance indicators adopted in this research were the root mean square error (RMSE), absolute mean error (AME), correlation coefficient, and coefficient of determination (R²). We randomly withheld 30% of the 324 samples and predicted them from the remaining 70%. Although all four interpolation methods performed well (RMSE and AME below 1), the bilinear method was the most accurate, with the smallest errors. RMSE for the bilinear method remained below 0.01 at all pressure levels except 1000 hPa, where it was 0.6, and AME from bilinear imputation was below 0.1 at all pressure levels. A very strong correlation (>0.99) was found between actual and predicted air temperature with this method, and the high coefficient of determination (0.99) indicates the best fit to the surface. Similar results were obtained with natural interpolation, but after inspecting monthly scatter plots, its imputations appeared somewhat less accurate than the bilinear method in certain months.
Keywords: Missing data imputations; Spatial interpolation; AQUA satellite; Upper-level air temperature; AIRX3STML
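A hedged sketch of the hold-out evaluation described above: 30% of the points are withheld, filled by spatial interpolation, and scored with RMSE, MAE, and correlation. The synthetic temperature field is an assumption, and scipy's "nearest", "linear", and "cubic" griddata methods merely stand in for the paper's nearest-neighbour, bilinear, and natural-neighbour schemes.

```python
# Hold-out evaluation of interpolation-based imputation (synthetic field; stand-in methods).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
lon, lat = rng.uniform(60, 80, 324), rng.uniform(23, 37, 324)   # assumed coordinates over Pakistan
temp = 30 - 0.6 * (lat - 23) + rng.normal(0, 0.3, 324)          # toy temperature field

mask = rng.random(324) < 0.30                                   # withhold 30% as "missing"
known = np.column_stack([lon[~mask], lat[~mask]])
missing = np.column_stack([lon[mask], lat[mask]])

for method in ("nearest", "linear", "cubic"):
    pred = griddata(known, temp[~mask], missing, method=method)
    ok = ~np.isnan(pred)                                        # points outside the convex hull stay NaN
    rmse = np.sqrt(np.mean((pred[ok] - temp[mask][ok]) ** 2))
    mae = np.mean(np.abs(pred[ok] - temp[mask][ok]))
    r = np.corrcoef(pred[ok], temp[mask][ok])[0, 1]
    print(f"{method:8s} RMSE={rmse:.3f} MAE={mae:.3f} r={r:.3f}")
```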
3. Determining Sufficient Number of Imputations Using Variance of Imputation Variances: Data from 2012 NAMCS Physician Workflow Mail Survey
Authors: Qiyuan Pan, Rong Wei, Iris Shimizu, Eric Jamoom. Applied Mathematics, 2014, Issue 21, pp. 3421-3430 (10 pages).
How many imputations are sufficient in multiple imputation? The answers given by different researchers vary from as few as 2 - 3 to as many as hundreds, and perhaps no single number of imputations fits all situations. In this study, η, the minimally sufficient number of imputations, was determined from the relationship between m, the number of imputations, and ω, the standard error of the imputation variances, using the 2012 National Ambulatory Medical Care Survey (NAMCS) Physician Workflow mail survey. Five variables with various value ranges, variances, and missing-data percentages were tested. For all variables tested, ω decreased as m increased. The value of m above which the cost of further increasing m would outweigh the benefit of reducing ω was taken as η. This method has the potential to be used by anyone to determine the η that fits his or her own data situation.
Keywords: Multiple imputation; Sufficient number of imputations; Hot-deck imputation
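One plausible reading of the ω-versus-m relationship above can be sketched as follows: generate m single imputations, record each completed dataset's variance, and take the standard error of those m variances. The hot-deck-style imputer, the toy data, and this exact definition of ω are assumptions made only for illustration.

```python
# Illustrative estimate of omega, read here as the standard error of the m imputation variances.
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(50, 10, 1000)
miss = rng.random(1000) < 0.20                       # assumed 20% missing completely at random

def one_imputation():
    # Hot-deck style draw: fill each missing cell with a randomly chosen observed value.
    return np.where(miss, rng.choice(y[~miss], size=y.size), y)

for m in (5, 20, 50, 100):
    variances = np.array([one_imputation().var(ddof=1) for _ in range(m)])
    omega = variances.std(ddof=1) / np.sqrt(m)       # standard error of the m variances
    print(f"m={m:4d}  omega={omega:.4f}")
```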
4. Comparative Study of Four Methods in Missing Value Imputations under Missing Completely at Random Mechanism (Cited by: 3)
Authors: Michikazu Nakai, Ding-Geng Chen, Kunihiro Nishimura, Yoshihiro Miyamoto. Open Journal of Statistics, 2014, Issue 1, pp. 27-37 (11 pages).
In analyzing data from clinical trials and longitudinal studies, missing values are always a fundamental challenge, since missing data can introduce bias and lead to erroneous statistical inferences. To deal with this challenge, several imputation methods have been developed in the literature, the most commonly used being the complete case method, mean imputation, last observation carried forward (LOCF), and multiple imputation (MI). In this paper, we conduct a simulation study to investigate the efficiency of these four typical imputation methods in a longitudinal data setting under missing completely at random (MCAR). We categorize missingness into three cases, from a lower percentage of 5% to higher percentages of 30% and 50%. The simulation shows that the LOCF method has more bias than the other three methods in most situations, while MI has the least bias and the best coverage probability. We therefore conclude that MI is the most effective imputation method in our MCAR simulation study.
Keywords: Missing data; Imputation; MCAR; Complete case; LOCF
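Three of the four compared methods are one-liners in pandas; a small assumed longitudinal table makes the contrast visible (the MI method would follow the chained-equations sketch under entry 1).

```python
# Minimal sketches of complete-case analysis, mean imputation, and LOCF (toy data, assumed).
import numpy as np
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 1, 2, 2, 2],
                   "visit": [1, 2, 3, 1, 2, 3],
                   "y": [5.0, np.nan, 6.5, 7.0, 7.5, np.nan]})

complete_case = df.dropna(subset=["y"])                        # complete case analysis
mean_imputed = df.assign(y=df["y"].fillna(df["y"].mean()))     # mean imputation
locf = df.assign(y=df.groupby("id")["y"].ffill())              # last observation carried forward, per subject

print(complete_case, mean_imputed, locf, sep="\n\n")
```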
5. Missing Value Imputation for Radar-Derived Time-Series Tracks of Aerial Targets Based on Improved Self-Attention-Based Network
Authors: Zihao Song, Yan Zhou, Wei Cheng, Futai Liang, Chenhao Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3349-3376 (28 pages).
The frequent missing values in radar-derived time-series tracks of aerial targets (RTT-AT) lead to significant challenges in subsequent data-driven tasks. However, the majority of imputation research focuses on random missing (RM), which differs significantly from the common missing patterns of RTT-AT, and methods designed for RM may degrade or fail when applied to RTT-AT imputation. Conventional autoregressive deep learning methods are also prone to error accumulation and long-term dependency loss. In this paper, a non-autoregressive imputation model is proposed that addresses missing value imputation for two common missing patterns in RTT-AT. The model consists of two probabilistic sparse diagonal masking self-attention (PSDMSA) units and a weight fusion unit. It learns missing values by combining the representations output by the two units, aiming to minimize the difference between the missing values and their actual values. The PSDMSA units effectively capture temporal dependencies and attribute correlations between time steps, improving imputation quality, while the weight fusion unit automatically updates the weights of the output representations from the two units to obtain a more accurate final representation. The experimental results indicate that, across varying missing rates in the two missing patterns, the model consistently outperforms other methods in imputation performance and exhibits a low frequency of deviations in estimates for specific missing entries. Compared to the state-of-the-art autoregressive deep learning imputation model Bidirectional Recurrent Imputation for Time Series (BRITS), the proposed model reduces mean absolute error (MAE) by 31%-50%. Additionally, the model trains 4 to 8 times faster than both BRITS and a standard Transformer model on the same dataset. Finally, ablation experiments demonstrate that the PSDMSA units, the weight fusion unit, the cascade network design, and the imputation loss all enhance imputation performance, confirming the efficacy of the design.
Keywords: Missing value imputation; Time-series tracks; Probabilistic sparsity diagonal masking self-attention; Weight fusion
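The MAE comparison reported above scores an imputer only on the entries that were actually missing. The sketch below illustrates that masked-MAE evaluation with a toy multivariate track and a simple forward/backward-fill baseline; it is not the paper's PSDMSA network.

```python
# Evaluating an imputation only at the artificially removed entries (masked MAE).
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(0, 1, (100, 4)), axis=0)     # toy multivariate track
mask = rng.random(truth.shape) < 0.3                      # entries treated as missing

observed = pd.DataFrame(np.where(mask, np.nan, truth))
imputed = observed.ffill().bfill().to_numpy()             # simple baseline imputer, stands in for any model

mae = np.abs(imputed[mask] - truth[mask]).mean()          # MAE scored only on the missing entries
print(f"masked MAE = {mae:.3f}")
```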
6. A Study of EM Algorithm as an Imputation Method: A Model-Based Simulation Study with Application to a Synthetic Compositional Data
Authors: Yisa Adeniyi Abolade, Yichuan Zhao. Open Journal of Modelling and Simulation, 2024, Issue 2, pp. 33-42 (10 pages).
Compositional data, such as relative information, is a crucial aspect of machine learning and related fields. It is typically recorded as closed data, i.e., data that sum to a constant such as 100%. The linear regression model is the most widely used statistical technique for identifying relationships between underlying random variables of interest, and maximum likelihood estimation (MLE) is the method of choice for estimating its parameters, which are useful for tasks such as prediction and partial-effects analysis of the independent variables. However, data quality is a significant challenge in machine learning, especially when missing data are present, and recovering missing observations can be costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that depend on unobserved variables or data: using the current parameter estimates, the expectation (E) step constructs the expected log-likelihood, and the maximization (M) step finds the parameters that maximize it. This study examined how well the EM algorithm performs on a synthetic compositional dataset with missing observations, using both ordinary least squares and robust least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-nearest neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
Keywords: Compositional data; Linear regression model; Least squares method; Robust least squares method; Synthetic data; Aitchison distance; Maximum likelihood estimation; Expectation-maximization algorithm; k-nearest neighbor; Mean imputation
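The Aitchison distance used above to compare imputed and true compositions is the Euclidean distance between centred log-ratio (clr) transforms; the sketch below uses toy compositions chosen only for illustration.

```python
# Aitchison distance between a true composition and an imputed one (toy vectors, assumed).
import numpy as np

def clr(x):
    """Centered log-ratio transform of a strictly positive composition."""
    logx = np.log(x)
    return logx - logx.mean()

def aitchison_distance(x, y):
    return np.linalg.norm(clr(x) - clr(y))

true_comp = np.array([0.50, 0.30, 0.20])
imputed_comp = np.array([0.48, 0.33, 0.19])
print(f"Aitchison distance = {aitchison_distance(true_comp, imputed_comp):.4f}")
```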
7. Genetic dissection and genomic prediction for pork cuts and carcass morphology traits in pig
Authors: Lei Xie, Jiangtao Qin, Lin Rao, Dengshuai Cui, Xi Tang, Liqing Chen, Shijun Xiao, Zhiyan Zhang, Lusheng Huang. Journal of Animal Science and Biotechnology (SCIE, CAS, CSCD), 2023, Issue 6, pp. 2345-2362 (18 pages).
Background: As pre-cut and pre-packaged chilled meat becomes increasingly popular, integrating the carcass-cutting process into the pig industry chain has become a trend. Identifying quantitative trait loci (QTLs) of pork cuts would facilitate the selection of pigs with a higher overall value. However, previous studies solely focused on evaluating the phenotypic and genetic parameters of pork cuts, neglecting the investigation of QTLs influencing these traits. This study involved 17 pork cuts and 12 morphology traits from 2,012 pigs across four populations genotyped using CC1 PorcineSNP50 BeadChips. Our aim was to identify QTLs and evaluate the accuracy of genomic estimated breeding values (GEBVs) for pork cuts. Results: We identified 14 QTLs and 112 QTLs for the 17 pork cuts by GWAS using haplotype and imputed genotypes, respectively. Specifically, we found that HMGA1, VRTN and BMP2 were associated with body length and weight. Subsequent analysis revealed that HMGA1 primarily affects the size of the fore leg bones, VRTN primarily affects the number of vertebrae, and BMP2 primarily affects the length of the vertebrae and the size of the hind leg bones. Prediction accuracy was defined as the correlation between the adjusted phenotype and the GEBVs in the validation population, divided by the square root of the trait's heritability. The prediction accuracy of GEBVs for pork cuts varied from 0.342 to 0.693; notably, ribs, boneless picnic shoulder, tenderloin, hind leg bones, and scapula bones exhibited prediction accuracies exceeding 0.600. Employing better models, increasing marker density through genotype imputation, and pre-selecting markers significantly improved the prediction accuracy of GEBVs. Conclusions: We performed the first study to dissect the genetic mechanism of pork cuts and identified a large number of significant QTLs and potential candidate genes. These findings carry significant implications for the breeding of pork cuts through marker-assisted and genomic selection. Additionally, we have constructed the first reference populations for genomic selection of pork cuts in pigs.
Keywords: Carcass morphology traits; Genomic selection; Genotype imputation; GWAS; Pork cuts
8. Genome-wide association study for numbers of vertebrae in Dezhou donkey population reveals new candidate genes
Authors: SUN Yan, LI Yu-hua, ZHAO Chang-heng, TENG Jun, WANG Yong-hui, WANG Tian-qi, SHI Xiaoyuan, LIU Zi-wen, LI Hai-jing, WANG Ji-jing, WANG Wen-wen, NING Chao, WANG Chang-fa, ZHANG Qin. Journal of Integrative Agriculture (SCIE, CAS, CSCD), 2023, Issue 10, pp. 3159-3169 (11 pages).
The number of vertebrae is an important economic trait associated with body size and meat productivity in animals, but the genetic basis of vertebrae number in donkeys remains poorly understood. The aim of this study was to identify candidate genes affecting the number of thoracic vertebrae (TVn) and the number of lumbar vertebrae (LVn) in the Dezhou donkey. A genome-wide association study was conducted using whole-genome sequence data imputed from low-coverage genome sequencing. For TVn, we identified 38 genome-wide significant and 64 suggestive SNPs, which relate to 7 genes (NLGN1, DCC, SLC26A7, TOX, WNT7A, LOC123286078, and LOC123280142). For LVn, we identified 9 genome-wide significant and 38 suggestive SNPs, which relate to 8 genes (GABBR2, FBXO4, LOC123277146, LOC123277359, BMP7, B3GAT1, EML2, and LRP5). These genes are involved in the Wnt and TGF-β signaling pathways, may play an important role in embryonic development or bone formation, and are good candidate genes for TVn and LVn.
Keywords: Number of vertebrae; GWAS; Genotype imputation; Dezhou donkey
9. Evaluating the potential of (epi)genotype-by-low pass nanopore sequencing in dairy cattle: a study on direct genomic value and methylation analysis
Authors: Oscar Gonzalez-Recio, Adrian Lopez-Catalina, Ramon Peiro-Pastor, Alicia Nieto-Valle, Monica Castro, Almudena Fernandez. Journal of Animal Science and Biotechnology (SCIE, CAS, CSCD), 2023, Issue 6, pp. 2276-2289 (14 pages).
Background: Genotyping-by-sequencing has been proposed as an alternative to SNP genotyping arrays in genomic selection to obtain a high density of markers along the genome. It requires a low sequencing depth to be cost effective, which may increase the error in genotype assignment. Third-generation nanopore sequencing offers low-cost sequencing and the possibility of detecting genome methylation, which adds value to genotyping-by-sequencing. The aim of this study was to evaluate the performance of genotyping by low-pass nanopore sequencing for estimating the direct genomic value in dairy cattle, together with the possibility of obtaining methylation marks simultaneously. Results: The latest nanopore chemistry (LSK114 and Q20) achieved a modal base-calling accuracy of 99.55%, whereas the previous kit (LSK109) achieved slightly lower accuracy (99.1%). The accuracy of direct genomic values from genotyping by low-pass sequencing ranged between 0.79 and 0.99, depending on the trait (milk, fat or protein yield), with a sequencing depth as low as 2× using the latest chemistry (LSK114). Lower sequencing depth led to biased estimates, yet with high rank correlations. The LSK109 and Q20 chemistries achieved lower accuracies (0.57-0.93). More than one million reliably methylated sites were obtained, even at low sequencing depth, located mainly in distal intergenic (87%) and promoter (5%) regions. Conclusions: This study showed that the latest nanopore technology is useful in a low-pass sequencing framework for estimating direct genomic values with high reliability. It may provide advantages in populations with no available SNP chip, or when a large density of markers with a wide range of allele frequencies is needed. In addition, low-pass sequencing provided the methylation status of more than 1 million nucleotides at ≥10×, which is an added value for epigenetic studies.
Keywords: Genomic selection; Genomic values; Low pass sequencing; Low sequencing imputation; Polygenic risk score
10. Superiority of Bayesian Imputation to Mice in Logit Panel Data Models
Authors: Peter Otieno Opeyo, Weihu Cheng, Zhao Xu. Open Journal of Statistics, 2023, Issue 3, pp. 316-358 (43 pages).
Non-responses leading to missing data are common in most studies and cause inefficient and biased statistical inferences if ignored. When faced with missing data, many studies choose to employ a complete case analysis approach to estimate the parameters of the model. This, however, compromises the unbiasedness and efficiency of the resulting estimates. Several classical and model-based techniques for imputing missing values have been described in the literature. The Bayesian approach to missingness is deemed superior to the other techniques through its natural fit to missing-data settings, where the missing values are treated as unobserved random variables with a distribution that depends on the observed data. This paper examines the superiority of Bayesian imputation over Multiple Imputation with Chained Equations (MICE) when estimating logistic panel data models with single fixed effects, and validates the conditional maximum likelihood estimator for the nonlinear binary choice logit panel model in the presence of missing observations. A Monte Carlo simulation was designed to determine the magnitude of bias and root mean square error (RMSE) arising from MICE and full Bayesian imputation. The simulation results show that the conditional maximum likelihood (ML) logit estimator presented in this paper is less biased and more efficient when Bayesian imputation is performed to handle the non-responses.
Keywords: Panel data; Imputation; Monte Carlo; Bias; Conditional maximum likelihood
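The bias and RMSE criteria in the Monte Carlo study above can be sketched with a stripped-down experiment: simulate a one-parameter logit model, drop 20% of rows, estimate the slope on the complete cases, and summarize over replicates. This design is an assumption for illustration and does not reproduce the paper's Bayesian or MICE machinery.

```python
# Monte Carlo bias and RMSE of a logit slope estimator under missingness (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
beta_true, n, reps = 1.0, 500, 200
estimates = []

for _ in range(reps):
    x = rng.normal(size=n)
    p = 1.0 / (1.0 + np.exp(-beta_true * x))
    y = rng.binomial(1, p)
    keep = rng.random(n) > 0.2                # 20% of rows lost; complete-case analysis as the baseline
    xk, yk = x[keep], y[keep]
    b = 0.0
    for _ in range(25):                       # Newton-Raphson for the one-parameter logit MLE
        mu = 1.0 / (1.0 + np.exp(-b * xk))
        grad = np.sum((yk - mu) * xk)
        hess = -np.sum(mu * (1 - mu) * xk ** 2)
        b -= grad / hess
    estimates.append(b)

est = np.array(estimates)
print(f"bias = {est.mean() - beta_true:+.4f}, RMSE = {np.sqrt(np.mean((est - beta_true) ** 2)):.4f}")
```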
11. Application of the Hedonic Price Method in Real Estate Price Indices (Cited by: 6)
Authors: Sun Xianhua, Liu Zhenhui, Zhang Chenxi. Modern Finance and Economics (Journal of Tianjin University of Finance and Economics) (CSSCI, Peking University Core), 2008, Issue 5, pp. 61-65.
The hedonic price method decomposes the quality-characteristic factors in real estate price changes to reveal the implicit price of each characteristic, and removes the influence of quality changes item by item from the total price change so that the index reflects pure price movement only. This paper uses a double-imputation procedure to estimate missing prices and to remove the influence of outliers, which solves the comparability problem and improves the stability of the hedonic model.
Keywords: Real estate price index; Quality adjustment; Hedonic price method; Double imputation
12. Establishment and verification of a surgical prognostic model for cervical spinal cord injury without radiological abnormality (Cited by: 4)
Authors: Jie Wang, Shuai Guo, Xuan Cai, Jia-Wei Xu, Hao-Peng Li. Neural Regeneration Research (SCIE, CAS, CSCD), 2019, Issue 4, pp. 713-720 (8 pages).
Some studies have suggested that early surgical treatment can effectively improve the prognosis of cervical spinal cord injury without radiological abnormality, but no research has focused on developing a prognostic model for this condition. This retrospective analysis included 43 patients with cervical spinal cord injury without radiological abnormality. Seven potential factors were assessed: age, sex, strength of the external force causing the damage, duration of disease, degree of cervical spinal stenosis, Japanese Orthopaedic Association score, and physiological cervical curvature. A model was established using multiple binary logistic regression analysis and evaluated by concordant profiling and the area under the receiver operating characteristic curve; bootstrapping was used for internal validation. The prognostic model was: logit(P) = -25.4545 + 21.2576 VALUE + 1.2160 SCORE - 3.4224 TIME, where VALUE is the Pavlov ratio indicating the extent of cervical spinal stenosis, SCORE is the postoperative Japanese Orthopaedic Association score (0-17), and TIME is the disease duration (from injury to operation). The area under the receiver operating characteristic curve for all patients was 0.8941 (95% confidence interval, 0.7930-0.9952). Three factors in the predictive model were associated with worse patient outcomes: a great extent of cervical stenosis, a poor preoperative neurological status, and a long disease duration. The prognosis was considered good when logit(P) ≥ -2.5105. Overall, the model displays a certain clinical value. This study was approved by the Biomedical Ethics Committee of the Second Affiliated Hospital of Xi'an Jiaotong University, China (approval number: 2018063) on May 8, 2018.
Keywords: Nerve regeneration; Surgical prognostic model; Cervical spinal cord injury; Retrospective study; Multiple binary logistic regression analysis; Bootstrapping; Internal validation; Multiple imputations; Cervical spinal stenosis; Duration of disease; Pavlov ratio; Neural regeneration
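The prognostic equation and cut-off quoted in the abstract can be evaluated directly; the sketch below simply plugs in the published coefficients, while the example patient values (and the units assumed for TIME) are hypothetical placeholders.

```python
# Evaluating the published prognostic equation (example inputs are hypothetical).
import math

def prognosis(pavlov_ratio: float, joa_score: float, duration: float) -> tuple[float, str]:
    """logit(P) = -25.4545 + 21.2576*VALUE + 1.2160*SCORE - 3.4224*TIME; good if logit(P) >= -2.5105."""
    logit_p = -25.4545 + 21.2576 * pavlov_ratio + 1.2160 * joa_score - 3.4224 * duration
    p = 1.0 / (1.0 + math.exp(-logit_p))
    return p, ("good" if logit_p >= -2.5105 else "poor")

print(prognosis(pavlov_ratio=0.85, joa_score=12, duration=1.0))
```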
13. Comparative Variance and Multiple Imputation Used for Missing Values in Land Price DataSet (Cited by: 1)
Authors: Longqing Zhang, Xinwei Zhang, Liping Bai, Yanghong Zhang, Feng Sun, Changcheng Chen. Computers, Materials & Continua (SCIE, EI), 2019, Issue 9, pp. 1175-1187 (13 pages).
Based on a two-dimensional relation table, this paper studies missing values in a sample of land price data from Shunde District, Foshan City. GeoDa software was used to eliminate insignificant factors through stepwise regression analysis; NORM software was used to construct the multiple imputation models; and the EM algorithm and the data augmentation algorithm were applied to fit multiple linear regression equations, producing five different filled datasets. Statistical analysis was performed on each imputed dataset to calculate its mean and variance, weights were determined from the differences among them, and the results were then combined to obtain the final imputation of the missing values. Three missing-data cases were examined: the PRICE variable missing at a 5% deletion rate, the PRICE variable missing at a 10% deletion rate, and both the PRICE and CBD variables missing. In these cases, the new method produced estimates closer to the true values than the traditional multiple imputation methods at ratios of 75% to 25%, 62.5% to 37.5%, and 100% to 0%, respectively. The new method is therefore clearly better than the traditional multiple imputation methods, and the missing values estimated by it have reference value.
Keywords: Imputation method; Multiple imputations; Probabilistic model
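The abstract describes weighting the five imputed datasets according to differences in their means and variances, but the exact scheme is not spelled out there; the sketch below uses simple inverse-variance weights as one plausible, clearly assumed reading, with toy numbers.

```python
# Combining five imputed data sets with inverse-variance weights (assumed reading; toy numbers).
import numpy as np

# Five imputed values for the same missing PRICE cell, one per imputed data set.
imputed_values = np.array([1023.0, 998.0, 1010.0, 1041.0, 1005.0])
# Variance of each imputed data set (assumed to come from the per-dataset statistical analysis step).
dataset_variance = np.array([12.0, 9.5, 10.2, 15.8, 9.9])

weights = (1.0 / dataset_variance) / np.sum(1.0 / dataset_variance)
combined = np.sum(weights * imputed_values)
print(f"weights = {np.round(weights, 3)}, combined imputation = {combined:.1f}")
```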
14. AQUA Satellite Data and Imputation of Geopotential Height: A Case Study for Pakistan
Authors: Usman Saleem, Mian Sohail Akram, Muhammad Fahad Ullah, Faisal Rehman, Muhammad Riaz Khan. Open Journal of Geology, 2018, Issue 10, pp. 1002-1018 (17 pages).
In the current study, an attempt is made to fill missing geopotential height data over Pakistan and to identify the optimum interpolation method. Over the last thirteen years, geopotential height values were missing over Pakistan; these gaps were filled using interpolation techniques, namely bilinear interpolation [BI], nearest neighbor [NN], natural neighbor [NI], and inverse distance weighting [IDW]. The imputations were judged on the basis of performance parameters including root mean square error [RMSE], mean absolute error [MAE], correlation coefficient [Corr], and coefficient of determination [R²]. The NN and IDW imputations were not precise or accurate, whereas the natural neighbor and bilinear interpolations fitted the dataset very well. A good correlation was found for the natural neighbor imputations, which fit the geopotential height surface closely. The maximum and minimum root mean square error values ranged from ±5.10 to ±2.28 m, respectively, and the mean absolute error was close to 1. Validation of the imputations revealed that natural neighbor interpolation produced more accurate results than BI. It can be concluded that natural neighbor interpolation was the best-suited technique for filling missing geopotential height data from the AQUA satellite.
Keywords: AIRX3STML; Missing data imputations; Missing climatic data; Upper air temperature
15. Fraction of Missing Information (γ) at Different Missing Data Fractions in the 2012 NAMCS Physician Workflow Mail Survey
Authors: Qiyuan Pan, Rong Wei. Applied Mathematics, 2016, Issue 10, pp. 1057-1067 (11 pages).
In his 1987 classic book on multiple imputation (MI), Rubin used the fraction of missing information, γ, to define the relative efficiency (RE) of MI as RE = (1 + γ/m)^(-1/2), where m is the number of imputations, leading to the conclusion that a small m (≤5) would be sufficient for MI. However, evidence has been accumulating that many more imputations are needed. Why would the apparently sufficient m deduced from the RE actually be too small? The answer may lie with γ. In this research, γ was determined at missing-data fractions (δ) of 4%, 10%, 20%, and 29% using the 2012 Physician Workflow Mail Survey of the National Ambulatory Medical Care Survey (NAMCS). The γ values were strikingly small, ranging in order from 10^(-6) to 0.01. As δ increased, γ usually increased but sometimes decreased. How the data were analysed had the dominating effect on γ, overshadowing the effect of δ. The results suggest that it is impossible to predict γ from δ and that it may not be appropriate to use the γ-based RE to determine a sufficient m.
Keywords: Multiple imputation; Fraction of missing information (γ); Sufficient number of imputations; Missing data; NAMCS
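Plugging a few values into the RE formula quoted above shows why a small m looks sufficient when γ is tiny; the γ values here are chosen only for illustration.

```python
# Relative efficiency RE = (1 + gamma/m)**(-0.5), as quoted in the abstract (illustrative gamma values).
gammas = [0.01, 0.1, 0.3, 0.5]
ms = [2, 5, 20, 100]

for g in gammas:
    row = ", ".join(f"m={m}: {(1 + g / m) ** -0.5:.4f}" for m in ms)
    print(f"gamma={g:>4}: {row}")
```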
16. Genotype Imputation for Genome-Wide Association Studies Based on IMPUTE2
Authors: Xin Junyi, Ge Yuqiu, Shao Wei, Du Mulong, Ma Gaoxiang, Chu Haiyan, Wang Meilin, Zhang Zhengdong. Science Technology and Engineering (Peking University Core), 2018, Issue 15, pp. 56-60.
Most genome-wide association studies (GWAS) use different genotyping chips, so the number of genotyped variants and the selection criteria differ among studies. Genotype imputation fills in untyped loci on the basis of existing genotype data. We used the IMPUTE2 software to perform genome-wide imputation of gastric cancer GWAS data from the database of Genotypes and Phenotypes (dbGaP), in order to describe the principle and workflow of genome-wide imputation in detail. Taking chromosome 9 as an example and using the 1000 Genomes Project reference panel, the workflow covers pre-imputation quality control, pre-phasing, imputation, post-imputation quality assessment, and association analysis of the imputed data. Chromosome 9 contained 21,033 genotyped loci before imputation and 1,630,406 SNPs after imputation, of which 817,494 SNPs had INFO > 0.3 and 584,755 were of high imputation quality (INFO > 0.5). IMPUTE2 can impute untyped genotypes quickly and accurately, allowing multiple GWAS datasets to be merged onto the same set and density of loci; joint analysis of the merged data improves statistical power and helps identify new genetic susceptibility loci.
Keywords: GWAS; Genotype imputation; IMPUTE2; Imputation quality
17. Imputation from SNP chip to sequence: a case study in a Chinese indigenous chicken population (Cited by: 6)
Authors: Shaopan Ye, Xiaolong Yuan, Xiran Lin, Ning Gao, Yuanyu Luo, Zanmou Chen, Jiaqi Li, Xiquan Zhang, Zhe Zhang. Journal of Animal Science and Biotechnology (SCIE, CAS, CSCD), 2018, Issue 2, pp. 294-305 (12 pages).
Background: Genome-wide association studies and genomic predictions are thought to be optimized by using whole-genome sequence (WGS) data. However, sequencing thousands of individuals of interest is expensive. Imputation from SNP panels to WGS data is an attractive and less expensive approach to obtain WGS data. The aims of this study were to investigate the accuracy of imputation and to provide insight into the design and execution of genotype imputation. Results: We genotyped 450 chickens with a 600 K SNP array and sequenced 24 key individuals by whole-genome re-sequencing. The accuracy of imputation from putative 60 K and 600 K array data to WGS data was 0.620 and 0.812 for Beagle, and 0.810 and 0.914 for FImpute, respectively. By increasing the sequencing cost from 24× to 144×, the imputation accuracy increased from 0.525 to 0.698 for Beagle and from 0.654 to 0.823 for FImpute. With fixed sequencing depth (12×), increasing the number of sequenced animals from 1 to 24 improved accuracy from 0.421 to 0.897 for FImpute and from 0.396 to 0.777 for Beagle. Using optimally selected key individuals resulted in higher imputation accuracy than using randomly selected individuals as the reference population for re-sequencing. With a fixed reference population size (24), imputation accuracy increased from 0.654 to 0.875 for FImpute and from 0.512 to 0.762 for Beagle as the sequencing depth increased from 1× to 12×. With a given total genotyping cost, accuracy increased with the size of the reference population for FImpute, but this pattern did not hold for Beagle, which showed the highest accuracy at six-fold coverage for the scenarios used in this study. Conclusions: We comprehensively investigated the impacts of several key factors on genotype imputation. Generally, increasing the sequencing cost gave higher imputation accuracy, but with a fixed sequencing cost an optimal imputation strategy can enhance the performance of whole-genome prediction and GWAS. An optimal imputation strategy should comprehensively consider the size of the reference population, the imputation algorithm, marker density, the population structure of the target population, and the method used to select key individuals. This work sheds additional light on how to design and execute genotype imputation for livestock populations.
Keywords: Chickens; Imputation; Re-sequencing; SNP
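Imputation accuracy in studies like this is commonly computed as the correlation between true and imputed genotype dosages; the abstract does not state the exact formula, so that common definition, along with the toy genotype matrix below, is an assumption.

```python
# Imputation accuracy as the per-SNP correlation between true and imputed genotype dosages (toy data).
import numpy as np

rng = np.random.default_rng(5)
n_animals, n_snps = 200, 1000

true_geno = rng.binomial(2, 0.3, size=(n_animals, n_snps)).astype(float)   # 0/1/2 dosages
noise = rng.normal(0, 0.4, size=true_geno.shape)
imputed_geno = np.clip(true_geno + noise, 0, 2)                            # toy "imputed" dosages

# Per-SNP correlation, then averaged across SNPs.
per_snp = [np.corrcoef(true_geno[:, j], imputed_geno[:, j])[0, 1] for j in range(n_snps)]
print(f"mean per-SNP imputation accuracy = {np.nanmean(per_snp):.3f}")
```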
18. New insights into the associations among feed efficiency, metabolizable efficiency traits and related QTL regions in broiler chickens (Cited by: 4)
Authors: Wei Li, Ranran Liu, Maiqing Zheng, Furong Feng, Dawei Liu, Yuming Guo, Guiping Zhao, Jie Wen. Journal of Animal Science and Biotechnology (SCIE, CAS, CSCD), 2020, Issue 4, pp. 950-964 (15 pages).
Background: Improving feed efficiency would increase profitability for producers while also reducing the environmental footprint of livestock production. This study was conducted to investigate the relationships among feed efficiency traits and metabolizable efficiency traits in 180 male broilers. Significant loci and genes affecting the metabolizable efficiency traits were explored with an imputation-based genome-wide association study. The traits measured or calculated comprised three growth traits, five feed efficiency related traits, and nine metabolizable efficiency traits. Results: Residual feed intake (RFI) showed moderate to high, positive phenotypic correlations with eight other measured traits, including average daily feed intake (ADFI), dry excreta weight (DEW), gross energy excretion (GEE), crude protein excretion (CPE), metabolizable dry matter (MDM), nitrogen-corrected apparent metabolizable energy (AMEn), abdominal fat weight (AbF), and percentage of abdominal fat (AbP). Greater correlations were observed between the growth traits and the feed conversion ratio (FCR) than with RFI. In addition, RFI, FCR, ADFI, DEW, GEE, CPE, MDM, AMEn, AbF, and AbP were lower in low-RFI birds than in high-RFI birds (P < 0.01 or P < 0.05), whereas the coefficients of MDM and MCP of low-RFI birds were greater than those of high-RFI birds (P < 0.01). Five narrow QTLs for metabolizable efficiency traits were detected, including one 82.46-kb region for DEW and GEE on Gallus gallus chromosome (GGA) 26, one 120.13-kb region for MDM and AMEn on GGA1, one 691.25-kb region for the coefficients of MDM and AMEn on GGA5, one region for the coefficients of MDM and MCP on GGA2 (103.45-103.53 Mb), and one 690.50-kb region for the coefficient of MCP on GGA14. Linkage disequilibrium (LD) analysis indicated that the five regions contained high LD blocks, as well as the genes chromosome 26 C6orf106 homolog (C26H6orf106), LOC396098, SH3 and multiple ankyrin repeat domains 2 (SHANK2), ETS homologous factor (EHF), and histamine receptor H3-like (HRH3L), which are known to be involved in the regulation of neurodevelopment, cell proliferation and differentiation, and food intake. Conclusions: Selection for low RFI significantly decreased chicken feed intake, excreta output, and abdominal fat deposition, and increased nutrient digestibility without changing weight gain. Five novel QTL regions involved in the control of metabolizable efficiency in chickens were identified. These results, combining nutritional and genetic approaches, should facilitate novel insights into improving feed efficiency in poultry and other species.
Keywords: Broiler; Feed efficiency; Genome-wide association study; Imputation; Metabolizable efficiency
19. Comparisons of improved genomic predictions generated by different imputation methods for genotyping by sequencing data in livestock populations (Cited by: 4)
Authors: Xiao Wang, Guosheng Su, Dan Hao, Mogens Sandø Lund, Haja N. Kadarmideen. Journal of Animal Science and Biotechnology (CAS, CSCD), 2020, Issue 2, pp. 316-326 (11 pages).
Background: Genotyping by sequencing (GBS) still has problems with missing genotypes. Imputation is important when using GBS for genomic prediction, especially at low sequencing depths, because of the large number of missing genotypes. Minor allele frequency (MAF) is widely used as a marker-editing criterion for genomic prediction. In this study, three imputation methods (Beagle, IMPUTE2 and FImpute) and four MAF editing criteria were investigated with regard to the imputation accuracy of missing genotypes and the accuracy of genomic predictions, based on simulated data for a livestock population. Results: Four MAF criteria (no MAF limit, MAF ≥ 0.001, MAF ≥ 0.01 and MAF ≥ 0.03) were used to edit the marker data before imputation. Beagle, IMPUTE2 and FImpute were applied to impute the original GBS data; additionally, IMPUTE2 was used to impute the expected genotype dosage after genotype correction (GcIM). The reliability of genomic predictions was calculated using GBS and imputed GBS data. The results showed that imputation accuracies were the same for the three imputation methods, except at a sequencing read depth of 2, where FImpute had slightly lower imputation accuracy than Beagle and IMPUTE2. GcIM was the best for all imputations at depths of 4, 5 and 10, but the worst at depth 2. For genomic prediction, retaining more SNPs with no MAF limit resulted in higher reliability. As the depth increased to 10, the prediction reliabilities approached those obtained using the true genotypes at the GBS loci. Beagle and IMPUTE2 gave the largest increases in prediction reliability, of 5 percentage points, and FImpute gained 3 percentage points at depth 2. The best predictions were observed at depths of 4, 5 and 10 using GcIM, but the worst prediction was also observed using GcIM at depth 2. Conclusions: The current study showed that imputation accuracies were relatively low for GBS at low depths and high for GBS at high depths. Imputation resulted in larger gains in the reliability of genomic predictions for GBS at lower depths. These results suggest applying IMPUTE2 to corrected GBS data (GcIM) to improve genomic predictions at higher depths, while FImpute could be a good alternative for routine imputation.
Keywords: Genomic prediction; Genotyping by sequencing; Imputation; MAF; Simulation
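MAF editing before imputation reduces to computing the minor allele frequency per SNP from the observed dosages and keeping SNPs above a threshold; the sketch below applies the four cut-offs compared above to an assumed toy GBS matrix, with missing entries coded as -1 by assumption.

```python
# MAF editing of a genotype matrix before imputation (toy 0/1/2 dosages; -1 marks missing by assumption).
import numpy as np

rng = np.random.default_rng(7)
geno = rng.binomial(2, rng.uniform(0.001, 0.5, 2000), size=(300, 2000)).astype(float)
geno[rng.random(geno.shape) < 0.4] = -1                     # sparse GBS-style missingness

def maf(column):
    observed = column[column >= 0]
    p = observed.mean() / 2.0                               # allele frequency from dosages
    return min(p, 1.0 - p)

mafs = np.array([maf(geno[:, j]) for j in range(geno.shape[1])])
for threshold in (0.0, 0.001, 0.01, 0.03):                  # the four editing criteria compared in the study
    print(f"MAF >= {threshold:<5}: {np.sum(mafs >= threshold)} SNPs retained")
```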
20. Energy Consumption Prediction of a CNC Machining Process With Incomplete Data (Cited by: 5)
Authors: Jian Pan, Congbo Li, Ying Tang, Wei Li, Xiaoou Li. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, Issue 5, pp. 987-1000 (14 pages).
Energy consumption prediction of a CNC machining process is important for energy-efficiency optimization strategies. To improve generalization ability, more and more parameters are acquired for energy prediction modeling, yet the data collected from workshops may be incomplete because of misoperation, unstable network connections, frequent transfers, and similar issues. This work proposes a framework for energy modeling based on incomplete data to address this issue. First, some necessary preliminary operations are applied to the incomplete data sets. Then, missing values are estimated to generate a new complete data set using generative adversarial imputation nets (GAIN). Next, the gene expression programming (GEP) algorithm is utilized to train the energy model on the generated data sets. Finally, the predictive accuracy of the obtained model is tested. Computational experiments are designed to investigate the performance of the proposed framework with different rates of missing data. Experimental results demonstrate that even when the missing-data rate increases to 30%, the proposed framework can still make efficient predictions, with a corresponding RMSE and MAE of 0.903 kJ and 0.739 kJ, respectively.
Keywords: Energy consumption prediction; Incomplete data; Generative adversarial imputation nets (GAIN); Gene expression programming (GEP)