Abstract: Studies on high-throughput global gene expression using microarray technology have generated ever larger amounts of systematic transcriptome data. A major challenge in exploiting these heterogeneous datasets is how to normalize the expression profiles with inter-assay methods. Various linear and non-linear normalization methods have been developed, which essentially rely on the hypothesis that the true or perceived logarithmic fold-change distributions between two different assays are symmetric. However, asymmetric gene expression changes are frequently observed, leading to suboptimal normalization results and, in consequence, potentially to thousands of false calls. We have therefore specifically investigated asymmetric comparative transcriptome profiles and developed normalization using weighted negative second-order exponential error functions (NeONORM) for robust, global inter-assay normalization. NeONORM efficiently damps true gene regulatory events in order to minimize their misleading impact on the normalization process. We evaluated NeONORM's applicability on artificial and true experimental datasets, both of which demonstrated that NeONORM can be systematically applied to inter-assay and inter-condition comparisons.
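The damping idea can be sketched numerically. Below is a minimal, hypothetical illustration (not the published NeONORM implementation): a global shift between two log-signal profiles is re-estimated iteratively under a negative second-order exponential weight, exp(-r^2), which suppresses large fold-changes so that genuinely regulated genes barely influence the normalization constant. The function name and the exact weight form are assumptions made for this sketch.

```python
import numpy as np

def neonorm_like_shift(log_a, log_b, iterations=20):
    """Estimate a global normalization shift between two log-signal
    profiles, down-weighting large fold-changes with a negative
    second-order exponential weight (illustrative form only)."""
    d = log_a - log_b
    c = np.median(d)                    # robust starting point
    for _ in range(iterations):
        r = d - c
        w = np.exp(-r**2)               # damp probable true regulatory events
        c = np.sum(w * d) / np.sum(w)   # weighted re-estimate of the shift
    return c
```

Applying `log_b + neonorm_like_shift(log_a, log_b)` then aligns the bulk of the unregulated genes even when the regulated genes change asymmetrically in one direction.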
Abstract: The problems of online pricing with offline data, among other similar problems of online decision making with offline data, aim at designing and evaluating online pricing policies in the presence of a certain amount of existing offline data. To evaluate pricing policies when offline data are available, the decision maker can position herself either at the time point when the offline data have already been observed and are viewed as deterministic, or at the time point when the offline data have not yet been generated and are viewed as stochastic. We develop a framework to discuss how and why these two positions are relevant to online policy evaluation, from a worst-case perspective and from a Bayesian perspective. We then use a simple online pricing setting with offline data to illustrate the construction of optimal policies under the two approaches and discuss their differences, in particular whether the search for the optimal policy can be decomposed into independent subproblems optimized separately, and whether a deterministic optimal policy exists.
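To make the Bayesian viewpoint concrete, here is a small, hypothetical sketch (the Bernoulli-conversion model, the Beta priors, and the `PriceArm` structure are illustrative assumptions, not the paper's construction): the offline data enter only through posterior counts, and a deterministic policy simply picks the candidate price with the highest posterior expected revenue.

```python
from dataclasses import dataclass

@dataclass
class PriceArm:
    price: float
    sales: int    # offline purchases observed at this price
    trials: int   # offline offers made at this price

def bayes_price(arms, a0=1.0, b0=1.0):
    """Pick the price maximizing posterior expected revenue,
    price * E[conversion rate], under independent Beta(a0, b0)
    priors on each arm's conversion probability."""
    def expected_revenue(arm):
        post_mean = (a0 + arm.sales) / (a0 + b0 + arm.trials)
        return arm.price * post_mean
    return max(arms, key=expected_revenue).price
```

Here the decision maker stands before the offline data are conditioned on as fixed; a worst-case evaluation would instead rank policies by their minimum revenue over a set of admissible demand curves.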
Abstract: Novel microarray technologies such as the AB1700 platform from Applied Biosystems promise significant increases in signal dynamic range and higher sensitivity for weakly expressed transcripts. We have compared a representative set of AB1700 data with a similarly representative Affymetrix HG-U133A dataset. The AB1700 design extends the dynamic detection range at the lower bound by one order of magnitude. The lognormal signal distribution profiles of these high-sensitivity data need to be represented by two independent distributions; the additional second distribution covers those transcripts that would have gone undetected with the Affymetrix technology. The signal-dependent variance distribution in the AB1700 data is a non-trivial function of signal intensity, describable by a composite function. The drastically different structure of these high-sensitivity transcriptome profiles requires adaptation or even redevelopment of standard microarray analysis methods. Based on the statistical properties, we have derived a signal-variance distribution model for AB1700 data that is necessary for such development. Interestingly, the dual lognormal distribution observed in the AB1700 data reflects two fundamentally different biological mechanisms of transcription initiation.
Abstract: We have previously developed a combined signal/variance distribution model that accounts for the particular statistical properties of datasets generated on the Applied Biosystems AB1700 transcriptome system. Here we show that this model can be used to efficiently generate synthetic datasets with statistical properties virtually identical to those of the actual data, with the aid of the Java application ace.map creator 1.0 that we have developed. The fundamentally different structure of AB1700 transcriptome profiles requires re-evaluation, adaptation, or even redevelopment of many standard microarray analysis methods, in order to avoid misinterpretation of the data on the one hand, and to draw full benefit from their increased specificity and sensitivity on the other. Our composite data model and the ace.map creator 1.0 application thereby not only present proof of the correctness of our parameter estimation, but also provide a tool for the generation of synthetic test data that will be useful for further development and testing of analysis methods.
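A toy version of such synthetic-data generation can be sketched as follows. The two-component (dual lognormal) mixture mirrors the structure of the model described above, but every parameter value below is invented for illustration; these are not fitted AB1700 estimates, and this is not the ace.map creator 1.0 code.

```python
import numpy as np

def synth_signals(n, rng=None, w=0.35, mu1=4.5, s1=0.9, mu2=8.0, s2=1.2):
    """Draw n synthetic signals from a two-component lognormal mixture:
    with probability w from the low-intensity component (mu1, s1),
    otherwise from the high-intensity component (mu2, s2).
    All parameters are hypothetical placeholders."""
    if rng is None:
        rng = np.random.default_rng()
    low = rng.random(n) < w                 # component indicator per transcript
    log_s = np.where(low,
                     rng.normal(mu1, s1, n),
                     rng.normal(mu2, s2, n))
    return np.exp(log_s)                    # back to the linear signal scale
```

A real generator would additionally couple each signal to a variance drawn from the signal-dependent variance model, so that downstream analysis methods can be stress-tested against the full composite distribution.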
Funding: funds from the Centre National de la Recherche Scientifique, the Agence Nationale pour la Recherche (Grant No. ANR-07-PHYSIO-013-01), the Fondation pour la Recherche sur l'Hypertension Arterielle (Grant No. AO 2007), the Agence Nationale de Recherches sur le SIDA et les hepatites virales (ANRS), and the Genopole Evry (all awarded to AB); JFBG was recipient of a CONACYT Mexico PhD Fellowship (Grant No. 207676/302245).
Abstract: The problem of identifying differential activity, such as in gene expression, is a major task in biostatistics and bioinformatics. Equally important, though much less frequently studied, is the question of similar activity from one biological condition to another. The fold-change, or ratio, is usually considered a relevant criterion for stating difference and similarity between measurements. Importantly, no statistical method for the concomitant evaluation of similarity and distinctness currently exists for biological applications. Modern microarray, digital PCR (dPCR), and Next-Generation Sequencing (NGS) technologies frequently provide a means of estimating the coefficient of variation for individual measurements. Using the fold-change, and assuming that measurements are normally distributed with known variances, we designed a novel statistical test that detects concomitantly, within the same formalism, differentially and similarly expressed genes (http://cds.ihes.fr). Given two sets of gene measurements in different biological conditions, the probabilities of making type I and type II errors in stating that a gene is differentially or similarly expressed from one condition to the other can be calculated, and a confidence interval for the fold-change can be delineated. Finally, we demonstrate that the assumption of normality can be relaxed to consider arbitrary distributions numerically. The Concomitant evaluation of Distinctness and Similarity (CDS) statistical test correctly estimates similarities and differences between measurements of gene expression. The implementation, being time- and memory-efficient, allows the use of the CDS test in high-throughput data analysis such as microarray, dPCR, and NGS experiments. Importantly, the CDS test can be applied to the comparison of single measurements (N = 1) provided the variance (or coefficient of variation) of the signals is known, making CDS a valuable tool also in biomedical analysis, where typically a single measurement per subject is available.
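As an illustration of the distinctness half of such a test, the sketch below applies a plain two-sided z-test to the difference of two single log-scale measurements (N = 1) with known standard deviations, under the normality assumption stated above. It is a simplified analogue written for this listing, not the published CDS formalism, and the function name is invented.

```python
import math

def fold_change_z(x1, sd1, x2, sd2):
    """Two-sided p-value for a non-zero log fold-change between two
    single measurements with known standard deviations, assuming
    normality. Returns (z statistic, p-value)."""
    z = (x1 - x2) / math.sqrt(sd1**2 + sd2**2)
    # Phi(|z|) via the error function; p = 2 * (1 - Phi(|z|))
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p
```

The similarity direction of CDS additionally bounds the fold-change within an equivalence margin; with known variances, a confidence interval for the log fold-change is simply `(x1 - x2) ± z_alpha * sqrt(sd1**2 + sd2**2)`.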
Funding: supported by the E.R.C. Advanced Grant (No. 291214) BLOWDISOL.
Abstract: The author considers mass-critical nonlinear Schrödinger and Korteweg-de Vries equations. A review of results related to the blow-up of solutions of these equations is given.
Abstract: Many measurements of B decays involve admixtures of B hadrons. Previously we arbitrarily included such admixtures in the B± section, but because of their importance we have created two new sections:
Funding: supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231; the U.S. National Science Foundation under Agreement No. PHY-0652989; the European Laboratory for Particle Physics (CERN); an implementing arrangement between the governments of Japan (MEXT: Ministry of Education, Culture, Sports, Science and Technology) and the United States (DOE) on cooperative research and development; and the Italian National Institute of Nuclear Physics (INFN). B.C.F. was supported by the U.S. National Science Foundation Grant PHY-1214082.
Abstract: The Review summarizes much of particle physics and cosmology. Using data from previous editions, plus 3,283 new measurements from 899 papers, we list, evaluate, and average measured properties of gauge bosons and the recently discovered Higgs boson, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as heavy neutrinos, supersymmetric and technicolor particles, axions, dark photons, etc. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as Supersymmetry, Extra Dimensions, Particle Detectors, Probability, and Statistics. Among the 112 reviews are many that are new or heavily revised, including those on: Dark Energy, Higgs Boson Physics, Electroweak Model, Neutrino Cross Section Measurements, Monte Carlo Neutrino Generators, Top Quark, Dark Matter, Dynamical Electroweak Symmetry Breaking, Accelerator Physics of Colliders, High-Energy Collider Parameters, Big Bang Nucleosynthesis, and Astrophysical Constants and Cosmological Parameters. A booklet is available containing the Summary Tables and abbreviated versions of some of the other sections of this full Review. All tables, listings, and reviews (and errata) are also available on the Particle Data Group website: http://pdg.lbl.gov.
Funding: supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231; by the U.S. National Science Foundation under Agreement No. PHY-0652989; by the European Laboratory for Particle Physics (CERN); by an implementing arrangement between the governments of Japan (MEXT: Ministry of Education, Culture, Sports, Science and Technology) and the United States (DOE) on cooperative research and development; and by the Italian National Institute of Nuclear Physics (INFN).
Abstract: 1. Overview. The Review of Particle Physics and its abbreviated version, the Particle Physics Booklet, are reviews of the field of particle physics. This complete Review includes a compilation and evaluation of data on particle properties, called the "Particle Listings." These Listings include 3,283 new measurements from 899 papers, in addition to the 32,153 measurements from 8,944 papers that first appeared in previous editions [1].
Abstract: Written by R.L. Kelly (LBNL). The most commonly used SU(3) isoscalar factors, corresponding to the singlet, octet, and decuplet content of 8⊗8 and 10⊗8, are shown at the right. The notation uses particle names to identify the coefficients, so that the pattern of relative couplings may be seen at a glance. We illustrate the use of the coefficients below.
Abstract: CHARMED BARYONS. Revised March 2012 by C.G. Wohl (LBNL). There are 17 known charmed baryons, and four other candidates not well enough established to be promoted to the Summary Tables.* Fig. 1(a) shows the mass spectrum,
Abstract: 33.1. Introduction. This review summarizes the detector technologies employed at accelerator particle physics experiments. Several of these detectors are also used in a non-accelerator context, and examples of such applications will be provided. The detector techniques that are specific to non-accelerator particle physics experiments are the subject of Chap.
Funding: supported by PAPIIT (DGAPA-UNAM) project IN106913 and CONACyT (Mexico) project 151234; support by the Mainz Institute for Theoretical Physics (MITP), where part of this work was completed. A.F. is supported in part by the National Science Foundation under grant No. PHY-1212635.
Abstract: Revised November 2013 by J. Erler (U. Mexico) and A. Freitas (Pittsburgh U.). 10.1 Introduction; 10.2 Renormalization and radiative corrections
Abstract: Revised August 2013 by M.J. Syphers (MSU) and F. Zimmermann (CERN). 29.1. Luminosity. This article provides background for the High-Energy Collider Parameter Tables that follow. The number of events, N_exp, is the product of the cross section of interest,
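The event-count relation introduced in that article, N_exp = σ × ∫L dt, reduces to a one-line computation once units are made consistent. The sketch below uses femtobarns and inverse femtobarns; the optional efficiency factor is an illustrative addition of this listing, not part of the quoted definition, and all numbers in the test are hypothetical.

```python
def expected_events(cross_section_fb, integrated_lumi_fb_inv, efficiency=1.0):
    """Expected event count: cross section [fb] times integrated
    luminosity [fb^-1], optionally scaled by a detection efficiency.
    The units cancel, leaving a dimensionless count."""
    return cross_section_fb * integrated_lumi_fb_inv * efficiency
```

For instance, a hypothetical 50 fb process observed with 20 fb^-1 of data yields 1000 expected events before any efficiency loss.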