In this study, methods based on the distribution model (with and without subjective judgment) were used for the separation of anomalous zones; these comprise two different methods, U-spatial statistics and the mean plus multiples of the standard deviation (X̄+nS). The primary purpose is to compare the results of these methods with each other. To increase the accuracy of the comparison, regional geochemical data were used from an area where occurrences and mineralization zones of epithermal gold have been reported. The study area is part of the Hashtjin geological map, which structurally belongs to the fold-and-thrust belt and to the Alborz Tertiary magmatic complex. Samples were taken from secondary lithogeochemical environments. Au data associated with epithermal gold deposits were used to investigate the efficacy of the two methods. In the U-spatial statistics method, threshold criteria were used to determine the threshold values, while in the X̄+nS method the element enrichment index of the rock units of the region was obtained by grouping these units, and the anomalous areas were then identified from these criteria. The methods were compared with respect to the positions of the known occurrences and those obtained by the methods, the flexibility of each method in separating anomalous zones, and the two-dimensional spatial correlation of As, Pb, and Ag with Au. The ability of both methods to identify potential areas is acceptable; between the two, one method, by virtue of its threshold criteria, appears to offer a high degree of flexibility in separating anomalous regions for epithermal-type gold deposits.
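As a rough illustration of the X̄+nS thresholding described above, the computation reduces to a one-line statistic per element; the sample values and the choice of n below are invented, not taken from the paper:

```python
import numpy as np

def xbar_ns_threshold(values, n=2):
    """Anomaly threshold as mean plus n standard deviations (X-bar + nS).

    values : 1-D array of element concentrations (e.g., Au in ppb)
    n      : number of standard deviations; 1, 2, or 3 are common choices
    """
    values = np.asarray(values, dtype=float)
    return values.mean() + n * values.std(ddof=1)

# Hypothetical stream-sediment Au data (ppb); samples above the
# threshold would be flagged as belonging to an anomalous zone.
au = np.array([2.1, 3.4, 2.8, 2.5, 40.0, 3.1, 2.9, 55.0, 3.3])
t = xbar_ns_threshold(au, n=2)
print(f"threshold = {t:.1f} ppb, anomalous samples = {au[au > t]}")
```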
A multi-objective linear programming problem is obtained from a fuzzy linear programming problem, because the fuzzy programming method is applied during the solution. The multi-objective linear programming problem can be converted into a single objective function by various methods, such as Chandra Sen's method, the weighted sum method, the ranking function method, and the statistical averaging method. In this paper, both Chandra Sen's method and the statistical averaging method are used to form a single objective function from the multi-objective function. Two multi-objective programming problems are solved to verify the results: one is a numerical example and the other a real-life example. The problems are then solved by the ordinary simplex method and by the fuzzy programming method. The fuzzy programming method is seen to give better optimal values than the ordinary simplex method.
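To make the statistical averaging step concrete, here is a minimal sketch: the objective coefficient vectors are averaged and the resulting single-objective LP is solved with scipy (the constraint data are invented, and scipy's solver stands in for a hand-worked simplex):

```python
import numpy as np
from scipy.optimize import linprog

# Two maximization objectives over the same feasible region
# (coefficients are hypothetical, for illustration only).
c1 = np.array([3.0, 2.0])   # objective 1: max 3x + 2y
c2 = np.array([1.0, 4.0])   # objective 2: max  x + 4y

# Statistical averaging method: combine objectives by their mean.
c_avg = (c1 + c2) / 2.0

# Constraints: x + y <= 4, x + 3y <= 6, with x, y >= 0.
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]

# linprog minimizes, so negate the averaged objective to maximize.
res = linprog(-c_avg, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print("optimal x, y:", res.x, " combined objective:", -res.fun)
```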
A method named the interval analysis method, which solves for the buckling load of composite laminates with uncertainties, is presented. Based on interval mathematics and a Taylor series expansion, the interval analysis method is used to deal with the uncertainties. The probabilistic characteristics of the uncertain variables need not be known; only limited information on the physical properties of the material, namely the upper and lower bounds of each uncertain variable, is required. The interval of the structural response can therefore be obtained with little computational effort. The interval analysis method is effective when a probabilistic approach cannot work well because of small samples and deficient statistical characteristics. For the buckling load of special cross-ply laminates and antisymmetric angle-ply laminates with all edges simply supported, calculations and comparisons between the interval analysis method and the probabilistic method are performed.
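A generic sketch of the interval analysis idea, propagating variable bounds through a first-order Taylor expansion; the response function below is a toy stand-in, not the laminate buckling formula from the paper:

```python
import numpy as np

def interval_response(f, x_mid, x_rad, h=1e-6):
    """First-order interval bounds of f at midpoints x_mid with radii x_rad:
    linearize by central differences, then take the worst-case spread
    sum(|df/dx_i| * rad_i), as in Taylor-based interval analysis."""
    x_mid = np.asarray(x_mid, float)
    grad = np.empty_like(x_mid)
    for i in range(x_mid.size):
        step = np.zeros_like(x_mid)
        step[i] = h * max(abs(x_mid[i]), 1.0)   # step scaled to the variable
        grad[i] = (f(x_mid + step) - f(x_mid - step)) / (2 * step[i])
    spread = np.abs(grad) @ np.asarray(x_rad, float)
    mid = f(x_mid)
    return mid - spread, mid + spread

# Toy stand-in response: a "load" proportional to E * t**3 (illustrative).
f = lambda x: x[0] * x[1] ** 3
lo, hi = interval_response(f, x_mid=[130e9, 2e-3], x_rad=[5e9, 0.05e-3])
print(f"response interval: [{lo:.3e}, {hi:.3e}]")
```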
Aim: To improve the efficiency of fatigue material tests and the associated statistical treatment of test data. Methods: The least squares approach and other special treatments were used. Results and Conclusion: The concepts of each phase in fatigue testing and statistical treatment are clarified. The proposed method has three important properties. The reduced number of specimens lowers test expenditures, and the whole test procedure gains flexibility because there is no need to conduct many tests at the same stress level, as in the traditional approach.
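Where the least squares treatment of fatigue data is concerned, a common concrete form is the log-linear (Basquin-type) S-N fit sketched below; the specimen data are invented and the exact treatment in the paper may differ:

```python
import numpy as np

# Hypothetical fatigue data: stress amplitude S (MPa), cycles to failure N.
S = np.array([400, 350, 300, 260, 230], dtype=float)
N = np.array([2.0e4, 8.0e4, 3.5e5, 1.2e6, 5.0e6])

# Basquin-type model: log10(N) = a + b * log10(S); fit by least squares.
b, a = np.polyfit(np.log10(S), np.log10(N), deg=1)
print(f"log10(N) = {a:.2f} + {b:.2f} * log10(S)")

# Predicted life at a new stress level (illustrative).
S_new = 280.0
print(f"predicted N at {S_new} MPa: {10 ** (a + b * np.log10(S_new)):.3g}")
```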
In this study, geochemical anomaly separation was carried out with methods based on the distribution model: the probability diagram (MPD), fractal (concentration-area technique), and U-statistic methods. The main objective is to evaluate the efficiency and accuracy of these methods in separating anomalies related to shear-zone gold mineralization. For this purpose, samples were taken from the secondary lithogeochemical environment (stream sediment samples) over the gold mineralization in Saqqez, NW Iran. Interpretation of the histograms and diagrams showed that the MPD is capable of identifying two phases of mineralization. The fractal method could separate only one phase of change, based on the fractal dimension of the high-concentration areas of Au. The spatial analysis showed two mixed subpopulations beyond U=0 and another subpopulation with very high U values. The MPD analysis followed the spatial analysis and shows the details of the variations. Six mineralized zones detected from local geochemical exploration results were used to validate the methods mentioned above. The MPD method was able to identify more than 90% of the anomalous areas, whereas the other two methods identified at most 60%. The MPD method uses the raw data without any estimation of the concentrations and requires a minimum of calculation to determine the threshold values; it is therefore more robust than the other methods. The spatial analysis identified the details of the geological and mineralization events that affected the study area. MPD is recommended as the best method, with spatial U-analysis the next most reliable. The fractal method could reveal more detail of the events and variations in the area with an asymmetrical grid net and a higher sampling density, or at the detailed exploration stage.
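A compact sketch of the concentration-area (C-A) fractal step on a synthetic grid: the area enclosed above each concentration threshold is plotted on log-log axes, and slope changes mark candidate thresholds (the data and the break-point heuristic here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic interpolated concentration grid: lognormal background plus
# one high-concentration patch standing in for mineralization.
grid = rng.lognormal(mean=1.0, sigma=0.4, size=(200, 200))
grid[80:95, 120:140] *= 8.0

cell_area = 1.0                      # area per grid cell, arbitrary units
thresholds = np.quantile(grid, np.linspace(0.05, 0.995, 40))
areas = np.array([(grid >= t).sum() * cell_area for t in thresholds])

# In the C-A method, log(area) vs log(threshold) is piecewise linear and
# a change of slope (fractal dimension) separates background from anomaly.
logc, loga = np.log10(thresholds), np.log10(areas)
slopes = np.diff(loga) / np.diff(logc)
break_idx = int(np.argmin(slopes))   # steepest segment ~ anomalous tail
print(f"candidate C-A threshold: {thresholds[break_idx]:.2f}")
```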
The correlation between close-in super-Earths and distant cold Jupiters in planetary systems has important implications for their formation and evolution. Contrary to some earlier findings, a recent study conducted by Bonomo et al. suggests that the occurrence of cold Jupiter companions is not excessive in super-Earth systems. Here we show that this discrepancy can be seen as a Simpson's paradox and is resolved once the metallicity dependence of the super-Earth-cold Jupiter relation is taken into account. A common feature is noticed: almost all the cold Jupiter detections with inner super-Earth companions are found around metal-rich stars. Focusing on the Sun-like hosts with super-solar metallicities, we show that the frequency of cold Jupiters conditioned on the presence of inner super-Earths is 39^(+12)_(-11)%, whereas the frequency of cold Jupiters in the same metallicity range is no more than 20%. Therefore, the occurrences of close-in super-Earths and distant cold Jupiters appear correlated around metal-rich hosts. The relation between the two types of planets remains unclear for metal-poor hosts because of the limited sample size and the much lower occurrence rate of cold Jupiters, but a correlation between the two cannot be ruled out.
We present a study of low surface brightness galaxies (LSBGs) selected by fitting the images of all the galaxies in the α.40 SDSS DR7 sample with two kinds of single-component models and two kinds of two-component (disk+bulge) models: single exponential, single Sérsic, exponential + de Vaucouleurs (exp+deV), and exponential + Sérsic (exp+ser). Under the criteria of B-band disk central surface brightness μ_(0,disk)(B) ≥ 22.5 mag arcsec^(-2) and axis ratio b/a > 0.3, we selected four non-edge-on LSBG samples, one from each of the models, containing 1105, 1038, 207, and 75 galaxies, respectively. There are 756 galaxies in common between the LSBGs selected by the exponential and Sérsic models, corresponding to 68.42% of the LSBGs selected by the exponential model and 72.83% of those selected by the Sérsic model; the remaining discrepancy is due to the difference in deriving μ_0 between the two models. Based on the fitting, in the range 0.5 ≤ n ≤ 1.5, the relation between the two μ_0 values can be written as μ_(0,Sérsic) - μ_(0,exp) = -1.34(n - 1). The LSBGs selected by the disk+bulge models (LSBG_2comps) are more massive than those selected by the single-component models (LSBG_1comp) and also show a larger disk component. Although the bulges in the majority of our LSBG_2comps are not prominent, more than 60% of them would not be selected if we adopted a single-component model only. We also identified 31 giant low surface brightness galaxies (gLSBGs) among the LSBG_2comps; they are located in the same region of the color-magnitude diagram as other gLSBGs. After comparing different gLSBG selection criteria, we find that, for gas-rich LSBGs, M_* > 10^(10) M_⊙ is the best criterion for distinguishing gLSBGs from normal LSBGs with bulges.
Glitch activity refers to the mean increase in pulsar spin frequency per year due to rotational glitches. It is an important tool for studying super-nuclear matter, using neutron star interiors as templates. Glitch events are typically observed in the spin frequency (ν) and frequency derivative (ν̇) of pulsars. The rate of glitch recurrence decreases as the pulsar ages, and the activity parameter is usually measured by linear regression of cumulative glitches over a given period. This method is effective for pulsars with multiple regular glitch events. However, owing to the scarcity of glitch events and the difficulty of monitoring all known pulsars, only a few have multiple records of glitch events, which limits the use of the activity parameter for studying neutron star interiors with multiple pulsars. In this study, we examined the relationship between the activity parameters and the pulsar spin parameters (spin frequency, frequency derivative, and characteristic age). We found that a quadratic function fits the relationship between activity parameters and spin parameters better than the commonly used linear functions. Using this result, we estimated the activity parameters of other pulsars that have no recorded glitches. Our analysis shows that the relationship between the estimated activity parameters and the pulsar spin parameters is consistent with that of the observed activity parameters in the ensemble of pulsars.
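The quadratic-versus-linear comparison amounts to polynomial fits of activity against a spin parameter; a sketch with invented numbers (the paper's actual fitting variables and model-selection statistics may differ):

```python
import numpy as np

# Hypothetical (log spin-down rate, glitch activity) pairs for pulsars
# with multiple recorded glitches; values are illustrative only.
log_nudot = np.array([-13.5, -12.8, -12.1, -11.5, -10.9, -10.2])
activity = np.array([0.02, 0.10, 0.35, 0.80, 1.60, 2.10])

lin = np.polyfit(log_nudot, activity, 1)
quad = np.polyfit(log_nudot, activity, 2)

def rss(coef):
    """Residual sum of squares of a polynomial fit."""
    return float(np.sum((np.polyval(coef, log_nudot) - activity) ** 2))

print(f"linear RSS = {rss(lin):.4f}, quadratic RSS = {rss(quad):.4f}")
# A clearly lower quadratic RSS (penalized for the extra parameter in
# practice, e.g., via AIC) would support a quadratic relation.
```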
A novel damage detection method is applied to a 3-story frame structure to obtain a statistical quantification control criterion for the existence, location, and identification of damage. The mean, standard deviation, and exponentially weighted moving average (EWMA) are applied to detect damage information according to statistical process control (SPC) theory. It is concluded that detection with the mean and the EWMA is insignificant, because the structural response is neither independent nor normally distributed. On the other hand, damage information is detected well with the standard deviation, because the influence of the data distribution is not pronounced for this parameter. A suitable moderate confidence level is explored for more significant damage location and quantification detection, and the impact of noise is investigated to illustrate the robustness of the method.
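A minimal EWMA control chart in the spirit of the SPC detection described above; the response series, baseline segment, and chart constants (λ, L) are generic textbook choices, not the paper's:

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, baseline=200):
    """EWMA statistic with time-varying control limits (standard SPC form).
    Points outside [lcl, ucl] are flagged as potential damage indicators."""
    x = np.asarray(x, float)
    mu0 = x[:baseline].mean()
    sigma = x[:baseline].std(ddof=1)
    z = np.empty_like(x)
    z[0] = mu0
    for t in range(1, x.size):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    i = np.arange(1, x.size + 1)
    half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, mu0 - half, mu0 + half

rng = np.random.default_rng(1)
resp = rng.normal(0.0, 1.0, 600)
resp[400:] += 0.8                  # simulated shift (e.g., stiffness loss)
z, lcl, ucl = ewma_chart(resp)
out = np.flatnonzero((z < lcl) | (z > ucl))
print("first out-of-control sample:", out[0] if out.size else "none")
```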
Identification of the modal parameters of a linear structure from output-only measurements has received much attention over the past decades. In this paper, the Natural Excitation Technique (NExT) is used to extract impulse-response signals from the structural responses, and the Eigensystem Realization Algorithm (ERA) is then utilized for modal identification. To discard the fictitious 'computational modes', a procedure called the Statistically Averaging Modal Frequency Method (SAMFM) is developed to distinguish the true modes from noise modes and to improve the precision of the identified modal frequencies of the structure. An offshore platform is modeled with the finite element method, and the theoretical modal parameters are obtained for comparison with the identified values. The dynamic responses of the platform under random wave loading are computed to provide the output signals used for identification with ERA. The simulation results demonstrate that the proposed method can determine the system modal frequencies with high precision.
We present a statistical method to derive the stellar density profiles of the Milky Way from spectroscopic survey data, taking selection effects into account. We assume that the selection function of the spectroscopic survey, which can be altered during observations and data reduction, is based on photometric colors and magnitude. The underlying selection function for a line of sight can then be recovered well by comparing the distribution of the spectroscopic stars in a color-magnitude plane with that of the photometric dataset. Subsequently, the stellar density profile along a line of sight can be derived from the spectroscopically measured stellar density profile multiplied by the selection function. The method is validated using Galaxia mock data with two different selection functions. We demonstrate that the derived stellar density profiles reconstruct the true ones well, not only for the full set of targets but also for sub-populations selected from the full dataset. Finally, the method is applied to map the density profiles of the Galactic disk and halo using the LAMOST RGB stars. The Galactic disk extends to about R = 19 kpc, where the disk still contributes about 10% of the total stellar surface density. Beyond this radius, the disk smoothly transitions to the halo without any truncation, bending, or breaking. Moreover, no over-density corresponding to the Monoceros ring is found in the Galactic anti-center direction. The disk shows moderate north-south asymmetry at radii larger than 12 kpc. On the other hand, the R-Z tomographic map directly shows that the stellar halo is substantially oblate within a Galactocentric radius of 20 kpc and gradually becomes nearly spherical beyond 30 kpc.
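A schematic of the selection-function correction: bin both catalogs in the color-magnitude plane, take the ratio, and weight each spectroscopic star by the inverse ratio of its cell (the binning, mock catalogs, and bias model below are placeholders):

```python
import numpy as np

def selection_function(color_s, mag_s, color_p, mag_p, bins=20):
    """S(c, m) = n_spec / n_phot in each color-magnitude cell."""
    span = [[color_p.min(), color_p.max()], [mag_p.min(), mag_p.max()]]
    n_spec, ce, me = np.histogram2d(color_s, mag_s, bins=bins, range=span)
    n_phot, _, _ = np.histogram2d(color_p, mag_p, bins=bins, range=span)
    return np.where(n_phot > 0, n_spec / np.maximum(n_phot, 1), 0.0), ce, me

# Mock photometric catalog and a magnitude-biased spectroscopic subsample.
rng = np.random.default_rng(2)
color_p = rng.normal(0.8, 0.3, 100_000)
mag_p = rng.normal(15.0, 1.5, 100_000)
picked = rng.random(100_000) < np.clip((16.0 - mag_p) / 4.0, 0.0, 1.0)

S, ce, me = selection_function(color_p[picked], mag_p[picked], color_p, mag_p)
ic = np.clip(np.digitize(color_p[picked], ce) - 1, 0, S.shape[0] - 1)
im = np.clip(np.digitize(mag_p[picked], me) - 1, 0, S.shape[1] - 1)
weights = 1.0 / np.maximum(S[ic, im], 1e-9)  # per-star correction weights
# Summing these weights along a line of sight recovers the underlying
# (photometric) star counts from the spectroscopic subsample alone.
```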
In radio astronomy, radio frequency interference (RFI) is becoming an increasingly serious problem for radio observational facilities. RFI hampers the search for and study of interesting astronomical objects, and mitigating it is an essential procedure in any survey data processing. The Five-hundred-meter Aperture Spherical radio Telescope (FAST) is an extremely sensitive radio telescope, so an effective and precise RFI mitigation method is necessary for FAST data processing. In this work, we introduce a method to mitigate the RFI in FAST spectral observations and compile RFI statistics from about 300 h of FAST data. The details are as follows. First, according to the characteristics of FAST spectra, we propose using the Asymmetrically Reweighted Penalized Least Squares (arPLS) algorithm for baseline fitting; our tests show that it performs well. Second, we flag the RFI with four strategies: flagging extremely strong RFI, flagging long-lasting RFI, flagging polarized RFI, and flagging beam-combined RFI. The test results show that all RFI above a preset threshold can be flagged. Third, we compute the probabilities of polarized XX and YY RFI in FAST observations. These statistics indicate which frequencies are relatively quiescent, so that the affected frequencies can be avoided in spectral observations. Finally, based on the ~300 h of FAST data, we obtained an RFI table, currently the most complete database for FAST.
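For reference, a condensed sketch of the arPLS baseline fit named above, following the published algorithm (Baek et al. 2015); the smoothing parameter and stopping rule are typical starting values, not FAST pipeline settings:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def arpls(y, lam=1e5, ratio=1e-3, itermax=50):
    """Baseline z minimizing ||W(y - z)||^2 + lam * ||D2 z||^2, with weights
    updated asymmetrically so features above the baseline are down-weighted."""
    y = np.asarray(y, float)
    n = y.size
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    H = lam * (D.T @ D)               # second-difference roughness penalty
    w = np.ones(n)
    for _ in range(itermax):
        z = spsolve(sparse.csc_matrix(sparse.diags(w) + H), w * y)
        d = y - z
        dn = d[d < 0]                  # residuals below the baseline
        if dn.size < 2:
            break
        m, s = dn.mean(), dn.std()
        w_new = 1.0 / (1.0 + np.exp(2.0 * (d - (2.0 * s - m)) / s))
        if np.linalg.norm(w - w_new) / np.linalg.norm(w) < ratio:
            break
        w = w_new
    return z

# Usage: residual = spectrum - arpls(spectrum); channels whose residual
# exceeds a preset n-sigma threshold would then be flagged as RFI.
```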
The Chinese Space Station Telescope (CSST) spectroscopic survey aims to deliver high-quality low-resolution (R > 200) slitless spectra for hundreds of millions of targets down to a limiting magnitude of about 21 mag, distributed over a large survey area (17,500 deg^2) and covering a wide wavelength range (255-1000 nm in three bands: GU, GV, and GI). As slitless spectroscopy precludes the use of wavelength calibration lamps, wavelength calibration is one of the most challenging issues in the reduction of slitless spectra, yet it plays a key role in measuring precise radial velocities of stars and redshifts of galaxies. In this work, we propose a star-based method that can monitor and correct for possible errors in the CSST wavelength calibration using normal scientific observations, taking advantage of the facts that (i) about ten million stars with reliable radial velocities are now available thanks to spectroscopic surveys like LAMOST, (ii) the large field of view of CSST enables efficient observation of such stars in a short period of time, and (iii) the radial velocities of such stars can be reliably measured using only a narrow segment of a CSST spectrum. We demonstrate that a wavelength calibration precision of a few km s^(-1) for the GU band, and about 10 to 20 km s^(-1) for the GV and GI bands, is achievable with only a few hundred velocity standard stars. Applications of the method to other surveys are also discussed.
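The monitoring idea reduces to comparing measured and cataloged radial velocities of standard stars and converting the mean offset into a wavelength-scale correction; a toy sketch with invented numbers:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

# Hypothetical velocity standards in one field: catalog RVs (e.g., from
# LAMOST) and RVs measured from the slitless spectra being calibrated.
rv_catalog = np.array([12.3, -45.1, 3.7, 88.2, -12.9])
rv_measured = np.array([17.1, -40.3, 8.6, 93.0, -8.2])

# A systematic velocity offset maps to a multiplicative wavelength error.
diff = rv_measured - rv_catalog
dv = diff.mean()
dv_err = diff.std(ddof=1) / np.sqrt(diff.size)
scale = 1.0 + dv / C_KMS   # lambda_true ~ lambda_observed / scale
print(f"offset = {dv:.2f} +/- {dv_err:.2f} km/s, scale = {scale:.8f}")
```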
Spectrum denoising is an important procedure for large-scale spectroscopic surveys. This work proposes a novel stellar spectrum denoising method based on deep Bayesian modeling. The construction of our model includes a prior distribution for each stellar subclass, a spectrum generator, and a flow-based noise model. Our method takes the noise correlation structure into account and is not susceptible to strong sky emission lines or cosmic rays. Moreover, it naturally handles spectra with missing flux values without ad hoc imputation. The proposed method is evaluated on real stellar spectra from the Sloan Digital Sky Survey (SDSS), covering a comprehensive list of common stellar subclasses, and is compared to the standard denoising auto-encoder. Our method outperforms the standard denoising auto-encoder in both denoising quality and missing flux imputation, and may help improve the accuracy of classification and physical parameter measurement of stars when applied during data preprocessing.
A multi-model integration method is proposed to develop a multi-source, heterogeneous model for short-term solar flare prediction. Different prediction models are constructed on the basis of predictors extracted from a pool of observation databases. Because the established models extract predictors from many data sources using different prediction methods, the outputs of the base models are first normalized. Weighted integration of the base models is then used to develop a multi-model integrated model (MIM), with the set of weights assigned to the individual models optimized by a genetic algorithm. Seven base models and data from Solar and Heliospheric Observatory/Michelson Doppler Imager longitudinal magnetograms are used to construct the MIM, and its performance is evaluated by cross validation. Experimental results show that the MIM outperforms any individual model in nearly every data group, and that the richer the diversity of the base models, the better the performance of the MIM. Thus, integrating more diversified models, such as an expert system, a statistical model, and a physical model, should greatly improve the performance of the MIM.
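A sketch of the weighted-integration step; scipy's differential evolution stands in for the genetic algorithm named above, and the base-model outputs and skill score are placeholders:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)
n_models, n_events = 7, 500
y = rng.integers(0, 2, n_events).astype(float)        # flare / no flare
# Normalized probabilistic outputs of seven base models (placeholders).
P = np.clip(y + rng.normal(0.0, 0.35, (n_models, n_events)), 0.0, 1.0)

def brier(w):
    """Brier score of the weighted forecast; lower is better."""
    w = np.abs(w) + 1e-12
    w = w / w.sum()                 # weights constrained to the simplex
    p = w @ P                       # weighted, integrated forecast
    return np.mean((p - y) ** 2)

res = differential_evolution(brier, bounds=[(0.0, 1.0)] * n_models, seed=3)
w = np.abs(res.x) + 1e-12
w = w / w.sum()
print("optimized weights:", np.round(w, 3), "Brier:", round(res.fun, 4))
```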
Ag-sheathed (Bi,Pb)_2Sr_2Ca_2Cu_3O_x tapes were prepared by the powder-in-tube method. The influences of rolling parameters on the superconducting characteristics of Bi-2223/Ag tapes were analyzed qualitatively with a statistical method. The results demonstrate that roll diameter and reduction per pass significantly influence the properties of Bi-2223/Ag superconducting tapes, while roll speed has a smaller effect and working friction the least. An optimized rolling process was accordingly established from these results.
The significance of the fluctuation and randomness of the time series of each pollutant in environmental quality assessment is described for the first time in this paper. A comparative study was made of three different computing methods: the same starting point method, the striding averaging method, and the stagger phase averaging method. All of them can be used to calculate the Hurst index, which quantifies fluctuation and randomness. This study used real water quality data from the Shazhu monitoring station on Taihu Lake in Wuxi, Jiangsu Province. The results show that, of the three methods, the stagger phase averaging method is best for calculating the Hurst index of a pollutant time series from the perspective of statistical regularity.
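As background, a standard rescaled-range (R/S) estimator of the Hurst index; the paper's three methods differ in how the series is segmented and averaged, which this generic sketch does not reproduce:

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst index by rescaled-range (R/S) analysis."""
    x = np.asarray(x, float)
    n = x.size
    k = int(np.log2(n / min_chunk))
    sizes = np.unique(n // 2 ** np.arange(k + 1))   # segment lengths
    log_m, log_rs = [], []
    for m in sizes:
        ratios = []
        for start in range(0, n - m + 1, m):
            seg = x[start:start + m]
            dev = np.cumsum(seg - seg.mean())       # cumulative deviations
            r, s = dev.max() - dev.min(), seg.std(ddof=1)
            if s > 0:
                ratios.append(r / s)
        if ratios:
            log_m.append(np.log(m))
            log_rs.append(np.log(np.mean(ratios)))
    return np.polyfit(log_m, log_rs, 1)[0]          # slope ~ Hurst index

rng = np.random.default_rng(4)
print("white noise: H ~", round(hurst_rs(rng.normal(size=4096)), 2))  # ~0.5
```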
Statistical approaches for evaluating causal effects and for discovering causal networks are discussed in this paper. A causal relation between two variables is different from an association or correlation between them. An association measurement between two variables may change dramatically, even from positive to negative, when a third variable is omitted; this is called the Yule-Simpson paradox. We discuss how to evaluate the causal effect of a treatment or exposure on an outcome so as to avoid this phenomenon. Surrogates and intermediate variables are often used to reduce measurement costs or study duration when measurement of the endpoint variables is expensive, inconvenient, infeasible, or unobservable in practice. Many criteria for surrogates have been proposed. However, it is possible that, for a surrogate satisfying these criteria, a treatment has a positive effect on the surrogate, which in turn has a positive effect on the outcome, and yet the treatment has a negative effect on the outcome; this is called the surrogate paradox. We discuss criteria for surrogates that avoid the surrogate paradox. Causal networks, which describe the causal relationships among a large number of variables, have been applied in many research fields, and it is important to discover their structures from observed data. We propose a recursive approach for discovering a causal network in which the structural learning of a large network is decomposed recursively into the learning of small networks. To further discover causal relationships, we present an active learning approach based on external interventions on some variables. When we focus on the causes of an outcome of interest, instead of discovering a whole network, we propose a local learning approach to discover the causes that affect the outcome.
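A tiny numerical illustration of the Yule-Simpson paradox mentioned above (the counts are a classic invented example): the treatment looks better within every stratum yet worse after pooling:

```python
# Recovery counts (recovered, total), stratified by a third variable.
strata = {
    "mild":   {"treated": (81, 87),   "control": (234, 270)},
    "severe": {"treated": (192, 263), "control": (55, 80)},
}

pooled = {"treated": [0, 0], "control": [0, 0]}
for stratum, groups in strata.items():
    for group, (rec, tot) in groups.items():
        pooled[group][0] += rec
        pooled[group][1] += tot
        print(f"{stratum:>6} {group:>7}: {rec / tot:.1%}")
for group, (rec, tot) in pooled.items():
    print(f"pooled {group:>7}: {rec / tot:.1%}")
# Treated beats control within each stratum (93.1% vs 86.7%, 73.0% vs
# 68.8%), but pooling reverses the association (78.0% vs 82.6%): the
# Yule-Simpson paradox.
```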
In pulsar astronomy, detecting effective pulsar signals among numerous pulsar candidates is an important research topic. Starting from space X-ray pulsar signals, the two-dimensional autocorrelation profile map (2D-APM) feature modelling method is proposed, which applies epoch folding to the autocorrelation function of the X-ray signals and expands the time-domain information along the periodic axis. A uniform criterion for setting the time resolution of the periodic axis handles pulsar signals without any prior information. Compared with the traditional profile, the model has strong anti-noise ability, a greater abundance of information, and consistent characteristics. The new feature is simulated with double Gaussian components, and the characteristic distribution of the model is revealed to be closely related to the distance between the double peaks of the profile. Next, a deep convolutional neural network (DCNN), named Inception-ResNet, is built. According to the order of the peak separation and the number of arriving photons, 30 data sets based on the Poisson process are simulated to construct the training set, and observation data of PSRs B0531+21, B0540-69, and B1509-58 from the Rossi X-ray Timing Explorer (RXTE) are selected to generate the test set. The training and test sets contain 30,000 and 5400 samples, respectively. After reaching stable convergence, the network recognizes more than 99% of the pulsar signals and successfully rejects more than 99% of the interference, which verifies the close agreement between the network and the feature model and the high potential of the proposed method in searching for pulsars.
The quality of low frequency electromagnetic data is affected by spike and trend noises, and failure to remove the spikes and trends reduces the credibility of the data interpretation. Based on analyses of the causes and characteristics of these noises, this paper presents the results of a preset statistics stacking method (PSSM) and a piecewise linear fitting method (PLFM) for de-noising the spikes and trends, respectively. The magnitudes of the spikes are either higher or lower than the normal values, which distorts the useful signal. Comparisons of spike removal among the averaging, statistics, and PSSM methods indicate that only the PSSM can remove the spikes successfully. On the other hand, the spectra of the linear and nonlinear trends lie mainly in the low frequency band and can change the calculated resistivity significantly; no influence of the trends is observed when the frequency is higher than a certain threshold. The PLFM can effectively remove both linear and nonlinear trends, with errors of around 1% in the power spectrum. The proposed methods provide an effective way to remove spike and trend noises from low frequency electromagnetic data and establish a basis for further research on de-noising low frequency noises.
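A sketch of the two de-noising steps: a preset-statistics stack that excludes samples outside preset bounds before averaging (our reading of the PSSM), and piecewise linear detrending with scipy; the thresholds, breakpoints, and simulated data are illustrative:

```python
import numpy as np
from scipy.signal import detrend

def preset_stack(segments, lo, hi):
    """Stack repeated measurement segments, ignoring samples outside the
    preset statistical bounds [lo, hi] so spikes do not bias the average."""
    segs = np.asarray(segments, float)
    segs = np.where((segs >= lo) & (segs <= hi), segs, np.nan)
    return np.nanmean(segs, axis=0)

rng = np.random.default_rng(5)
segments = rng.normal(0.0, 1.0, (20, 1000))
segments[3, 100] += 50.0          # injected spikes
segments[11, 640] -= 80.0
clean = preset_stack(segments, lo=-5.0, hi=5.0)

# Piecewise linear fitting: remove linear trends between breakpoints.
drift = np.concatenate([np.linspace(0, 2, 500), np.linspace(2, -1, 500)])
detrended = detrend(clean + drift, type="linear", bp=[500])
print("residual std after de-noising:", round(float(detrended.std()), 3))
```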