In this study, methods based on the distribution model (with and without personal opinion) were used for the separation of anomalous zones, comprising two different methods: U-spatial statistics and the mean plus multiples of the standard deviation (X+nS). The primary purpose is to compare the results of these methods with each other. To increase the accuracy of the comparison, regional geochemical data were used from an area where occurrences and mineralization zones of epithermal gold have been reported. The study area is part of the Hashtjin geological map, which structurally belongs to the fold-and-thrust belt and to the Alborz Tertiary magmatic complex. Samples were taken from secondary lithogeochemical environments. Au element data related to epithermal gold reserves were used to investigate the efficacy of these two methods. In the U-spatial statistics method, U-value criteria were used to determine the threshold, and in the X+nS method, the element enrichment index of the region's rock units was obtained by grouping these units; the anomalous areas were then identified by these criteria. The methods were compared with respect to the positions of the discovered occurrences and those obtained from the methods, the flexibility of the methods in separating the anomalous zones, and the two-dimensional spatial correlation of the three elements As, Pb, and Ag with Au. The ability of both methods to identify potential areas is acceptable. Of the two, the method with the U-based criteria appears to have a high degree of flexibility in separating anomalous regions in the case of epithermal-type gold deposits.
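The X + nS rule above can be sketched directly; the values and the choice n = 2 below are hypothetical illustrations, not from the Hashtjin data set:

```python
import statistics

def xns_threshold(values, n=2):
    """Anomaly threshold as mean plus n standard deviations (X + nS)."""
    return statistics.mean(values) + n * statistics.stdev(values)

def anomalous(values, n=2):
    """Samples exceeding the X + nS threshold."""
    t = xns_threshold(values, n)
    return [v for v in values if v > t]

# Hypothetical stream-sediment Au values (ppb); one clear outlier.
au_ppb = [5, 6, 4, 7, 5, 6, 5, 80]
print(anomalous(au_ppb))  # → [80]
```

Because the outlier inflates both the mean and the standard deviation, X + nS thresholds are sensitive to extreme values, which is one reason the paper compares them against spatial U-statistics.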
A method named the interval analysis method, which solves for the buckling load of a composite laminate with uncertainties, is presented. Based on interval mathematics and Taylor series expansion, the interval analysis method is used to deal with the uncertainties. The probabilistic statistics of the uncertain variables need not be known; only limited information on the physical properties of the material is needed, namely the upper and lower bounds of each uncertain variable. The interval of the structural response can therefore be obtained with less computational effort. The interval analysis method is effective under conditions where a probabilistic approach cannot work well because of small samples and deficient statistical characteristics. For the buckling loads of special cross-ply laminates and antisymmetric angle-ply laminates with all edges simply supported, calculations and comparisons between the interval analysis method and the probability method are performed.
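A minimal sketch of the interval idea, assuming only lower and upper bounds are known; the first-order Taylor bound is generic and the numbers are illustrative, not the paper's laminate formulation:

```python
class Interval:
    """Closed interval [lo, hi] holding a bounded-but-otherwise-unknown value."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Interval product: take min/max over all endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

def taylor_interval(f0, df0, dx):
    """First-order Taylor bound: f(x0 ± dx) lies in f(x0) ± |f'(x0)|·dx."""
    r = abs(df0) * dx
    return Interval(f0 - r, f0 + r)

# Hypothetical: nominal buckling load 100 kN, sensitivity 2.5 kN per unit
# of an uncertain stiffness parameter known only to ±4 units.
b = taylor_interval(100.0, 2.5, 4.0)
print(b.lo, b.hi)  # → 90.0 110.0
```

Only two numbers per variable (the bounds) are needed, which is the method's advantage when sample sizes are too small for reliable probability distributions.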
Aim: To improve the efficiency of fatigue material tests and the relevant statistical treatment of test data. Methods: The least-squares approach and other special treatments were used. Results and Conclusion: The concepts of each phase in fatigue tests and statistical treatment are clarified. The proposed method leads to three important properties. The reduced number of specimens brings the advantage of lower test expenditures. The whole test procedure has more flexibility, since there is no need to conduct many tests at the same stress level as in traditional cases.
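The least-squares step can be illustrated on a Basquin-type S-N line; the fatigue data below are hypothetical, used only to show the fitting mechanics:

```python
import math

def least_squares(x, y):
    """Ordinary least-squares fit y ≈ a + b·x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical fatigue data: stress amplitude S (MPa) vs. cycles to failure N,
# fitted as a Basquin-type line log10(N) = a + b·log10(S).
S = [400.0, 350.0, 300.0, 250.0]
N = [1e4, 3e4, 1e5, 5e5]
a, b = least_squares([math.log10(s) for s in S], [math.log10(n) for n in N])
print(round(b, 2))  # negative slope: higher stress, fewer cycles
```

Fitting in log-log space is what lets scattered specimens at different stress levels contribute to one curve, rather than requiring many replicates at each level.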
In this study, geochemical anomaly separation was carried out with methods based on the distribution model: the probability diagram (MPD), fractal (concentration-area technique), and U-statistic methods. The main objective is to evaluate the efficiency and accuracy of these methods in separating anomalies on the shear-zone gold mineralization. For this purpose, samples were taken from the secondary lithogeochemical environment (stream sediment samples) on the gold mineralization in Saqqez, NW Iran. Interpretation of the histograms and diagrams showed that the MPD is capable of identifying two phases of mineralization. The fractal method could separate only one phase of change, based on the fractal dimension, with high-concentration areas of the Au element. The spatial analysis showed two mixed subpopulations after U=0 and another subpopulation with very high U values. The MPD analysis followed the spatial analysis, showing the variations in detail. Six mineralized zones detected from local geochemical exploration results were used to validate the methods mentioned above. The MPD method was able to identify more than 90% of the anomalous areas, whereas the two other methods identified at most 60%. The MPD method used the raw data, without any estimation of the concentration, and a minimum of calculations to determine the threshold values. Therefore, the MPD method is more robust than the other methods. The spatial analysis identified the details of the geological and mineralization events that were effective in the study area. MPD is recommended as the best method, and spatial U-analysis as the next most reliable. The fractal method could show more detail of the events and variations in the area with an asymmetrical grid net and a higher density of sampling, or at the detailed exploration stage.
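The concentration-area fractal technique can be sketched as counting, for each threshold c, the area (grid cells here) with concentration at or above c, then reading population breaks from the slopes of the log-log plot; the grid values are hypothetical:

```python
import math

def concentration_area(grid, thresholds):
    """Area proxy A(c): number of grid cells with concentration >= c."""
    cells = [v for row in grid for v in row]
    return [sum(v >= c for v in cells) for c in thresholds]

def loglog_slope(c, a):
    """Slope of log A(c) vs log c; breaks in this slope separate populations."""
    x = [math.log(v) for v in c]
    y = [math.log(v) for v in a]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)

grid = [[1, 2, 4], [8, 16, 2], [4, 1, 32]]  # hypothetical interpolated Au grid
c = [1, 2, 4, 8]
print(concentration_area(grid, c))  # → [9, 7, 5, 3]
```

In practice the A(c) curve is fitted piecewise; a change in slope marks the threshold between background and anomaly, which is why the method depends on sampling density and grid geometry.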
A novel damage detection method is applied to a 3-story frame structure to obtain statistical quantification control criteria for the existence, location, and identification of damage. The mean, standard deviation, and exponentially weighted moving average (EWMA) are applied to detect damage information according to statistical process control (SPC) theory. It is concluded that detection with the mean and EWMA is insignificant because the structural response is neither independent nor normally distributed. On the other hand, the damage information is detected well with the standard deviation, because this parameter is less influenced by the data distribution. A suitable moderate confidence level is explored for more significant damage location and quantification detection, and the impact of noise is investigated to illustrate the robustness of the method.
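The EWMA chart from SPC theory can be sketched as follows; the smoothing constant lam = 0.2 and limit width L = 3 are conventional SPC defaults, not values taken from the paper:

```python
def ewma(series, lam=0.2):
    """EWMA recursion z_t = lam·x_t + (1 − lam)·z_{t−1}, seeded with x_0."""
    z = [series[0]]
    for x in series[1:]:
        z.append(lam * x + (1 - lam) * z[-1])
    return z

def ewma_limits(mean, sigma, lam=0.2, big_l=3.0):
    """Asymptotic EWMA control limits: mean ± L·sigma·sqrt(lam/(2 − lam))."""
    half = big_l * sigma * (lam / (2 - lam)) ** 0.5
    return mean - half, mean + half

print(ewma([1.0, 1.0, 2.0], lam=0.5))  # → [1.0, 1.0, 1.5]
```

A point of the monitored statistic falling outside these limits signals an out-of-control (here, possibly damaged) state; the chart's assumptions of independence and normality are exactly what the abstract identifies as failing for structural response data.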
Identification of the modal parameters of a linear structure from output-only measurements has received much attention over the past decades. In this paper, the Natural Excitation Technique (NExT) is used to acquire impulse signals from the structural responses. The Eigensystem Realization Algorithm (ERA) is then utilized for modal identification. To discard the fictitious 'computational modes', a procedure, the Statistically Averaging Modal Frequency Method (SAMFM), is developed to distinguish the true modes from noise modes and to improve the precision of the identified modal frequencies of the structure. An offshore platform is modeled with the finite element method, and the theoretical modal parameters are obtained for comparison with the identified values. The dynamic responses of the platform under random wave loading are computed to provide the output signals used for identification with ERA. Simulation results demonstrate that the proposed method can determine the system modal frequencies with high precision.
We present a statistical method to derive the stellar density profiles of the Milky Way from spectroscopic survey data, taking selection effects into account. We assume that the selection function of the spectroscopic survey, which can be altered during observations and data reduction, is based on photometric colors and magnitudes. The underlying selection function for a line of sight can then be recovered well by comparing the distribution of the spectroscopic stars in a color-magnitude plane with that of the photometric dataset. Subsequently, the stellar density profile along a line of sight can be derived from the spectroscopically measured stellar density profile multiplied by the selection function. The method is validated using Galaxia mock data with two different selection functions. We demonstrate that the derived stellar density profiles reconstruct the true ones well, not only for the full set of targets but also for sub-populations selected from the full dataset. Finally, the method is applied to map the density profiles for the Galactic disk and halo, using the LAMOST RGB stars. The Galactic disk extends to about R = 19 kpc, where the disk still contributes about 10% to the total stellar surface density. Beyond this radius, the disk smoothly transitions to the halo without any truncation, bending or breaking. Moreover, no over-density corresponding to the Monoceros ring is found in the Galactic anti-center direction. The disk shows moderate north-south asymmetry at radii larger than 12 kpc. On the other hand, the R-Z tomographic map directly shows that the stellar halo is substantially oblate within a Galactocentric radius of 20 kpc and gradually becomes nearly spherical beyond 30 kpc.
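The recovery of the selection function by comparing color-magnitude distributions can be sketched as a per-cell count ratio; the bin size and the (color, magnitude) pairs below are hypothetical:

```python
from collections import Counter

def selection_function(photometric, spectroscopic, bin_size=1.0):
    """Per-cell ratio of spectroscopic to photometric star counts in the
    (color, magnitude) plane, i.e. the recovered selection function."""
    cell = lambda c, m: (int(c // bin_size), int(m // bin_size))
    n_phot = Counter(cell(c, m) for c, m in photometric)
    n_spec = Counter(cell(c, m) for c, m in spectroscopic)
    return {k: n_spec[k] / n for k, n in n_phot.items()}

# Hypothetical (color, magnitude) pairs along one line of sight.
phot = [(0.5, 14.2), (0.6, 14.8), (0.7, 14.1), (1.5, 15.5)]
spec = [(0.5, 14.2), (1.5, 15.5)]
print(selection_function(phot, spec))  # cell (0, 14): 1/3; cell (1, 15): 1.0
```

Dividing the spectroscopically measured density by this ratio (per cell, per line of sight) undoes the survey's targeting bias, which is the core of the paper's method.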
In radio astronomy, radio frequency interference (RFI) is becoming more and more serious for radio observational facilities. RFI always hampers the search for and study of interesting astronomical objects, so mitigating it is an essential procedure in any survey data processing. The Five-hundred-meter Aperture Spherical radio Telescope (FAST) is an extremely sensitive radio telescope, and it is necessary to find an effective and precise RFI mitigation method for FAST data processing. In this work, we introduce a method to mitigate the RFI in FAST spectral observations and compile RFI statistics using ~300 h of FAST data. The details are as follows. First, according to the characteristics of FAST spectra, we propose to use the Asymmetrically Reweighted Penalized Least Squares algorithm for baseline fitting; our test results show that it performs well. Second, we flag the RFI with four strategies: flagging extremely strong RFI, long-lasting RFI, polarized RFI, and beam-combined RFI, respectively. The test results show that all RFI above a preset threshold can be flagged. Third, we compute the probabilities of polarized XX and YY RFI in FAST observations. The statistical results tell us which frequencies are relatively quiescent, and with such data we are able to avoid those frequencies in our spectral observations. Finally, based on the ~300 h of FAST data, we obtained an RFI table, which is currently the most complete database for FAST.
The Chinese Space Station Telescope (CSST) spectroscopic survey aims to deliver high-quality low-resolution (R > 200) slitless spectra for hundreds of millions of targets down to a limiting magnitude of about 21 mag, distributed within a large survey area (17500 deg^2) and covering a wide wavelength range (255-1000 nm in three bands: GU, GV, and GI). As slitless spectroscopy precludes the use of wavelength calibration lamps, wavelength calibration is one of the most challenging issues in the reduction of slitless spectra, yet it plays a key role in measuring precise radial velocities of stars and redshifts of galaxies. In this work, we propose a star-based method that can monitor and correct for possible errors in the CSST wavelength calibration using normal scientific observations, taking advantage of the facts that (i) about ten million stars with reliable radial velocities are now available thanks to spectroscopic surveys like LAMOST, (ii) the large field of view of CSST enables efficient observations of such stars in a short period of time, and (iii) radial velocities of such stars can be reliably measured using only a narrow segment of CSST spectra. We demonstrate that it is possible to achieve a wavelength calibration precision of a few km s^(-1) for the GU band, and about 10 to 20 km s^(-1) for the GV and GI bands, with only a few hundred velocity standard stars. Implementations of the method in other surveys are also discussed.
Spectrum denoising is an important procedure for large-scale spectroscopic surveys. This work proposes a novel stellar spectrum denoising method based on deep Bayesian modeling. The construction of our model includes a prior distribution for each stellar subclass, a spectrum generator, and a flow-based noise model. Our method takes the noise correlation structure into account, and it is not susceptible to strong sky emission lines or cosmic rays. Moreover, it naturally handles spectra with missing flux values without ad-hoc imputation. The proposed method is evaluated on real stellar spectra from the Sloan Digital Sky Survey (SDSS) with a comprehensive list of common stellar subclasses, and compared to the standard denoising auto-encoder. Our denoising method demonstrates superior performance to the standard denoising auto-encoder with respect to both denoising quality and missing flux imputation. It may potentially help improve the accuracy of the classification and physical parameter measurement of stars when applied during data preprocessing.
A multi-model integration method is proposed to develop a multi-source and heterogeneous model for short-term solar flare prediction. Different prediction models are constructed on the basis of predictors extracted from a pool of observation databases. Because these established models extract predictors from many data resources using different prediction methods, the outputs of the base models are first normalized; a weighted integration of the base models is then used to develop a multi-model integrated model (MIM). The weight set assigned to the single models is optimized by a genetic algorithm. Seven base models and data from Solar and Heliospheric Observatory/Michelson Doppler Imager longitudinal magnetograms are used to construct the MIM, and its performance is then evaluated by cross validation. Experimental results show that the MIM outperforms any individual model in nearly every data group, and the richer the diversity of the base models, the better the performance of the MIM. Thus, integrating more diversified models, such as an expert system, a statistical model, and a physical model, will greatly improve the performance of the MIM.
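The normalization and weighted-integration steps can be sketched as below; a fixed weight vector stands in for the genetic-algorithm-optimized weight set, and the model scores are hypothetical:

```python
def normalize(scores):
    """Min-max normalize one base model's raw outputs to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def integrate(base_outputs, weights):
    """Weighted combination of (already normalized) base-model outputs."""
    total = sum(weights)
    k = len(base_outputs[0])
    return [sum(w * m[i] for w, m in zip(weights, base_outputs)) / total
            for i in range(k)]

# Two hypothetical flare-score models evaluated on the same three events;
# in the paper the weights would come from a genetic algorithm.
m1 = normalize([2.0, 8.0, 5.0])   # → [0.0, 1.0, 0.5]
m2 = [0.2, 0.6, 0.4]              # already in [0, 1]
print(integrate([m1, m2], weights=[3.0, 1.0]))
```

Normalizing first is essential: the base models come from heterogeneous data sources, so their raw outputs are not on a common scale before weighting.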
Ag-sheathed (Bi,Pb)(2)Sr(2)Ca(2)Cu(3)O(x) tapes were prepared by the powder-in-tube method. The influences of rolling parameters on the superconducting characteristics of Bi(2223)/Ag tapes were analyzed qualitatively with a statistical method. The results demonstrate that roll diameter and reduction per pass significantly influence the properties of Bi(2223)/Ag superconducting tapes, while roll speed has less influence and working friction the least. An optimized rolling process was therefore achieved according to the above results.
The significance of the fluctuation and randomness of the time series of each pollutant in environmental quality assessment is described for the first time in this paper. A comparative study was made of three different computing methods: the same starting point method, the striding averaging method, and the stagger phase averaging method. All of them can be used to calculate the Hurst index, which quantifies fluctuation and randomness. This study used real water quality data from the Shazhu monitoring station on Taihu Lake in Wuxi, Jiangsu Province. The results show that, of the three methods, the stagger phase averaging method is best for calculating the Hurst index of a pollutant time series from the perspective of statistical regularity.
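The rescaled-range (R/S) statistic underlying the Hurst index can be computed per window as below; estimating H then amounts to fitting log(R/S) against log(window length) over many windows, and the three averaging schemes in the paper differ in how those windows are chosen. The sample series is hypothetical:

```python
import statistics

def rescaled_range(x):
    """R/S statistic: range of the cumulative mean-adjusted series
    divided by its (population) standard deviation."""
    mean = statistics.fmean(x)
    dev, cum = 0.0, []
    for v in x:
        dev += v - mean
        cum.append(dev)
    return (max(cum) - min(cum)) / statistics.pstdev(x)

print(round(rescaled_range([1.0, 2.0, 3.0, 4.0]), 3))  # → 1.789
```

A Hurst index near 0.5 indicates a random walk, above 0.5 persistence, and below 0.5 anti-persistence, which is how it quantifies the fluctuation and randomness of a pollutant series.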
Statistical approaches for evaluating causal effects and for discovering causal networks are discussed in this paper. A causal relation between two variables is different from an association or correlation between them. An association measurement between two variables may be changed dramatically, even from positive to negative, by omitting a third variable; this is called the Yule-Simpson paradox. We discuss how to evaluate the causal effect of a treatment or exposure on an outcome so as to avoid this phenomenon. Surrogates and intermediate variables are often used to reduce measurement costs or duration when measurement of endpoint variables is expensive, inconvenient, infeasible, or unobservable in practice. There have been many criteria for surrogates. However, it is possible that, for a surrogate satisfying these criteria, a treatment has a positive effect on the surrogate, which in turn has a positive effect on the outcome, yet the treatment has a negative effect on the outcome; this is called the surrogate paradox. We discuss criteria for surrogates that avoid the surrogate paradox.
Causal networks, which describe the causal relationships among a large number of variables, have been applied in many research fields. It is important to discover the structures of causal networks from observed data. We propose a recursive approach for discovering a causal network, in which the structural learning of a large network is decomposed recursively into the learning of small networks. Further, to discover causal relationships, we present an active learning approach in terms of external interventions on some variables. When we focus on the causes of an outcome of interest, instead of discovering a whole network, we propose a local learning approach to discover the causes that affect the outcome.
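The Yule-Simpson reversal described above is easy to reproduce numerically; the counts below are the classic kidney-stone illustration, used here only as an example and not taken from this paper:

```python
# (stratum, treated_successes, treated_total, control_successes, control_total)
strata = [
    ("mild",   81, 87, 234, 270),
    ("severe", 192, 263, 55, 80),
]

# Within every stratum the treatment looks better...
for _, t_s, t_n, c_s, c_n in strata:
    assert t_s / t_n > c_s / c_n

# ...yet aggregated over strata it looks worse: the Yule-Simpson paradox.
ts = sum(r[1] for r in strata); tn = sum(r[2] for r in strata)
cs = sum(r[3] for r in strata); cn = sum(r[4] for r in strata)
print(ts / tn, cs / cn)  # treated 0.78 < control ≈0.826: the reversal
```

The reversal arises because the stratum variable (severity) is associated with both the treatment assignment and the outcome, which is exactly why causal effect evaluation must adjust for such confounders.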
In pulsar astronomy, detecting effective pulsar signals among numerous pulsar candidates is an important research topic. Starting from space X-ray pulsar signals, the two-dimensional autocorrelation profile map (2D-APM) feature modelling method is proposed, which utilizes epoch folding of the autocorrelation function of X-ray signals and expands the time-domain information along the periodic axis. A uniform criterion for setting the time resolution of the periodic axis addresses pulsar signals without any prior information. Compared with the traditional profile, the model has strong anti-noise ability, a greater abundance of information, and consistent characteristics. The new feature is simulated with double Gaussian components, and the characteristic distribution of the model is revealed to be closely related to the distance between the double peaks of the profile. Next, a deep convolutional neural network (DCNN), named Inception-ResNet, is built. According to the order of the peak separation and the number of arriving photons, 30 data sets based on the Poisson process are simulated to construct the training set, and the observation data of PSRs B0531+21, B0540-69 and B1509-58 from the Rossi X-ray Timing Explorer (RXTE) are selected to generate the test set. The training and test sets contain 30,000 and 5400 samples, respectively. After achieving convergence stability, more than 99% of the pulsar signals are recognized and more than 99% of the interference is successfully rejected, which verifies the high degree of agreement between the network and the feature model and the high potential of the proposed method in searching for pulsars.
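The epoch-folding step at the heart of the 2D-APM feature can be sketched as a phase histogram at a trial period; the arrival times below are hypothetical:

```python
def epoch_fold(times, period, nbins=10):
    """Fold event arrival times at a trial period into a phase histogram."""
    counts = [0] * nbins
    for t in times:
        phase = (t % period) / period
        counts[int(phase * nbins) % nbins] += 1
    return counts

# Hypothetical arrival times: three photons on a 1 s pulse, one off-pulse.
print(epoch_fold([0.0, 1.0, 2.0, 0.5], period=1.0, nbins=2))  # → [3, 1]
```

When the trial period matches the true one, pulsed photons pile up in a few phase bins and the profile emerges from the noise; the paper applies the same folding to the autocorrelation function rather than to the raw light curve.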
The quality of low-frequency electromagnetic data is affected by spike and trend noise. Failure to remove the spikes and trends reduces the credibility of the data interpretation. Based on analyses of the causes and characteristics of these noises, this paper presents a preset statistics stacking method (PSSM) and a piecewise linear fitting method (PLFM) for de-noising the spikes and trends, respectively. The magnitudes of the spikes are either higher or lower than the normal values, which distorts the useful signal. Comparisons of spike removal among the averaging, statistics, and PSSM methods indicate that only the PSSM can remove the spikes successfully. On the other hand, the spectra of the linear and nonlinear trends lie mainly in the low-frequency band and can change the calculated resistivity significantly; no influence of the trends is observed when the frequency is higher than a certain threshold. The PLFM can effectively remove both linear and nonlinear trends, with errors of around 1% in the power spectrum. The proposed methods present an effective way of de-noising spike and trend noise in low-frequency electromagnetic data and establish a basis for further research on de-noising low-frequency noise.
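As an illustration of threshold-based spike removal, a generic median/MAD despike is sketched below; it is not the paper's PSSM, which instead stacks data segments against preset statistics, but it shows the same idea of flagging samples that deviate from a robust estimate of the normal level:

```python
import statistics

def despike(x, k=5.0):
    """Replace samples deviating more than k·MAD from the median.
    Generic robust despike, standing in for the paper's PSSM."""
    med = statistics.median(x)
    mad = statistics.median([abs(v - med) for v in x]) or 1e-12
    return [med if abs(v - med) > k * mad else v for v in x]

# Hypothetical low-frequency EM channel with one spike at index 4.
sig = [10.0, 11.0, 9.0, 10.0, 100.0, 10.0, 11.0, 9.0]
print(despike(sig))  # → [10.0, 11.0, 9.0, 10.0, 10.0, 10.0, 11.0, 9.0]
```

Median and MAD are preferred over mean and standard deviation here because the spike itself would otherwise inflate the statistics used to detect it, which matches the paper's finding that plain averaging fails to remove spikes.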
We collected the basic parameters of 231 supernova remnants (SNRs) in our Galaxy, namely distances (d) from the Sun, linear diameters (D), Galactic heights (Z), estimated ages (t), luminosities (L), surface brightnesses (Σ), flux densities (S_1) at 1 GHz, and spectral indices (α). We searched for possible correlations between these parameters. As expected, the linear diameters were found to increase with age for the shell-type remnants, and also show a tendency to increase with Galactic height. Both the surface brightness and the luminosity of SNRs at 1 GHz tend to decrease with linear diameter and with age. No other relations between the parameters were found.
Existing theory and models suggest that a Type I (merger) GRB should have a larger jet beaming angle than a Type II (collapsar) GRB, but so far no statistical evidence has been available to support this suggestion. In this paper, we obtain a sample of 37 beaming angles and calculate the probability that the suggestion is true. A correction is also devised to account for the scarcity of Type I GRBs in our sample. The probability is calculated to be 83% without the correction and 71% with it.
This paper focuses on the dynamic thermo-mechanical coupled response of random particulate composite materials. Both the inertia term and the coupling term are considered in the dynamic coupled problem. The formulation of the problem by a statistical second-order two-scale (SSOTS) analysis method and the algorithm procedure based on the finite-element difference method are presented. Numerical results of coupled cases are compared with those of uncoupled cases. They show that the coupling effects on temperature, thermal flux, displacement, and stresses are very distinct, and that the micro-characteristics of the particles affect the coupling effect of the random composites. Furthermore, the coupling effect causes a lag in the variations of temperature, thermal flux, displacement, and stresses.
“Four classes of enterprises above designated size” (hereinafter called four-classes enterprises) refer to the objects of statistical survey that have reached a certain scale in China’s current statistical method system. They comprise four classes in the national economy, namely industrial enterprises above designated size; construction and real estate development and management enterprises above qualification grade; wholesale and retail, catering and accommodation enterprises above designated size; and service enterprises above designated size, which are the primary part of national economic and social development activities. This paper focuses on analyzing the practice of, and difficulties in, the current statistical work on four-classes enterprises, and then proposes some recommendations.
Funding: National Natural Science Foundation of China under Grant Nos. 50778077 and 50608036; Graduate Innovation Fund of Huazhong University of Science and Technology under Grant No. HF-06-028.
Abstract: A novel damage detection method is applied to a 3-story frame structure to obtain statistical quantification control criteria for the existence, location, and identification of damage. The mean, standard deviation, and exponentially weighted moving average (EWMA) are applied to detect damage information according to statistical process control (SPC) theory. It is concluded that detection with the mean and the EWMA is insignificant because the structural response is neither independent nor normally distributed. On the other hand, damage information is detected well with the standard deviation, because the influence of the data distribution is not pronounced for this parameter. A suitable moderate confidence level is explored for more significant damage location and quantification, and the impact of noise is investigated to illustrate the robustness of the method.
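A minimal sketch of the standard-deviation control chart idea: response windows are reduced to their standard deviation, and windows falling outside the mean ± 3σ limits of the undamaged baseline are flagged. The data below are synthetic, not the authors' frame measurements.

```python
import numpy as np

# SPC-style damage flagging on invented data: 50 "undamaged" response
# windows set the control limits; 5 higher-variance windows mimic damage.
rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, size=(50, 200))
damaged = rng.normal(0.0, 1.8, size=(5, 200))

base_stat = baseline.std(axis=1)               # control statistic per window
center, spread = base_stat.mean(), base_stat.std()
ucl, lcl = center + 3 * spread, center - 3 * spread  # control limits

test_stat = damaged.std(axis=1)
flags = (test_stat > ucl) | (test_stat < lcl)  # True where damage is indicated
```

The chart works here precisely because the window standard deviation, unlike the raw response, is only weakly sensitive to the shape of the underlying distribution.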
Abstract: Identification of the modal parameters of a linear structure with output-only measurements has received much attention over the past decades. In this paper, the Natural Excitation Technique (NExT) is used to acquire impulse signals from the structural responses, and the Eigensystem Realization Algorithm (ERA) is then utilized for modal identification. To discard the fictitious 'computational modes', a procedure, the Statistically Averaging Modal Frequency Method (SAMFM), is developed to distinguish the true modes from noise modes and to improve the precision of the identified modal frequencies of the structure. An offshore platform is modeled with the finite element method, and the theoretical modal parameters are obtained for comparison with the identified values. The dynamic responses of the platform under random wave loading are computed to provide the output signals used for identification with ERA. Simulation results demonstrate that the proposed method can determine the system modal frequencies with high precision.
Funding: Supported by the Strategic Priority Research Program "The Emergence of Cosmological Structures" of the Chinese Academy of Sciences (Grant No. XDB09000000); the National Key Basic Research Program of China (2014CB845700); the National Natural Science Foundation of China (Grant Nos. 11373032 and 11333003); and a National Major Scientific Project built by the Chinese Academy of Sciences, funding for which has been provided by the National Development and Reform Commission.
Abstract: We present a statistical method to derive the stellar density profiles of the Milky Way from spectroscopic survey data, taking selection effects into account. We assume that the selection function of the spectroscopic survey, which can be altered during observations and data reduction, is based on photometric colors and magnitude. The underlying selection function for a line-of-sight can then be recovered well by comparing the distribution of the spectroscopic stars in a color-magnitude plane with that of the photometric dataset. Subsequently, the stellar density profile along a line-of-sight can be derived from the spectroscopically measured stellar density profile multiplied by the selection function. The method is validated using Galaxia mock data with two different selection functions. We demonstrate that the derived stellar density profiles reconstruct the true ones well, not only for the full set of targets but also for sub-populations selected from the full dataset. Finally, the method is applied to map the density profiles of the Galactic disk and halo using the LAMOST RGB stars. The Galactic disk extends to about R = 19 kpc, where the disk still contributes about 10% of the total stellar surface density. Beyond this radius, the disk smoothly transitions to the halo without any truncation, bending, or breaking. Moreover, no over-density corresponding to the Monoceros ring is found in the Galactic anti-center direction. The disk shows moderate north-south asymmetry at radii larger than 12 kpc. On the other hand, the R-Z tomographic map directly shows that the stellar halo is substantially oblate within a Galactocentric radius of 20 kpc and gradually becomes nearly spherical beyond 30 kpc.
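The recovery step can be illustrated with a toy version: the selection function S per (color, magnitude) cell is the ratio of spectroscopic to photometric counts, and dividing the spectroscopic counts by S de-biases them. All numbers below are invented for illustration; the real method works per line-of-sight with distances.

```python
import numpy as np

# Toy selection-function correction on a 5x5 (color, magnitude) grid.
rng = np.random.default_rng(2)
n_phot = rng.integers(200, 400, size=(5, 5)).astype(float)  # photometric counts
S_true = rng.uniform(0.1, 0.5, size=(5, 5))                 # unknown selection
n_spec = np.round(n_phot * S_true)                          # spectroscopic counts

S_hat = n_spec / n_phot                      # recovered selection function
n_corrected = np.where(S_hat > 0, n_spec / S_hat, 0.0)      # de-biased counts
```

By construction the corrected counts reproduce the photometric ones; with real data the interest is in propagating S_hat to the density profile along each line-of-sight.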
Funding: Supported by the National Key R&D Program of China (2018YFE0202900), with additional support from the NAOC Nebula Talents Program and the Cultivation Project for FAST Scientific Payoff and Research Achievement of CAMS-CAS.
Abstract: In radio astronomy, radio frequency interference (RFI) is an increasingly serious problem for observational facilities, as it interferes with the search for and study of astronomical objects of interest. Mitigating RFI is an essential step in any survey data processing. The Five-hundred-meter Aperture Spherical radio Telescope (FAST) is an extremely sensitive radio telescope, so an effective and precise RFI mitigation method is necessary for FAST data processing. In this work, we introduce a method to mitigate RFI in FAST spectral observations and compile RFI statistics using ~300 h of FAST data. The details are as follows. First, according to the characteristics of FAST spectra, we propose using the Asymmetrically Reweighted Penalized Least Squares algorithm for baseline fitting; our tests show that it performs well. Second, we flag the RFI with four strategies: flagging extremely strong RFI, flagging long-lasting RFI, flagging polarized RFI, and flagging beam-combined RFI. The test results show that all RFI above a preset threshold can be flagged. Third, we compute the probabilities of polarized XX and YY RFI in FAST observations. These statistics tell us which frequencies are relatively quiescent, so that such frequencies can be preferred in spectral observations. Finally, based on the ~300 h of FAST data, we obtained an RFI table, currently the most complete database for FAST.
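The baseline-fitting step can be sketched with a close relative of the arPLS algorithm named above: Eilers-style asymmetric least squares (AsLS), a Whittaker smoother that solves (W + λDᵀD)z = Wy and down-weights points above the current baseline so that emission-line-like peaks do not pull the fit up. This is a simplified stand-in on synthetic data, not the authors' arPLS implementation; the reweighting rule differs in detail.

```python
import numpy as np

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least-squares baseline (dense sketch for small n)."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)        # second-difference operator
    penalty = lam * (D.T @ D)
    w = np.ones(n)
    for _ in range(n_iter):
        z = np.linalg.solve(np.diag(w) + penalty, w * y)
        w = np.where(y > z, p, 1 - p)        # asymmetric reweighting
    return z

# Synthetic "spectrum": linear baseline plus one narrow line.
x = np.linspace(0.0, 1.0, 300)
baseline_true = 2.0 + 0.5 * x
y = baseline_true + 5.0 * np.exp(-((x - 0.5) / 0.01) ** 2)

z = asls_baseline(y)
```

A production version would use sparse matrices for long spectra; the dense solve here keeps the sketch self-contained.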
Funding: Supported by the National Key Basic R&D Program of China (2019YFA0405500), the National Natural Science Foundation of China (No. 11603002), and Beijing Normal University (No. 310232102).
Abstract: The Chinese Space Station Telescope (CSST) spectroscopic survey aims to deliver high-quality low-resolution (R>200) slitless spectra for hundreds of millions of targets down to a limiting magnitude of about 21 mag, distributed over a large survey area (17500 deg^2) and covering a wide wavelength range (255-1000 nm, in the three bands GU, GV, and GI). As slitless spectroscopy precludes the use of wavelength calibration lamps, wavelength calibration is one of the most challenging issues in the reduction of slitless spectra, yet it plays a key role in measuring precise radial velocities of stars and redshifts of galaxies. In this work, we propose a star-based method that can monitor and correct for possible errors in the CSST wavelength calibration using normal scientific observations, taking advantage of the facts that (i) about ten million stars with reliable radial velocities are now available thanks to spectroscopic surveys like LAMOST, (ii) the large field of view of CSST enables efficient observations of such stars in a short period of time, and (iii) radial velocities of such stars can be reliably measured using only a narrow segment of a CSST spectrum. We demonstrate that it is possible to achieve a wavelength calibration precision of a few km s^(-1) for the GU band, and about 10 to 20 km s^(-1) for the GV and GI bands, with only a few hundred velocity standard stars. Application of the method to other surveys is also discussed.
Funding: National Natural Science Foundation of China (Grant Nos. 11873066 and U1731109).
Abstract: Spectrum denoising is an important procedure for large-scale spectroscopic surveys. This work proposes a novel stellar spectrum denoising method based on deep Bayesian modeling. The model comprises a prior distribution for each stellar subclass, a spectrum generator, and a flow-based noise model. The method takes the noise correlation structure into account and is not susceptible to strong sky emission lines or cosmic rays. Moreover, it naturally handles spectra with missing flux values, without ad-hoc imputation. The proposed method is evaluated on real stellar spectra from the Sloan Digital Sky Survey (SDSS), covering a comprehensive list of common stellar subclasses, and is compared to a standard denoising auto-encoder. Our method demonstrates superior performance with respect to both denoising quality and missing-flux imputation. It may help improve the accuracy of stellar classification and physical parameter measurement when applied during data preprocessing.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11078010). SOHO is a project of international cooperation between the European Space Agency (ESA) and NASA.
Abstract: A multi-model integration method is proposed to develop a multi-source, heterogeneous model for short-term solar flare prediction. Different prediction models are constructed on the basis of predictors extracted from a pool of observation databases. Because these models extract predictors from many data sources using different prediction methods, the outputs of the base models are first normalized; a weighted integration of the base models then yields a multi-model integrated model (MIM). The weights assigned to the single models are optimized by a genetic algorithm. Seven base models and data from Solar and Heliospheric Observatory/Michelson Doppler Imager longitudinal magnetograms are used to construct the MIM, whose performance is then evaluated by cross-validation. Experimental results show that the MIM outperforms every individual model in nearly every data group, and that the richer the diversity of the base models, the better the performance of the MIM. Thus, integrating more diversified models, such as an expert system, a statistical model, and a physical model, will greatly improve the performance of the MIM.
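The normalize-then-weight scheme can be sketched on toy data. Here two invented base-model score vectors are min-max normalized and combined; the genetic-algorithm weight search of the paper is replaced by a brute-force scan over a coarse weight grid, which is enough to show the mechanism.

```python
import numpy as np

def normalize(x):
    """Min-max normalize base-model outputs to [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

# Invented flare/no-flare labels and two base-model score vectors.
labels = np.array([0, 0, 1, 1, 1, 0, 1, 0])
model_a = normalize(np.array([0.1, 0.3, 0.9, 0.7, 0.8, 0.2, 0.6, 0.4]))
model_b = normalize(np.array([0.2, 0.1, 0.6, 0.9, 0.7, 0.5, 0.8, 0.3]))

# Scan the weight of model_a (model_b gets 1 - w); keep the best accuracy.
best_w, best_acc = None, -1.0
for w in np.linspace(0.0, 1.0, 21):
    combined = w * model_a + (1 - w) * model_b
    acc = ((combined >= 0.5) == labels).mean()
    if acc > best_acc:
        best_w, best_acc = w, acc
```

With seven base models the weight space is too large to scan, which is why the paper resorts to a genetic algorithm for the same objective.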
Abstract: Ag-sheathed (Bi,Pb)2Sr2Ca2Cu3Ox tapes were prepared by the powder-in-tube method. The influences of the rolling parameters on the superconducting characteristics of Bi(2223)/Ag tapes were analyzed qualitatively with a statistical method. The results demonstrate that roll diameter and reduction per pass significantly influence the properties of Bi(2223)/Ag superconducting tapes, while roll speed has less influence and working friction the least. An optimized rolling process was accordingly established from these results.
Funding: Supported by the Eleventh Five-Year Key Technology R&D Program, China (Grant No. 2006BAC02A15); the Natural Science Research Projects of Colleges and Universities in Jiangsu Province (Grant No. 2006BAC02A15); the Jiangsu Province Postdoctoral Fund (Grant No. 0801006C); and the China Postdoctoral Science Foundation (Grant No. 20080441032).
Abstract: The significance of the fluctuation and randomness of the time series of each pollutant in environmental quality assessment is described for the first time in this paper. A comparative study was made of three different computing methods: the same starting point method, the striding averaging method, and the stagger phase averaging method. All of them can be used to calculate the Hurst index, which quantifies fluctuation and randomness. This study used real water quality data from the Shazhu monitoring station on Taihu Lake in Wuxi, Jiangsu Province. The results show that, of the three methods, the stagger phase averaging method is best for calculating the Hurst index of a pollutant time series from the perspective of statistical regularity.
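For readers unfamiliar with the Hurst index, a minimal rescaled-range (R/S) estimator is sketched below on synthetic white noise, for which the index should come out near 0.5 (finite-sample estimates are biased slightly upward). This is the generic textbook estimator, not any of the three specific averaging schemes compared in the paper.

```python
import numpy as np

def hurst_rs(series, window_sizes):
    """Estimate the Hurst index by rescaled-range (R/S) analysis."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())        # cumulative deviations
            r = dev.max() - dev.min()            # range
            s = w.std()                          # scale
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    h, _ = np.polyfit(log_n, log_rs, 1)          # slope of log(R/S) vs log(n)
    return h

rng = np.random.default_rng(3)
h = hurst_rs(rng.normal(size=4096), [16, 32, 64, 128, 256])
```

Values of h well above 0.5 indicate persistence in a pollutant series; values below 0.5 indicate anti-persistence.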
Abstract: Statistical approaches for evaluating causal effects and for discovering causal networks are discussed in this paper. A causal relation between two variables is different from an association or correlation between them. An association measurement between two variables may change dramatically from positive to negative when a third variable is omitted, a phenomenon called the Yule-Simpson paradox. We discuss how to evaluate the causal effect of a treatment or exposure on an outcome so as to avoid this phenomenon. Surrogates and intermediate variables are often used to reduce measurement cost or duration when measurement of the endpoint variable is expensive, inconvenient, infeasible, or unobservable in practice. Many criteria for surrogates have been proposed. However, it is possible that, for a surrogate satisfying these criteria, a treatment has a positive effect on the surrogate, which in turn has a positive effect on the outcome, and yet the treatment has a negative effect on the outcome; this is called the surrogate paradox. We discuss criteria for surrogates that avoid the surrogate paradox. Causal networks, which describe the causal relationships among a large number of variables, have been applied in many research fields, and it is important to discover their structures from observed data. We propose a recursive approach for discovering a causal network in which the structural learning of a large network is decomposed recursively into the learning of small networks. To further discover causal relationships, we present an active learning approach based on external interventions on some variables. When we focus on the causes of an outcome of interest, instead of discovering a whole network, we propose a local learning approach to discover the causes that affect the outcome.
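The Yule-Simpson paradox is easy to exhibit numerically. The snippet below uses the classic textbook kidney-stone figures (not data from this paper): treatment A has the higher success rate in each stone-size stratum, yet the lower rate on the pooled data.

```python
# (successes, patients) per stratum; classic illustrative numbers.
a_small, a_large = (81, 87), (192, 263)
b_small, b_large = (234, 270), (55, 80)

def rate(s):
    return s[0] / s[1]

a_overall = rate((a_small[0] + a_large[0], a_small[1] + a_large[1]))
b_overall = rate((b_small[0] + b_large[0], b_small[1] + b_large[1]))

# A wins within each stratum...
a_wins_each_stratum = rate(a_small) > rate(b_small) and rate(a_large) > rate(b_large)
# ...but B wins on the aggregated table.
b_wins_overall = b_overall > a_overall
```

The reversal arises because stone size is a common cause of both treatment choice and outcome, which is exactly why causal (rather than associational) analysis is needed.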
Funding: National Natural Science Foundation of China (Grant No. 11973021).
Abstract: In pulsar astronomy, detecting genuine pulsar signals among numerous pulsar candidates is an important research topic. Starting from space X-ray pulsar signals, the two-dimensional autocorrelation profile map (2D-APM) feature modelling method is proposed; it applies epoch folding to the autocorrelation function of the X-ray signals and expands the time-domain information along the periodic axis. A uniform criterion for the time resolution of the periodic axis handles pulsar signals without any prior information. Compared with the traditional profile, the model has strong noise resistance, richer information, and consistent characteristics. The new feature is simulated with double Gaussian components, and the characteristic distribution of the model is shown to be closely related to the distance between the double peaks of the profile. Next, a deep convolutional neural network (DCNN) named Inception-ResNet is built. According to the peak separation and the number of arriving photons, 30 data sets based on the Poisson process are simulated to construct the training set, and observations of PSRs B0531+21, B0540-69, and B1509-58 from the Rossi X-ray Timing Explorer (RXTE) are selected to generate the test set. The training and test sets contain 30,000 and 5,400 samples, respectively. After reaching stable convergence, the network recognizes more than 99% of the pulsar signals and rejects more than 99% of the interference, which verifies the close agreement between the network and the feature model and the high potential of the proposed method in searching for pulsars.
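The epoch-folding step at the heart of the 2D-APM method can be sketched in isolation: photon arrival times are reduced modulo a trial period to phases and histogrammed, so a pulsed signal concentrates counts in a few phase bins. The times below are simulated (a Crab-like period is used for flavor), not RXTE data, and the 2D autocorrelation expansion is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
period = 0.0337                                   # s, Crab-like, illustrative

# Unpulsed background photons plus pulsed photons clustered near phase 0.325.
background = rng.uniform(0.0, 100.0, size=20_000)
pulsed = period * (np.arange(3000) + 0.325 + 0.01 * rng.standard_normal(3000))
times = np.concatenate([background, pulsed])

# Epoch folding: phase = fractional part of time / period, then histogram.
phases = (times % period) / period
profile, _ = np.histogram(phases, bins=20, range=(0.0, 1.0))
peak_bin = int(np.argmax(profile))                # bin holding the pulse peak
```

The folded profile is the "traditional profile" the abstract compares against; 2D-APM folds the autocorrelation function instead, trading simplicity for noise resistance.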
Abstract: The quality of low-frequency electromagnetic data is affected by spike and trend noise; failure to remove the spikes and trends reduces the credibility of the data interpretation. Based on analyses of the causes and characteristics of these noises, this paper presents the results of a preset statistics stacking method (PSSM) and a piecewise linear fitting method (PLFM) in de-noising the spikes and trends, respectively. The magnitudes of the spikes are either higher or lower than the normal values, which distorts the useful signal. Comparisons of spike removal among the averaging, statistics, and PSSM methods indicate that only the PSSM can remove the spikes successfully. On the other hand, the spectra of the linear and nonlinear trends lie mainly in the low-frequency band and can change the calculated resistivity significantly, whereas no influence of the trends is observed when the frequency is higher than a certain threshold. The PLFM can effectively remove both linear and nonlinear trends, with errors around 1% in the power spectrum. The proposed methods present an effective way of de-noising spike and trend noise in low-frequency electromagnetic data and establish a basis for further research on de-noising low-frequency noise.
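Why plain averaging fails on spikes, while a robust stack succeeds, can be shown in a few lines. The toy below stacks repeated records of the same signal with two injected spikes: the mean stack inherits a bias at the spike positions, the median stack does not. This illustrates the motivation for a statistics-based stack; the paper's actual PSSM thresholding scheme is not reproduced here.

```python
import numpy as np

# 40 repeated records of one signal, small Gaussian noise, two big spikes.
rng = np.random.default_rng(5)
true_signal = np.sin(np.linspace(0.0, 2 * np.pi, 100))
records = true_signal + 0.01 * rng.standard_normal((40, 100))
records[5, 10] += 50.0      # injected positive spike
records[17, 60] -= 80.0     # injected negative spike

mean_stack = records.mean(axis=0)            # spike leaks into the average
median_stack = np.median(records, axis=0)    # robust to isolated outliers

mean_err = np.abs(mean_stack - true_signal).max()
median_err = np.abs(median_stack - true_signal).max()
```

A single 80-unit spike spread over 40 records still biases the mean by 2 units at that sample, while the median is essentially untouched.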
Funding: National Natural Science Foundation of China.
Abstract: We collected the basic parameters of 231 supernova remnants (SNRs) in our Galaxy, namely distances (d) from the Sun, linear diameters (D), Galactic heights (Z), estimated ages (t), luminosities (L), surface brightnesses (Σ), flux densities (S_1) at 1 GHz, and spectral indices (α). We tried to find possible correlations between these parameters. As expected, the linear diameters were found to increase with age for the shell-type remnants, and also tend to increase with Galactic height. Both the surface brightness and the luminosity of SNRs at 1 GHz tend to decrease with linear diameter and with age. No other relations between the parameters were found.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 10873009) and the National Basic Research Program of China (973 Program, No. 2007CB815404).
Abstract: Existing theory and models suggest that a Type I (merger) GRB should have a larger jet beaming angle than a Type II (collapsar) GRB, but so far no statistical evidence has been available to support this suggestion. In this paper, we obtain a sample of 37 beaming angles and calculate the probability that the suggestion is true. A correction is also devised to account for the scarcity of Type I GRBs in our sample. The probability is calculated to be 83% without the correction and 71% with it.
Funding: Supported by the Special Funds for the National Basic Research Program of China (Grant No. 2012CB025904) and the National Natural Science Foundation of China (Grant Nos. 90916027 and 11302052).
Abstract: This paper focuses on the dynamic thermo-mechanical coupled response of random particulate composite materials. Both the inertia term and the coupling term are considered in the dynamic coupled problem. The formulation of the problem by a statistical second-order two-scale (SSOTS) analysis method and the algorithm procedure based on the finite-element difference method are presented. Numerical results of coupled cases are compared with those of uncoupled cases, showing that the coupling effects on temperature, thermal flux, displacement, and stresses are very distinct, and that the micro-characteristics of the particles affect the coupling behavior of the random composites. Furthermore, the coupling effect causes a lag in the variations of temperature, thermal flux, displacement, and stresses.
Abstract: "Four classes of enterprises above designated size" (hereinafter, four-classes enterprises) are the objects of statistical surveys that have reached a certain scale in China's current statistical method system. They comprise four classes of the national economy: industrial enterprises above designated size; construction and real estate development and management enterprises above qualification grade; wholesale and retail, catering, and accommodation enterprises above designated size; and service enterprises above designated size. These enterprises are the primary part of national economic and social development activities. This paper analyzes the practice of, and difficulties in, the current statistical work on four-classes enterprises, and then proposes some recommendations.