Funding: supported by the National Key R&D Program of China (Nos. 2021YFC2203502 and 2022YFF0711502); the National Natural Science Foundation of China (NSFC) (12173077 and 12003062); the Tianshan Innovation Team Plan of Xinjiang Uygur Autonomous Region (2022D14020); the Tianshan Talent Project of Xinjiang Uygur Autonomous Region (2022TSYCCX0095); the Scientific Instrument Developing Project of the Chinese Academy of Sciences (grant No. PTYQ2022YZZD01); the China National Astronomical Data Center (NADC); the Operation, Maintenance and Upgrading Fund for Astronomical Telescopes and Facility Instruments, budgeted from the Ministry of Finance of China (MOF) and administrated by the Chinese Academy of Sciences (CAS); and the Natural Science Foundation of Xinjiang Uygur Autonomous Region (2022D01A360).
Abstract: To address the problem of real-time processing of ultra-wide-bandwidth pulsar baseband data, we designed and implemented a pulsar baseband data processing algorithm (PSRDP) based on GPU parallel computing technology. PSRDP can perform operations such as baseband data unpacking, channel separation, coherent dedispersion, Stokes detection, phase and folding period prediction, and folding integration on GPU clusters. We tested the algorithm using J0437-4715 pulsar baseband data generated by the CASPSR and Medusa backends of the Parkes telescope, and J0332+5434 pulsar baseband data generated by the self-developed backend of the Nan Shan Radio Telescope, and obtained the pulse profile of each data set. Experimental analysis shows that the pulse profiles generated by the PSRDP algorithm are essentially consistent with the processing results of the Digital Signal Processing Software for Pulsar Astronomy (DSPSR), which verifies the effectiveness of PSRDP. Furthermore, using the same baseband data, we compared the processing speed of PSRDP with that of DSPSR, and the results showed that PSRDP was not slower than DSPSR. The theoretical and technical experience gained from the PSRDP research lays a technical foundation for the real-time processing of ultra-wide-bandwidth pulsar baseband data from the QTT (Qi Tai radio Telescope).
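As an illustration of the coherent dedispersion step named in this abstract, the following minimal NumPy sketch applies the standard inverse dispersion chirp to one block of complex baseband samples. The array names, band parameters, and block length are hypothetical (the DM is chosen close to that of J0437-4715); a production pipeline such as PSRDP or DSPSR additionally handles overlap-save blocking, both polarizations, detection, and GPU offload.

```python
import numpy as np

K_DM = 4.148808e15  # dispersion constant, Hz^2 s per (pc cm^-3)

def coherent_dedisperse(voltage, dm, f0_hz, bw_hz):
    """Apply the inverse dispersion chirp to one block of complex baseband voltage.

    voltage : 1-D complex array, one polarization, band centred on f0_hz
    dm      : dispersion measure in pc cm^-3
    f0_hz   : centre frequency in Hz
    bw_hz   : bandwidth (complex sample rate) in Hz
    """
    n = voltage.size
    f = np.fft.fftfreq(n, d=1.0 / bw_hz)          # offset from band centre, Hz
    phase = 2.0 * np.pi * K_DM * dm * f**2 / (f0_hz**2 * (f0_hz + f))
    chirp = np.exp(1j * phase)                    # sign depends on the data convention
    return np.fft.ifft(np.fft.fft(voltage) * chirp)

# Toy usage: white noise standing in for recorded baseband samples.
rng = np.random.default_rng(0)
block = rng.standard_normal(2**16) + 1j * rng.standard_normal(2**16)
clean = coherent_dedisperse(block, dm=2.64, f0_hz=1.4e9, bw_hz=64e6)
```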
Funding: funded by the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS) (grant No. U2031209), and by the National Natural Science Foundation of China (NSFC, grant Nos. 11872128, 42174192, and 91952111).
Abstract: Seeing is an important index to evaluate the quality of an astronomical site. To estimate seeing at the Muztagh-Ata site quantitatively as a function of height and time, the European Centre for Medium-Range Weather Forecasts reanalysis database (ERA5) is used. Seeing calculated from ERA5 is consistent with the Differential Image Motion Monitor seeing at the height of 12 m. Results show that seeing decays exponentially with height at the Muztagh-Ata site; in 2021 it decays fastest with height in fall and most slowly in summer. The seeing condition is better in fall than in summer. The median value of seeing at 12 m is 0.89 arcsec, with a maximum of 1.21 arcsec in August and a minimum of 0.66 arcsec in October. The median value of seeing at 12 m is 0.72 arcsec in the nighttime and 1.08 arcsec in the daytime. Seeing shows a combination of annual and roughly biannual variations with the same phase as temperature and wind speed, indicating that the seeing variation with time is influenced by temperature and wind speed. The Richardson number Ri is used to analyze atmospheric stability, and the variations of seeing are consistent with Ri between layers. These quantitative results can provide an important reference for a telescope observation strategy.
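The seeing estimates above ultimately rest on integrating an optical-turbulence (Cn²) profile; a minimal sketch of that final step is shown below using the standard Fried-parameter relations. How Cn² is derived from ERA5 meteorological variables is not reproduced here, and the profile used is purely illustrative.

```python
import numpy as np

def seeing_from_cn2(heights_m, cn2, wavelength_m=500e-9):
    """Integrate a Cn^2 profile above the site into a seeing value (FWHM, arcsec)."""
    # Trapezoidal integral of Cn^2 dh (m^(1/3)).
    integral = float(np.sum(0.5 * (cn2[1:] + cn2[:-1]) * np.diff(heights_m)))
    k = 2.0 * np.pi / wavelength_m
    r0 = (0.423 * k**2 * integral) ** (-3.0 / 5.0)   # Fried parameter (m)
    epsilon_rad = 0.98 * wavelength_m / r0           # seeing FWHM (rad)
    return np.degrees(epsilon_rad) * 3600.0          # arcsec

# Hypothetical profile: Cn^2 decaying exponentially with height above the site.
h = np.linspace(12.0, 20000.0, 500)
cn2 = 1e-16 * np.exp(-h / 1500.0)
print(f"seeing ~ {seeing_from_cn2(h, cn2):.2f} arcsec")
```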
Abstract: This paper establishes the phase space in light of spatial series data, discusses the fractal structure of geological data in terms of correlation functions, and studies the chaos of these data. In addition, it introduces R/S analysis, originally developed for time series, into spatial series to calculate the structural fractal dimensions of the range and standard deviation of spatial series data, to establish the fractal dimension matrix, and to describe the procedure for plotting the fractal dimension anomaly diagram with vector distances of fractal dimension. Finally, examples of its application are given.
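R/S analysis as invoked above can be sketched compactly: the rescaled range R/S is computed over windows of increasing length, the Hurst exponent H is the slope of log(R/S) versus log(window length), and for a one-dimensional self-affine profile the fractal dimension is commonly taken as D = 2 − H. The spatial series below is a synthetic placeholder, not geological data.

```python
import numpy as np

def rescaled_range(series, window):
    """Average R/S statistic over non-overlapping windows of the given length."""
    rs = []
    for i in range(series.size // window):
        seg = series[i * window:(i + 1) * window]
        dev = np.cumsum(seg - seg.mean())   # cumulative deviation from the window mean
        r = dev.max() - dev.min()           # range
        s = seg.std()                       # standard deviation
        if s > 0:
            rs.append(r / s)
    return np.mean(rs)

def hurst_exponent(series, windows=(8, 16, 32, 64, 128)):
    """Estimate H from the slope of log(R/S) versus log(window length)."""
    pts = np.array([(np.log(w), np.log(rescaled_range(series, w))) for w in windows])
    return np.polyfit(pts[:, 0], pts[:, 1], 1)[0]

# Toy spatial series: uncorrelated increments, so H should come out near 0.5.
rng = np.random.default_rng(1)
increments = rng.standard_normal(1024)
H = hurst_exponent(increments)
print(f"H = {H:.2f}, fractal dimension D = {2 - H:.2f}")
```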
Abstract: Experimental and theoretical studies of the mechanisms of vibration stimulation of oil recovery in watered fields lead to the conclusion that resonance oscillations develop in fractured-block formations. These oscillations, caused by weak but long-lasting and frequency-stable influences, create the conditions for ultrasonic wave generation in the layers, which is capable of destroying thickened oil membranes in reservoir cracks. For fractured-porous reservoirs exploited by high-pressure water displacement of oil, the possibility of intensifying ultrasonic vibrations can have important technological significance. Even very weak ultrasound can, over a long period of time, destroy the viscous oil membranes formed in the cracks between the blocks, which can be the reason for lowered permeability of the layers, and thereby increase oil recovery. To describe these effects, it is necessary to consider the wave process in a hierarchically blocky medium and to theoretically simulate the mechanism of the appearance of self-oscillations under the action of relaxation shear stresses. For the analysis of the seismoacoustic response in time over fixed intervals along the borehole, an algorithm based on phase diagrams of the state of a multiphase medium is suggested.
Abstract: The development of adaptation measures to climate change relies on data from climate models or impact models. In order to analyze these large data sets, or an ensemble of them, statistical methods are required. In this paper, the methodological approach to collecting, structuring, and publishing the methods which have been used or developed by former or present adaptation initiatives is described. The intention is to communicate the knowledge achieved and thus support future users. A key component is the participation of users in the development process. The main elements of the approach are standardized, template-based descriptions of the methods, including the specific applications, references, and method assessment. All contributions have been quality checked, sorted, and placed in a larger context. The result is a report on statistical methods which is freely available in printed and online versions. Examples of how to use the methods are presented in this paper and are also included in the brochure.
Abstract: It is not reasonable that only the adjoint of the model can be used in data assimilation. A simulated numerical experiment shows that, for the tidal model, the result of the adjoint of the equation is almost the same as that of the adjoint of the model: the averaged absolute difference of the amplitude between observations and simulation is less than 5.0 cm, and that of the phase lag is less than 5.0°. Both results are in good agreement with the observed M2 tide in the Bohai Sea and the Yellow Sea. For comparison, traditional methods have also been used to simulate the M2 tide in the Bohai Sea and the Yellow Sea: initial guess values of the boundary conditions are given first and then adjusted to bring the simulated results as close as possible to the observations. Because the boundary conditions contain 72 values, deciding which should be adjusted and how to adjust them can only be partially solved by adjusting them many times, and satisfactory results are hard to obtain even with enormous effort. Here, the treatment of the open boundary conditions is automated. The method is unique and superior to the traditional methods. It is emphasized that if the adjoint of the equation is used, tedious and complicated mathematical deduction can be avoided. Therefore the adjoint of the equation deserves much attention.
Abstract: Mitigating increasing cyberattack incidents may require strategies such as reinforcing organizations' networks with Honeypots and effectively analyzing attack traffic for detection of zero-day attacks and vulnerabilities. To effectively detect and mitigate cyberattacks, both computerized and visual analyses are typically required. However, most security analysts are not adequately trained in visualization principles and/or methods, which are required for effective visual perception of useful attack information hidden in attack data. Additionally, Honeypots have proven useful in cyberattack research, but no studies have comprehensively investigated visualization practices in the field. In this paper, we reviewed visualization practices and methods commonly used in the discovery and communication of attack patterns based on Honeypot network traffic data. Using the PRISMA methodology, we identified and screened 218 papers and evaluated only the 37 papers with high impact. Most Honeypot papers conducted summary statistics of Honeypot data based on static data metrics such as IP address, port, and packet size. They visually analyzed Honeypot attack data using simple graphical methods (such as line, bar, and pie charts) that tend to hide useful attack information. Furthermore, only a few papers conducted extended attack analysis, and these commonly visualized attack data using scatter and linear plots. Papers rarely included simple yet sophisticated graphical methods, such as box plots and histograms, which allow for critical evaluation of analysis results. While a significant number of automated visualization tools have incorporated visualization standards by default, the construction of effective and expressive graphical methods for easy pattern discovery and explainable insights still requires applied knowledge and skill of visualization principles and tools and, occasionally, interdisciplinary collaboration with peers. We therefore suggest the need, going forward, for non-classical graphical methods for visualizing attack patterns and communicating analysis results. We also recommend training investigators in visualization principles and standards for effective visual perception and presentation.
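As a minimal illustration of the box plots and histograms recommended above, the following matplotlib sketch plots synthetic packet sizes from two hypothetical honeypot sensors; it is not drawn from any of the reviewed papers.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
# Synthetic packet sizes (bytes) captured by two hypothetical honeypot sensors.
sensor_a = rng.lognormal(mean=6.0, sigma=0.8, size=2000)
sensor_b = rng.lognormal(mean=6.5, sigma=0.5, size=2000)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))
ax1.boxplot([sensor_a, sensor_b], labels=["sensor A", "sensor B"])
ax1.set_ylabel("packet size (bytes)")
ax2.hist(sensor_a, bins=50, alpha=0.6, label="sensor A")
ax2.hist(sensor_b, bins=50, alpha=0.6, label="sensor B")
ax2.set_xlabel("packet size (bytes)")
ax2.legend()
fig.tight_layout()
plt.show()
```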
Funding: supported by the National Key R&D Program of China (2021YFA0718500); the Strategic Priority Research Program on Space Science, the Chinese Academy of Sciences (grant Nos. XDA15360102, XDA15360300, XDA15052700 and E02212A02S); the National Natural Science Foundation of China (grant Nos. 12173038 and U2038106); and the National HEP Data Center (grant No. E029S2S1).
Abstract: Fast and reliable localization of high-energy transients is crucial for characterizing the burst properties and guiding follow-up observations. Localization based on the relative counts of different detectors has been widely used for all-sky gamma-ray monitors. There are two major methods for this count-distribution localization: the χ² minimization method and the Bayesian method. Here we propose a modified Bayesian method that takes advantage of both the accuracy of the Bayesian method and the simplicity of the χ² method. With comprehensive simulations, we find that our Bayesian method with a Poisson likelihood is generally more applicable to various bursts than the χ² method, especially for weak bursts. We further propose a location-spectrum iteration approach based on Bayesian inference, which can alleviate the problems caused by the spectral difference between the burst and the location templates. Our method is well suited to scenarios with limited computational resources or time-sensitive applications, such as in-flight localization software and low-latency localization for rapid follow-up observations.
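A minimal sketch of the count-distribution idea discussed above: for each candidate direction, a response template predicts the expected counts in each detector, and the direction maximizing the Poisson log-likelihood is selected. The templates, detector counts, and amplitude scan below are synthetic stand-ins, not the instrument response or spectral templates used in the paper.

```python
import numpy as np

def poisson_loglike(observed, expected):
    """Poisson log-likelihood summed over detectors (constant ln(n!) term dropped)."""
    expected = np.clip(expected, 1e-12, None)
    return np.sum(observed * np.log(expected) - expected)

def localize(counts, templates, amplitudes):
    """Grid search over candidate directions and trial source amplitudes.

    counts     : (n_det,) observed counts per detector
    templates  : (n_dir, n_det) relative detector response per candidate direction
    amplitudes : trial source amplitudes to scan over
    """
    best = np.full(templates.shape[0], -np.inf)
    for i, resp in enumerate(templates):
        for a in amplitudes:
            best[i] = max(best[i], poisson_loglike(counts, a * resp))
    return int(np.argmax(best))

# Toy setup: 12 detectors, 500 candidate directions with random responses.
rng = np.random.default_rng(3)
templates = rng.uniform(0.1, 1.0, size=(500, 12))
true_dir = 123
counts = rng.poisson(80.0 * templates[true_dir])
print("recovered direction index:", localize(counts, templates, np.linspace(40, 160, 25)))
```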
Funding: supported by the National Key R&D Program of China (grant No. 2022YFF0503800); the National Natural Science Foundation of China (NSFC) (grant No. 11427901); the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS-SPP) (grant No. XDA15320102); and the Youth Innovation Promotion Association (CAS No. 2022057).
Abstract: The Solar Polar-orbit Observatory (SPO), proposed by Chinese scientists, is designed to observe the solar polar regions in an unprecedented way with a spacecraft traveling in an orbit with a large solar inclination angle and a small ellipticity. However, one of the most significant challenges lies in ultra-long-distance data transmission, particularly for the Magnetic and Helioseismic Imager (MHI), which is the most important payload and generates the largest volume of data on SPO. In this paper, we propose a tailored lossless data compression method based on the measurement mode and characteristics of MHI data. The background outside the solar disk is removed to decrease the number of pixels in an image under compression. Multiple predictive coding methods are combined to eliminate redundancy by utilizing the correlations (spatial, spectral, and polarization) in the data set, improving the compression ratio. Experimental results demonstrate that our method achieves an average compression ratio of 3.67. The compression time is also less than the general observation period. The method exhibits strong feasibility and can be easily adapted to MHI.
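A minimal sketch of the predictive-coding idea: predict each pixel from its left neighbour and estimate the entropy of the residuals as a rough bound on the achievable lossless compression ratio. The actual MHI scheme combines spatial, spectral, and polarization predictors with a real entropy coder and disk-background removal; the image below is synthetic.

```python
import numpy as np

def residual_entropy_bits(image):
    """Entropy (bits/pixel) of left-neighbour prediction residuals of a 2-D integer image."""
    pred = np.empty_like(image)
    pred[:, 0] = image[:, 0]                       # first column stored as-is
    pred[:, 1:] = image[:, 1:] - image[:, :-1]     # horizontal prediction residuals
    vals, counts = np.unique(pred, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic solar-disk-like image: smooth gradient plus noise, background zeroed.
rng = np.random.default_rng(5)
y, x = np.mgrid[0:512, 0:512]
disk = ((x - 256) ** 2 + (y - 256) ** 2) < 200 ** 2
img = (disk * (30000 + 20 * x + rng.normal(0, 30, x.shape))).astype(np.int32)

bits = residual_entropy_bits(img)
print(f"estimated ratio for a 16-bit image: {16.0 / bits:.2f}:1  ({bits:.2f} bits/pixel)")
```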
Funding: funded by the National Natural Science Foundation of China (NSFC, Nos. 12373086 and 12303082); the CAS "Light of West China" Program; the Yunnan Revitalization Talent Support Program in Yunnan Province; and the National Key R&D Program of China, Gravitational Wave Detection Project No. 2022YFC2203800.
Abstract: Attitude is one of the crucial parameters of space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves due to various reasons, so preprocessing is required to remove them and obtain high-quality light curves. Through statistical analysis, the reasons leading to outliers can be categorized into two main types: first, the brightness of the object significantly increases due to the passage of a star nearby, referred to as "stellar contamination," and second, the brightness markedly decreases due to cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive, so we propose machine learning methods as a substitute. Convolutional Neural Networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods, such as ResNet-18 and Light Gradient Boosting Machine, and conduct comparative analyses of the results.
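A schematic of the SVM branch mentioned above, with synthetic two-feature frames standing in for the real image-derived inputs; the F1 score is computed on a held-out split, mirroring the metric reported in the abstract. The features, class balance, and labels are all assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# Synthetic per-frame features: (brightness change, scatter of the background level).
rng = np.random.default_rng(11)
clean  = rng.normal([0.0, 1.0], 0.3, size=(500, 2))
cloudy = rng.normal([-2.0, 2.5], 0.4, size=(500, 2))   # brightness drop, noisier background
X = np.vstack([clean, cloudy])
y = np.r_[np.zeros(500), np.ones(500)]                  # 1 = "cloudy contamination"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("F1 on test set:", f1_score(y_te, clf.predict(X_te)))
```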
Funding: supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (grant No. XDB41000000); the National Natural Science Foundation of China (NSFC, Grant Nos. 12233008 and 11973038); the China Manned Space Project (No. CMS-CSST-2021-A07); the Cyrus Chun Ying Tang Foundations; and the Hong Kong Innovation and Technology Fund through the Research Talent Hub program (GSP028).
Abstract: Most existing star-galaxy classifiers depend on reduced information from catalogs, necessitating careful data processing and feature extraction. In this study, we employ a supervised machine learning method (GoogLeNet) to automatically classify stars and galaxies in the COSMOS field. Unlike traditional machine learning methods, we introduce several preprocessing techniques, including noise reduction and the unwrapping of denoised images in polar coordinates, applied to our carefully selected samples of stars and galaxies. By dividing the selected samples into training and validation sets in an 8:2 ratio, we evaluate the performance of the GoogLeNet model in distinguishing between stars and galaxies. The results indicate that the GoogLeNet model is highly effective, achieving accuracies of 99.6% and 99.9% for stars and galaxies, respectively. Furthermore, by comparing the results with and without preprocessing, we find that preprocessing can significantly improve classification accuracy (by approximately 2.0% to 6.0%) when the images are rotated. In preparation for the future launch of the China Space Station Telescope (CSST), we also evaluate the performance of the GoogLeNet model on CSST simulation data. These results demonstrate a high level of accuracy (approximately 99.8%), indicating that this model can be effectively utilized for future observations with the CSST.
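A minimal sketch of the polar-unwrapping preprocessing step: the denoised cutout is resampled onto a (radius, angle) grid about its centre, so that a rotation of the source becomes a cyclic shift along the angle axis. The cutout size, sampling, and the toy source are placeholders, not the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_polar(img, n_r=32, n_theta=64):
    """Resample a square cutout onto a (radius, angle) grid centred on the image centre."""
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    r = np.linspace(0, min(cx, cy), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    ys = cy + rr * np.sin(tt)
    xs = cx + rr * np.cos(tt)
    return map_coordinates(img, [ys, xs], order=1)   # bilinear interpolation

# Toy galaxy-like cutout: an elliptical Gaussian blob.
y, x = np.mgrid[0:64, 0:64] - 31.5
cutout = np.exp(-(x**2 / 80.0 + y**2 / 20.0))
polar = unwrap_polar(cutout)
print(polar.shape)   # (n_r, n_theta) = (32, 64)
```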
Funding: supported by the National Natural Science Foundation of China (Nos. 61873196 and 62373030) and the Innovation Program for Quantum Science and Technology (No. 2021ZD0303400).
Abstract: In the two-dimensional positioning method for pulsars, the grid method is used to provide estimates of the non-sensitive direction and the position. However, the grid method has a high computational load and low accuracy due to the grid interval. To improve estimation accuracy and reduce the computational load, we propose a fast two-dimensional positioning method for the Crab pulsar based on multiple optimization algorithms (FTPCO). The FTPCO uses the Levenberg–Marquardt (LM) algorithm, the three-point orientation (TPO) method, particle swarm optimization (PSO), and the Newton–Raphson-based optimizer (NRBO) to replace the grid method. First, to avoid the influence of the non-sensitive direction on positioning, we take the orbital error and the distortion of the pulsar profile as optimization objectives and combine the grid method with the LM algorithm or PSO to search for the non-sensitive direction. Then, on the sensitive plane perpendicular to the non-sensitive direction, the TPO method is proposed to rapidly search for the sensitive and sub-sensitive directions. Finally, the NRBO is employed along the sensitive and sub-sensitive directions to achieve two-dimensional positioning of the Crab pulsar. Simulation results show that, compared with the grid method, the computational load of the FTPCO is reduced by 89.4% and its positioning accuracy is improved by approximately 38%. The FTPCO has the advantage of high real-time accuracy and does not fall into local optima.
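Of the optimizers named above, particle swarm optimization is the most generic; the sketch below is a minimal, generic PSO minimizing a placeholder objective. It is not the FTPCO pipeline itself, which couples PSO with the LM, TPO, and NRBO stages and uses the orbital error and profile distortion as its cost.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer; returns the best position found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Placeholder objective standing in for the orbit-error / profile-distortion cost.
sphere = lambda p: float(np.sum((p - 1.2) ** 2))
print(pso(sphere, bounds=[(-5, 5), (-5, 5)]))
```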
Funding: supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 51205283).
Abstract: This paper introduces the basic theory and algorithm of the surrogate data method, which provides a rigorous way to detect random and seemingly stochastic characteristics in a system. Gaussian data and Rossler data were used to show the availability and effectiveness of this method. Analysis by this method of the short-circuiting current signals obtained under the same voltage and different wire feed speeds demonstrates that the electrical signal time series exhibit apparent randomness when the welding parameters do not match, whereas they are deterministic when a match is found. The stability of the short-circuiting transfer process can thus be judged exactly by the surrogate data method.
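A minimal sketch of the standard phase-randomized surrogate construction underlying the surrogate data method: each surrogate keeps the amplitude spectrum of the original series but randomizes its Fourier phases, and a discriminating statistic is then compared between the data and the surrogate ensemble. The signal and the statistic below are placeholders, not the welding-current data or the statistic used in the paper.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate series with the same amplitude spectrum as x but randomized phases."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, spec.size)
    phases[0] = 0.0                      # keep the mean term real
    if x.size % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist term real for even length
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

rng = np.random.default_rng(2)
signal = np.sin(0.3 * np.arange(2048)) + 0.2 * rng.standard_normal(2048)
surrogates = [phase_randomized_surrogate(signal, rng) for _ in range(39)]

# Discriminating statistic (here a simple nonlinear lag-1 correlation of squared values).
stat = lambda s: np.corrcoef(s[:-1] ** 2, s[1:] ** 2)[0, 1]
print("data:", stat(signal), " surrogate mean:", np.mean([stat(s) for s in surrogates]))
```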
Funding: National Natural Science Foundation of China (No. 41801379); Fundamental Research Funds for the Central Universities (No. 2019B08414); National Key R&D Program of China (No. 2016YFC0401801).
Abstract: Tunnel deformation monitoring is a crucial task for evaluating tunnel stability during the metro operation period. Terrestrial Laser Scanning (TLS), as an innovative technique, can collect high-density and high-accuracy point cloud data in a few minutes, which provides promising applications in tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and performing convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points by point cloud division, and the ground points are then removed by defining an elevation band of 0.5 m. Next, a z-score method is introduced to detect and remove outliers. Because the standard shape of a tunnel cross-section is round, circle fitting is implemented using the least-squares method. Afterward, the convergence analysis is made at angles of 0°, 30°, and 150°. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired using a FARO X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, in agreement with the measurements acquired by a total station instrument. The proposed methodology provides new insights and references for the application of TLS in tunnel deformation monitoring and can also be extended to other engineering applications.
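Two of the numerical steps named above can be sketched compactly: a z-score filter on radial residuals and an algebraic least-squares (Kåsa-type) circle fit. The cross-section points below are synthetic; a real pipeline would first apply the PCA-based orientation, point-cloud division, and ground removal described in the abstract.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic least-squares circle fit: returns centre (a, b) and radius r."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    c, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    a, b = c[0], c[1]
    r = np.sqrt(c[2] + a**2 + b**2)
    return a, b, r

def remove_outliers(xy, threshold=3.0):
    """Drop points whose radial residual z-score exceeds the threshold."""
    a, b, r = fit_circle(xy)
    res = np.hypot(xy[:, 0] - a, xy[:, 1] - b) - r
    z = (res - res.mean()) / res.std()
    return xy[np.abs(z) < threshold]

# Synthetic cross-section: radius 2.75 m circle, 1 mm noise, plus a few stray points.
rng = np.random.default_rng(4)
t = rng.uniform(0, 2 * np.pi, 2000)
pts = np.column_stack([2.75 * np.cos(t), 2.75 * np.sin(t)]) + rng.normal(0, 0.001, (2000, 2))
pts[:10] += 0.05                                   # simulated outliers
a, b, r = fit_circle(remove_outliers(pts))
print(f"fitted radius = {r:.4f} m")
```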
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 11173038 and 11103022) and the Tsinghua University Initiative Scientific Research Program (Grant No. 20111081102).
Abstract: The hard X-ray modulation telescope (HXMT) mission is mainly devoted to performing an all-sky survey at 1-250 keV with both high sensitivity and high spatial resolution. The observed data reduction as well as the image reconstruction for HXMT can be achieved using the direct demodulation method (DDM). However, the original DDM is too computationally expensive for high-resolution multi-dimensional data to be employed for HXMT data. We propose an accelerated direct demodulation method especially adapted to data from HXMT. Simulations are also presented to demonstrate this method.
Funding: supported by the State Key Project of Research and Development Plan (2016YFA0400204); the National Natural Science Foundation of China (U1738133); the Strategic Pioneer Research Program in Space Science of the Chinese Academy of Sciences (CAS); the Youth Innovation Promotion Association of CAS; the Ministry of Science and Technology of Jiangsu Province (17KJD510001); and the Changzhou Institute of Technology (YN1611).
Abstract: The DArk Matter Particle Explorer (DAMPE) is a general-purpose high-energy cosmic-ray and gamma-ray observatory, aiming to detect high-energy electrons and gamma rays in the energy range 5 GeV to 10 TeV, and nuclei up to hundreds of TeV. This paper provides a method using machine learning to identify electrons and separate them from gamma rays, protons, helium, and heavy nuclei with the DAMPE data acquired from 2016 January 1 to 2017 June 30, in the energy range from 10 to 100 GeV.
Abstract: With the development of distribution automation systems, the centralized meter reading system has been adopted more and more extensively, which provides real-time electricity consumption data of end-users and consequently lays the foundation for on-line analysis of the operating condition of a distribution network. In this paper, a modified back/forward sweep method, which directly uses real-time electricity consumption data acquired from the centralized meter reading system, is proposed to realize voltage analysis based on 24-hour electricity consumption data of a typical transformer district. Furthermore, the calculated line losses are verified against data collected from the energy metering of the distribution transformer, illustrating that the proposed method can be applied to analyzing voltage levels and discovering unknown energy losses, which will lay the foundation for on-line analysis, calculation, and monitoring of the power distribution network.
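A minimal sketch of the classical backward/forward sweep on a small radial feeder, with per-node complex loads standing in for values derived from metered consumption data. The feeder layout, impedances, and loads are hypothetical, and the paper's specific modification for meter-reading data is not reproduced here.

```python
import numpy as np

def backward_forward_sweep(v_source, z_line, s_load, n_iter=30):
    """Per-node complex voltages of a radial chain feeder (node 0 = source).

    z_line : complex impedance of the section feeding each node 1..N (ohm)
    s_load : complex power drawn at each node 1..N (VA), e.g. from hourly meter data
    """
    n = len(s_load)
    v = np.full(n + 1, v_source, dtype=complex)
    for _ in range(n_iter):
        i_load = np.conj(s_load / v[1:])             # load currents from S = V * conj(I)
        i_branch = np.cumsum(i_load[::-1])[::-1]     # backward sweep: sum downstream currents
        for k in range(1, n + 1):                    # forward sweep: update node voltages
            v[k] = v[k - 1] - z_line[k - 1] * i_branch[k - 1]
    return v

# Hypothetical 4-node feeder at 10 kV.
v = backward_forward_sweep(
    v_source=10_000 + 0j,
    z_line=np.array([0.5 + 0.8j, 0.5 + 0.8j, 0.4 + 0.6j, 0.4 + 0.6j]),
    s_load=np.array([200e3 + 60e3j, 150e3 + 40e3j, 180e3 + 50e3j, 120e3 + 30e3j]),
)
print(np.round(np.abs(v), 1))   # voltage magnitude profile along the feeder
```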
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 41974199 and 41574167) and the B-type Strategic Priority Program of the Chinese Academy of Sciences (XDB41000000).
Abstract: Radioheliograph images are essential for the study of solar short-term activities and long-term variations, but the continuity and granularity of radioheliograph data are not ideal, due to the short visible time of the Sun and the complex electromagnetic environment near a ground-based radio telescope. In this work, we develop a multi-channel-input, single-channel-output neural network, which can generate a radioheliograph image in the microwave band from the Extreme Ultraviolet (EUV) observations of the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO). The neural network is trained with nearly 8 years of data from the Nobeyama Radioheliograph (NoRH) at 17 GHz and SDO/AIA from January 2011 to September 2018. The generated radioheliograph image is in good agreement with the well-calibrated NoRH observation. SDO/AIA provides solar atmosphere images in multiple EUV wavelengths every 12 seconds from space, so the present model can fill the gap left by the limited observation time of the microwave radioheliograph and support further study of the relationship between microwave and EUV emission.
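A minimal PyTorch sketch of a multi-channel-input, single-channel-output convolutional mapping (several EUV channels in, one 17 GHz map out). The layer sizes, channel count, and input resolution are placeholders; the actual architecture and training procedure of the paper are not specified here.

```python
import torch
import torch.nn as nn

class EUVToRadio(nn.Module):
    """Toy multi-channel-in, single-channel-out convolutional mapping."""
    def __init__(self, in_channels=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = EUVToRadio(in_channels=7)
aia_stack = torch.randn(1, 7, 256, 256)   # one synthetic 7-channel EUV stack
norh_like = model(aia_stack)              # predicted microwave map, shape (1, 1, 256, 256)
print(norh_like.shape)
```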
Abstract: The method of least-squares linear fitting applies, under certain conditions, to the analysis of data in which errors do not exist. In order to make linear data fitting more accurately determine the relationship between quantities in scientific experiments and engineering practice, this article analyzes the data errors of common linear data fitting methods and proposes an improved procedure, the least-distance-square method, based on the least-squares method. Finally, the paper discusses the advantages and disadvantages of the two kinds of linear data fitting through example analysis and gives reasonable control conditions for their application.
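The contrast drawn above can be shown in a few lines: ordinary least squares minimizes vertical residuals only, while a least-distance (orthogonal) fit minimizes perpendicular distances, which matters when both quantities carry errors. The data below are synthetic, and the orthogonal fit shown is the standard principal-axis construction, used here only to illustrate the idea.

```python
import numpy as np

rng = np.random.default_rng(6)
true_slope, true_intercept = 2.0, 1.0
x_true = np.linspace(0, 10, 60)
x = x_true + rng.normal(0, 0.4, x_true.size)        # errors in x as well as y
y = true_slope * x_true + true_intercept + rng.normal(0, 0.4, x_true.size)

# Ordinary least squares: minimizes vertical residuals only.
k_ols, b_ols = np.polyfit(x, y, 1)

# Orthogonal ("least distance") fit via the principal axis of the centred data.
xc, yc = x - x.mean(), y - y.mean()
_, _, vt = np.linalg.svd(np.column_stack([xc, yc]), full_matrices=False)
dx, dy = vt[0]                                       # direction of largest variance
k_tls = dy / dx
b_tls = y.mean() - k_tls * x.mean()

print(f"OLS slope        {k_ols:.3f}")
print(f"orthogonal slope {k_tls:.3f}   (true slope {true_slope})")
```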