With the continued development of multiple Global Navigation Satellite Systems (GNSS) and the emergence of various frequencies, UnDifferenced and UnCombined (UDUC) data processing has become an increasingly attractive option. In this contribution, we provide an overview of the current status of UDUC GNSS data processing activities in China. These activities encompass the formulation of Precise Point Positioning (PPP) models and PPP Real-Time Kinematic (PPP-RTK) models for processing single-station and multi-station GNSS data, respectively. Regarding single-station data processing, we discuss the advancements in PPP models, particularly the extension from a single system to multiple systems, and from dual frequencies to single and multiple frequencies. Additionally, we introduce the modified PPP model, which accounts for the time variation of receiver code biases, a departure from the conventional PPP model that typically assumes these biases to be time-constant. In the realm of multi-station PPP-RTK data processing, we introduce the ionosphere-weighted PPP-RTK model, which enhances the model strength by considering the spatial correlation of ionospheric delays. We also review the phase-only PPP-RTK model, designed to mitigate the impact of unmodelled code-related errors. Furthermore, we explore GLONASS PPP-RTK, achieved through the application of the integer-estimable model. For large-scale network data processing, we introduce the all-in-view PPP-RTK model, which alleviates the strict common-view requirement at all receivers. Moreover, we present the decentralized PPP-RTK data processing strategy, designed to improve computational efficiency. Overall, this work highlights the various advancements in UDUC GNSS data processing, providing insights into the state-of-the-art techniques employed in China to achieve precise GNSS applications.
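In the UDUC formulation, raw code and phase observations are processed directly, without between-receiver differencing or ionosphere-free combinations. A generic single-receiver pair of observation equations on frequency j can be written as follows (a standard textbook form; the exact parameterization used in the reviewed models may differ):

```latex
\begin{align}
p^{s}_{r,j} &= \rho^{s}_{r} + \mathrm{d}t_{r} - \mathrm{d}t^{s} + \mu_{j}\,\iota^{s}_{r} + d_{r,j} - d^{s}_{j} + \epsilon_{p},\\
\phi^{s}_{r,j} &= \rho^{s}_{r} + \mathrm{d}t_{r} - \mathrm{d}t^{s} - \mu_{j}\,\iota^{s}_{r} + \lambda_{j}\,N^{s}_{r,j} + \delta_{r,j} - \delta^{s}_{j} + \epsilon_{\phi},
\end{align}
```

where $p$ and $\phi$ are the code and phase observables, $\rho$ the receiver-satellite range, $\mathrm{d}t_{r}$ and $\mathrm{d}t^{s}$ the receiver and satellite clock offsets, $\iota$ the slant ionospheric delay scaled by $\mu_{j} = f_{1}^{2}/f_{j}^{2}$, $d$ and $\delta$ the code and phase biases, and $\lambda_{j}N^{s}_{r,j}$ the ambiguity term. The receiver code bias $d_{r,j}$ is the term that the modified PPP model mentioned above re-parameterizes as time-varying rather than constant.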
The current velocity observation of the LADCP (Lowered Acoustic Doppler Current Profiler) has the advantages of a large vertical observation range and high operability compared with traditional current measurement methods, and is widely used in the field of ocean observation. The shear and inverse methods are now commonly used by the international marine community to process LADCP data and calculate ocean current profiles. The two methods have their respective advantages and shortcomings: the shear method calculates the current shear more accurately, but its accuracy in the absolute value of the current velocity is lower; the inverse method calculates the absolute value of the current velocity more accurately, but its current shear is less accurate. Based on the shear method, this paper proposes a layering shear method that calculates the current velocity profile by "layering averaging", and proposes corresponding current calculation methods according to the different types of problems in several field observation datasets from the western Pacific, forming an independent LADCP data processing system. The comparison results show that the layering shear method achieves the same effect as the inverse method in the calculation of the absolute value of current velocity, while retaining the advantages of the shear method in the calculation of the current shear.
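The core of any shear-based LADCP scheme is to difference the measured velocities into vertical shear, average the shear in depth bins, and integrate back to a velocity profile. The sketch below illustrates that idea with bin-averaging standing in for the paper's "layering averaging"; the function and variable names are hypothetical, and the barotropic integration constant (which the shear method cannot recover) is simply set to zero.

```python
import numpy as np

def layered_shear_profile(z, u, bin_size=10.0):
    """z: depths (m, increasing); u: velocities from short overlapping segments.
    Returns depth-bin centers and a baroclinic velocity profile obtained by
    integrating bin-averaged shear."""
    dz = np.diff(z)
    shear = np.diff(u) / dz                     # first-difference shear du/dz
    zc = 0.5 * (z[:-1] + z[1:])                 # mid-depths of shear estimates
    edges = np.arange(z[0], z[-1] + bin_size, bin_size)
    idx = np.digitize(zc, edges) - 1
    nbins = len(edges) - 1
    mean_shear = np.array([shear[idx == k].mean() if np.any(idx == k) else 0.0
                           for k in range(nbins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Integrate the layer-mean shear downward; the integration constant
    # (the barotropic part) is unknown to the shear method and set to zero.
    u_baroclinic = np.concatenate(([0.0], np.cumsum(mean_shear * bin_size)))[:-1]
    return centers, u_baroclinic
```

A uniformly sheared test profile is recovered exactly up to the unknown constant, which is the behavior the abstract describes for the shear branch of the method.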
The Kuiyang-ST2000 deep-towed high-resolution multichannel seismic system was designed by the First Institute of Oceanography, Ministry of Natural Resources (FIO, MNR). The system is mainly composed of a plasma spark source (source level: 216 dB, main frequency: 750 Hz, frequency bandwidth: 150-1200 Hz) and a towed hydrophone streamer with 48 channels. Because the source and the towed hydrophone streamer are constantly moving according to the towing configuration, the accurate positioning of the towed hydrophone streamer and the moveout correction of deep-towed multichannel seismic data before imaging are challenging. Initially, according to the characteristics of the system and the shape of the towed streamer in deep water, a travel-time positioning method was used to construct the hydrophone streamer shape, and the results were corrected by using the polynomial curve-fitting method. Then, a new data-processing workflow for Kuiyang-ST2000 system data was introduced, mainly including float datum setting, residual static correction, and phase-based moveout correction, which allows the imaging algorithms of conventional marine seismic data processing to be extended to deep-towed seismic data. We successfully applied the Kuiyang-ST2000 system and data-processing methodology to a gas hydrate survey of the Qiongdongnan and Shenhu areas in the South China Sea, and the results show that the profile has very high vertical and lateral resolutions (0.5 m and 8 m, respectively), which can provide full and accurate details of gas hydrate-related and geohazard sedimentary and structural features in the South China Sea.
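The polynomial curve-fitting correction of the streamer shape can be sketched as a least-squares polynomial fit to the travel-time-derived channel positions; the fitting order and channel geometry below are illustrative, not the authors' actual choices.

```python
import numpy as np

def smooth_streamer_shape(x, z, order=3):
    """x: along-track offsets of the streamer channels (m);
    z: channel depths estimated from direct-arrival travel times (m).
    Returns polynomial-smoothed depths evaluated at the same offsets."""
    coeffs = np.polyfit(x, z, order)     # least-squares polynomial fit
    return np.polyval(coeffs, x)
```

For a 48-channel streamer, noisy per-channel position estimates are replaced by a smooth curve consistent with the physically continuous cable shape.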
Data processing of small samples is an important and valuable research problem in electronic equipment testing. Because it is difficult and complex to determine the probability distribution of small samples, it is difficult to use traditional probability theory to process the samples and assess the degree of uncertainty. Using grey relational theory and norm theory, this article proposes the grey distance information approach, which is based on the grey distance information quantity of a sample and the average grey distance information quantity of the samples. The definitions of the grey distance information quantity of a sample and the average grey distance information quantity of the samples, with their characteristics and algorithms, are introduced. The correlative problems, including the algorithm of the estimated value, the standard deviation, and the acceptance and rejection criteria for the samples and estimated results, are also addressed. Moreover, the information whitening ratio is introduced to select the weight algorithm and to compare the different samples. Several examples are given to demonstrate the application of the proposed approach. The examples show that the proposed approach, which makes no demand on the probability distribution of small samples, is feasible and effective.
The High Precision Magnetometer (HPM) on board the China Seismo-Electromagnetic Satellite (CSES) allows highly accurate measurement of the geomagnetic field; it includes FGM (Fluxgate Magnetometer) and CDSM (Coupled Dark State Magnetometer) probes. This article introduces the main processing method, algorithm, and processing procedure for the HPM data. First, the FGM and CDSM probes are calibrated according to ground sensor data. Then the FGM linear parameters are corrected in orbit by applying the absolute vector magnetic field correction algorithm using CDSM data. At the same time, the magnetic interference of the satellite is eliminated according to ground-satellite magnetic test results. Finally, according to the characteristics of the magnetic field direction in the low-latitude region, the transformation matrix between the FGM probe and the star sensor is calibrated in orbit to determine the correct direction of the magnetic field. Comparing the magnetic field data of the CSES and SWARM satellites over five continuous geomagnetically quiet days, the difference in measurements of the vector magnetic field is about 10 nT, which is within the uncertainty interval of geomagnetic disturbance.
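The "FGM linear parameters" referred to above are, in the generic fluxgate model, per-axis offsets plus a matrix combining scale factors and axis misalignments. A minimal sketch of applying such a linear correction is given below; the parameter values and function names are illustrative, not the HPM's actual calibration.

```python
import numpy as np

def correct_fgm(v, offsets, m):
    """Generic fluxgate linear correction B = M^{-1} (V - O).
    v: raw 3-axis readings (length-3); offsets: per-axis offsets O;
    m: 3x3 matrix of scale factors and misalignments."""
    return np.linalg.solve(m, np.asarray(v, dtype=float) - offsets)
```

In-orbit correction then amounts to re-estimating `offsets` and `m` so that the corrected vector magnitude matches the absolute scalar field from the CDSM.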
Low-field nuclear magnetic resonance (NMR) has been widely used in the petroleum industry, for example in well logging and laboratory rock core analysis. However, the signal-to-noise ratio is low due to the low magnetic field strength of NMR tools and the complex petrophysical properties of the detected samples. Suppressing the noise and highlighting the available NMR signals is very important for subsequent data processing. Most denoising methods are based on fixed mathematical transformations or hand-designed feature selectors to suppress noise characteristics, which may not perform well because they do not adapt to different noisy signals. In this paper, we propose a data-processing framework to improve the quality of low-field NMR echo data based on dictionary learning. Dictionary learning is a machine learning method based on redundancy and sparse representation theory. Available information in noisy NMR echo data can be adaptively extracted and reconstructed by dictionary learning. The advantages and effectiveness of the proposed method were verified with a number of numerical simulations, NMR core data analyses, and NMR logging data processing. The results show that dictionary learning can significantly improve the quality of NMR echo data with high noise levels and effectively improve the accuracy and reliability of inversion results.
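Dictionary learning alternates a sparse-coding step with a dictionary-update step. The sketch below shows only the sparse-coding (reconstruction) step via orthogonal matching pursuit, with a fixed DCT dictionary standing in for a learned one; it illustrates the sparse-representation principle the paper relies on, not the authors' algorithm.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of D,
    refitting the coefficients by least squares at each step."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef                      # update residual
    return idx, coef

# DCT-II dictionary (orthonormal columns) as a stand-in for a learned one.
n = 64
D = np.cos(np.pi * (np.arange(n)[:, None] + 0.5) * np.arange(n)[None, :] / n)
D /= np.linalg.norm(D, axis=0)
```

A noisy echo train would be approximated by its few dominant atoms, discarding the residual as noise; in true dictionary learning, the atoms themselves are additionally re-estimated from the data.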
A novel technique for automatic seismic data processing using both integral and local features of seismograms is presented in this paper. Here, the term "integral feature of seismograms" refers to features that depict the shape of the whole seismogram. However, unlike some previous efforts which completely abandon the DIAL approach (signal detection, phase identification, association, and event localization) and seek to use envelope cross-correlation to detect seismic events directly, our technique keeps following the DIAL approach; but in addition to detecting signals corresponding to individual seismic phases, it also detects continuous wave-trains and explores their features for phase-type identification and signal association. More concrete ideas about how to define wave-trains and combine them with various detections, as well as how to measure and utilize their features in seismic data processing, are elaborated in the paper. This approach has been applied in our routine data processing for years, and test results for a 16-day period using data from the Xinjiang seismic station network are presented. The automatic processing results have simultaneously low false-event and missed-event rates, showing that the new technique has good application prospects for improving automatic seismic data processing.
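The signal-detection stage of a DIAL pipeline is commonly built on an STA/LTA (short-term average over long-term average) trigger. The abstract does not state which detector the authors use, so the classic STA/LTA below is offered purely as an illustration of that stage.

```python
import numpy as np

def sta_lta(x, ns, nl):
    """Return the STA/LTA ratio for every sample where both windows are full.
    x: waveform samples; ns, nl: short- and long-window lengths (ns < nl)."""
    e = np.abs(x)
    c = np.concatenate(([0.0], np.cumsum(e)))     # prefix sums of |x|
    sta = (c[nl:] - c[nl - ns:-ns]) / ns          # short average ending at i
    lta = (c[nl:] - c[:-nl]) / nl                 # long average ending at i
    return sta / np.maximum(lta, 1e-12)           # guard against zero LTA
```

A phase arrival is declared where the ratio crosses a threshold; the wave-train detection described above would then group such triggers into continuous energy packets.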
How to design a multicast key management system with high performance is currently a hot issue. This paper applies the idea of hierarchical data processing to construct a common analytic model based on a directed logical key tree, and supplies two important metrics for this problem: re-keying cost and key storage cost. The paper gives the basic theory of hierarchical data processing and the analytic model of multicast key management based on a logical key tree. It is proved that the 4-ary tree has the best performance under these metrics. The key management problem is also investigated based on a user probability model, and two evaluation parameters for re-keying and key storage cost are given.
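For a full d-ary logical key tree with n members, a single membership change invalidates the keys on one root path (about log_d n of them), and each updated key must be re-encrypted for the d children of its node, giving roughly d·log_d(n) re-keying messages. This is a common textbook cost model, sketched below for illustration; the paper's exact metrics and probability weighting may differ.

```python
def rekey_cost(d, n):
    """Approximate re-keying message count for a full d-ary key tree
    with n members: d messages per updated key on the root path."""
    depth = 0
    while d ** depth < n:      # smallest depth with d**depth >= n leaves
        depth += 1
    return d * depth

# Compare candidate tree degrees for a fixed group size.
costs = {d: rekey_cost(d, 4096) for d in (2, 3, 4, 8, 16)}
```

Evaluating `costs` for several degrees is exactly the kind of comparison behind the paper's conclusion about the optimal tree degree.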
In the course of network-supported collaborative design, data processing plays a vital role. Much effort has been spent in this area, and many kinds of approaches have been proposed. Based on the relevant materials, this paper presents an extensible markup language (XML) based strategy for several important problems of data processing in network-supported collaborative design, such as the representation of the Standard for the Exchange of Product model data (STEP) with XML in product information expression, and the management of XML documents using a relational database. The paper gives a detailed exposition of how to clarify the mapping between the XML structure and the relational database structure, and how XML-QL queries can be translated into structured query language (SQL) queries. Finally, the structure of a data processing system based on XML is presented.
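One standard way to manage XML documents in a relational database is to "shred" element, attribute, and text nodes into a generic edge table, after which path queries become SQL joins. The sketch below shows that idea with a made-up schema and toy document; the paper's actual STEP-to-XML mapping is richer than this.

```python
import sqlite3
import xml.etree.ElementTree as ET

doc = "<part id='p1'><name>bolt</name><qty>4</qty></part>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node(id INTEGER, parent INTEGER, tag TEXT, text TEXT)")

def shred(elem, parent=None, counter=[0]):
    """Store each element as one row; mutable-default counter used for brevity."""
    counter[0] += 1
    nid = counter[0]
    conn.execute("INSERT INTO node VALUES (?,?,?,?)",
                 (nid, parent, elem.tag, (elem.text or "").strip()))
    for child in elem:
        shred(child, nid)
    return nid

shred(ET.fromstring(doc))

# The path query //part/qty, translated into SQL over the edge table:
row = conn.execute("""SELECT c.text FROM node c JOIN node p ON c.parent = p.id
                      WHERE p.tag = 'part' AND c.tag = 'qty'""").fetchone()
```

Translating XML-QL patterns to SQL follows the same scheme: each step of a path becomes a self-join on the `parent` column.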
To improve our understanding of the formation and evolution of the Moon, one of the payloads onboard the Chang'e-3 (CE-3) rover is the Lunar Penetrating Radar (LPR). This investigation is the first attempt to explore the lunar subsurface structure with high resolution by using ground penetrating radar. We have probed the subsurface to a depth of several hundred meters using LPR. In-orbit testing, data processing and preliminary results are presented. These observations have revealed the configuration of the regolith, whose thickness varies from about 4 m to 6 m. In addition, one layer of lunar rock, which is about 330 m deep and might have accumulated during the depositional hiatus of mare basalts, was detected.
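Radar echoes are recorded in two-way travel time, so depths such as the 4-6 m regolith thickness follow from an assumed relative permittivity of the medium: d = c·t / (2·sqrt(eps_r)). The value eps_r ≈ 3 below is a typical regolith figure used only for illustration, not the CE-3 team's adopted value.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def twt_to_depth(t_ns, eps_r=3.0):
    """Convert radar two-way travel time (nanoseconds) to depth (m)
    in a medium of relative permittivity eps_r."""
    t = t_ns * 1e-9
    return C * t / (2.0 * eps_r ** 0.5)
```

The same conversion with a basalt permittivity would place the deeper reflector reported above at its ~330 m depth.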
One of the most important project missions of the neutral beam injectors is the implementation of 100 s neutral beam injection (NBI) with high power energy to the plasma of the EAST superconducting tokamak. Correspondingly, it is necessary to construct a high-speed and reliable computer data processing system for processing experimental data, covering data acquisition, data compression and storage, data decompression and query, as well as data analysis. The implementation of the computer data processing application software (CDPS) for EAST NBI is presented in this paper in terms of its functional structure and system realization. The software is programmed in C and runs on the Linux operating system, based on the TCP network protocol and multi-threading technology. The hardware mainly includes an industrial control computer (IPC), a data server, PXI DAQ cards, and so on. This software has been applied to the EAST NBI system, and experimental results show that the CDPS serves EAST NBI very well.
In comparison with the ITRF2000 model, the ITRF2005 model represents a significant improvement in solution generation, datum definition and realization. However, these improvements cause a frame difference between the ITRF2000 and ITRF2005 models, which may impact GNSS data processing. To quantify this impact, the differences of the GNSS results obtained using the two models, including station coordinates, baseline length and horizontal velocity field, were analyzed. After transformation, the differences in position were at the millimeter level, and the differences in baseline length were less than 1 mm. The differences in the horizontal velocity fields decreased as the study area was reduced. For a large region, the differences in magnitude were less than 1 mm/a, with a systematic difference of approximately 2 degrees in direction, while for a medium-sized region, the differences in magnitude and direction were not significant.
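The "transformation" between two ITRF realizations referred to above is conventionally a seven-parameter (Helmert) similarity transformation in its small-angle form. A minimal sketch follows; the parameter values in the test are made up for the demonstration, not the published ITRF2000-to-ITRF2005 parameters.

```python
import numpy as np

def helmert(xyz, t, s, r):
    """Small-angle 7-parameter similarity transformation:
    X' = T + (1 + s) X + R X.
    xyz: (..., 3) coordinates (m); t: translation (m);
    s: scale (unitless, e.g. ppb * 1e-9); r: rotations (rad)."""
    rx, ry, rz = r
    rot = np.array([[0.0, -rz,  ry],
                    [ rz, 0.0, -rx],
                    [-ry,  rx, 0.0]])
    xyz = np.asarray(xyz, dtype=float)
    return xyz + t + s * xyz + xyz @ rot.T
```

Coordinate differences quoted "after transformation" are residuals left once this mapping has absorbed the systematic frame offset.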
The Chang'e-3 Visible and Near-infrared Imaging Spectrometer (VNIS) is one of the four payloads on the Yutu rover. After traversing the landing site during the first two lunar days, four different areas were detected, and Level 2A and 2B radiance data have been released to the scientific community. The released data have been processed by dark current subtraction, correction for the effect of temperature, radiometric calibration and geometric calibration. We emphasize approaches for reflectance analysis and mineral identification for in-situ analysis with VNIS. Then the preliminary spectral and mineralogical results from the landing site are derived. After comparing spectral data from VNIS with data collected by the Ma instrument and samples of mare that were returned by the Apollo program, all the reflectance data were found to have similar absorption features near 1000 nm, except lunar sample 71061. In addition, there is also a weak absorption feature between 1750-2400 nm in the VNIS data, but the slopes of VNIS and Ma reflectance at longer wavelengths are lower than those of data taken from samples of lunar mare. Spectral parameters such as band centers and integrated band depth ratios are used to analyze mineralogical features. The results show that detection points E and N205 are mixtures of high-Ca pyroxene and olivine, and the olivine content at point N205 is higher than that at point E, while detection points S3 and N203 are mainly olivine-rich. Since there are no obvious absorption features near 1250 nm, plagioclase is not directly identified at the landing site.
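Spectral parameters such as band depth are conventionally computed after continuum removal: a straight line is drawn between two shoulder wavelengths and the reflectance at the band center is compared against it. The shoulder and center wavelengths below are illustrative, not the VNIS team's exact choices.

```python
import numpy as np

def band_depth(wl, refl, left, right, center):
    """Depth of an absorption band after straight-line continuum removal:
    depth = 1 - R(center) / Rc(center), with the continuum Rc interpolated
    between the shoulder wavelengths `left` and `right`."""
    rl = np.interp(left, wl, refl)
    rr = np.interp(right, wl, refl)
    cont = rl + (rr - rl) * (center - left) / (right - left)
    return 1.0 - np.interp(center, wl, refl) / cont
```

An integrated band depth would sum such values across the band, and the ratio of the ~1000 nm to ~2000 nm integrated depths is what separates olivine-rich from pyroxene-rich points.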
The Extreme Ultraviolet Camera (EUVC) onboard the Chang'e-3 (CE-3) lander is used to observe the structure and dynamics of Earth's plasmasphere from the Moon. By detecting the resonance line emission of helium ions (He+) at 30.4 nm, the EUVC images the entire plasmasphere with a time resolution of 10 min and a spatial resolution of about 0.1 Earth radius (RE) in a single frame. We first present details about the data processing from EUVC and the data acquisition in the commissioning phase, and then report some initial results, which reflect the basic features of the plasmasphere well. The photon count and emission intensity of EUVC are consistent with previous observations and models, which indicates that the EUVC works normally and can provide high-quality data for future studies.
This paper applies grey system theory to error data processing of NC machine tools according to its characteristics. It presents the grey metabolism model of error data processing. The test method for the model requires only a small sample size. Practice has proved that the method is simple, the calculation is easy, and the results are exact.
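The grey metabolism model is built on the standard GM(1,1) procedure: accumulate the series, fit the whitened equation dx/dt + a·x = b by least squares, predict, and difference back; "metabolism" then rolls the window by replacing the oldest datum with the newest prediction. The sketch below shows the GM(1,1) core only, in its textbook form, which need not match the paper's exact variant.

```python
import numpy as np

def gm11_predict(x0, steps=1):
    """Textbook GM(1,1): returns the next `steps` predicted values of x0."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # accumulated (1-AGO) series
    z = 0.5 * (x1[:-1] + x1[1:])                 # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))  # difference back
    return x0_hat[-steps:]
```

Because the model is exponential at heart, it tracks the smooth drift components typical of machine-tool error series with very few data points.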
With the development of Laser-Induced Breakdown Spectroscopy (LIBS), increasing numbers of researchers have begun to focus on problems of its application. We are not satisfied merely with analyzing which elements are present in a sample, but are also eager to accomplish quantitative detection with LIBS. There are several ways to improve the limit of detection and stability, which are important for quantitative detection, especially of trace elements: increasing the laser energy and the resolution of the spectrometer, using a dual-pulse setup, evacuating the ablation environment, etc. All of these methods upgrade the hardware system, which is effective but expensive. So in this paper we establish the following spectrum data processing methods to improve trace element analysis: spectrum sifting, noise filtering, and peak fitting. There are smaller algorithms within these three method groups, which we introduce in detail. Finally, we discuss how these methods affect the results of trace element detection in an experiment analyzing the lead content in Chinese cabbage.
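Two of the three processing steps named above can be illustrated in minimal form: noise filtering as a moving-average smoother, and the peak-location step that precedes fitting as a thresholded local-maximum search. The window length and threshold are illustrative, not values from the paper.

```python
import numpy as np

def moving_average(y, w=5):
    """Simple moving-average noise filter (same-length output)."""
    return np.convolve(y, np.ones(w) / w, mode="same")

def pick_peaks(y, thresh):
    """Indices of local maxima above `thresh`, candidates for peak fitting."""
    y = np.asarray(y)
    core = (y[1:-1] > y[:-2]) & (y[1:-1] >= y[2:]) & (y[1:-1] > thresh)
    return np.flatnonzero(core) + 1
```

A Gaussian or Lorentzian profile would then be fitted around each picked index to extract line intensity for the calibration curve.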
An idea is presented about the development of a data processing and analysis system for ICF experiments, which is based on an object-oriented framework. The design and preliminary implementation of the data processing and analysis framework based on the ROOT system have been completed. Software for unfolding soft X-ray spectra has been developed to test the functions of this framework.
In this paper, a dynamic linear detecting method is used for data processing: a non-linear coefficient NL% is introduced, the non-linearity of the data is estimated continuously and dynamically, and the measurement is terminated when NL% exceeds a reference value (5%). This solves the problem caused by substrate depletion following the redox reaction in a portable blood sugar analyzer. In contrast to the conventional end-point method, the dynamic linear detecting method is based on multipoint data collection. Experiments measuring calibration glucose solutions at 8 concentrations from 50 mg/dl to 400 mg/dl were carried out with the analyzer developed by our group. A linear regression curve was obtained whose correlation coefficient was 0.9995 and whose residual was 2.8080. The obtained correlation, residual, and computational workload are all suitable for a portable blood sugar analyzer.
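The abstract does not spell out the formula for NL%. A common choice, used in the sketch below, is the maximum residual from the least-squares line expressed as a percentage of the fitted signal span; the window then grows point by point until NL% crosses the 5% limit, marking the end of the linear (pre-depletion) phase. All names and the NL% definition here are assumptions for illustration.

```python
def fit_line(ts, ys):
    """Ordinary least-squares line; returns (intercept, slope)."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    sxx = sum((t - mt) ** 2 for t in ts)
    sxy = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    b = sxy / sxx
    return my - b * mt, b

def nl_percent(ts, ys):
    """Max residual from the LS line as a percentage of the signal span."""
    a, b = fit_line(ts, ys)
    span = max(ys) - min(ys)
    return 100.0 * max(abs(y - (a + b * t)) for t, y in zip(ts, ys)) / span

def linear_window(ts, ys, limit=5.0):
    """Grow the window point by point; return how many leading points stay linear."""
    for n in range(3, len(ts) + 1):
        if nl_percent(ts[:n], ys[:n]) > limit:
            return n - 1
    return len(ts)
```

Only the slope of the detected linear window is then used for the glucose reading, which is why the method is robust to late-time substrate depletion.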
The accuracy of the coefficient A_isp is related to the reference phase chosen during analysis. The criterion for choosing the reference phase that minimizes the error of A_isp was deduced. Optimum results can be obtained by using the method of least squares if the number of samples for analysis is greater than the number of phases in the samples. The procedure presented here is satisfactory for ordinary phase analysis.
To improve the detection rate and lower the false positive rate of intrusion detection systems, dimensionality reduction is widely used. For this purpose, a data processing (DP) approach with a support vector machine (SVM) was built. Different from traditionally identifying redundant data before purging the audit data by expert knowledge, or utilizing different subsets of the available 41 connection attributes to build a classifier, the proposed strategy first removes the attributes whose correlation with another attribute exceeds a threshold, and then treats two sequential samples as one class, removing either of the two, when their similarity exceeds a threshold. Performance experiments showed that the DP-and-SVM strategy is superior to other existing data reduction strategies (e.g., audit reduction, rule extraction, and feature selection), and that the detection model based on DP and SVM outperforms those based on data mining, soft computing, and hierarchical principal component analysis neural networks.
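The attribute-pruning step described above (drop an attribute when its correlation with an already-kept attribute exceeds a threshold) can be sketched directly; the 0.95 threshold is illustrative, since the abstract does not give the paper's value.

```python
import numpy as np

def prune_correlated(X, thresh=0.95):
    """X: (samples, attributes). Greedily keep columns whose absolute
    correlation with every already-kept column stays at or below `thresh`;
    returns the indices of the kept attribute columns."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= thresh for k in kept):
            kept.append(j)
    return kept
```

The analogous sample-pruning step would apply the same greedy test to pairwise similarity between consecutive connection records before SVM training.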
Funding: National Natural Science Foundation of China (No. 42022025).
Funding: the National Natural Science Foundation of China under contract No. 42206033; the Marine Geological Survey Program of China Geological Survey under contract No. DD20221706; the Research Foundation of the National Engineering Research Center for Gas Hydrate Exploration and Development, Innovation Team Project, under contract No. 2022GMGSCXYF41003; and the Scientific Research Fund of the Second Institute of Oceanography, Ministry of Natural Resources, under contract No. JG2006.
Funding: supported by the National Key R&D Program of China (No. 2016YFC0303900), the Laoshan Laboratory (Nos. MGQNLM-KF201807, LSKJ202203604), and the National Natural Science Foundation of China (No. 42106072).
Funding: supported by the National Key Research and Development Program of China from MOST (2016YFB0501503)
Abstract: The High Precision Magnetometer (HPM) on board the China Seismo-Electromagnetic Satellite (CSES) allows highly accurate measurement of the geomagnetic field; it includes FGM (Fluxgate Magnetometer) and CDSM (Coupled Dark State Magnetometer) probes. This article introduces the main processing method, algorithm, and procedure for the HPM data. First, the FGM and CDSM probes are calibrated using ground sensor data. Then the FGM linear parameters are corrected in orbit by applying an absolute vector magnetic field correction algorithm based on CDSM data, while the magnetic interference of the satellite is eliminated according to ground-satellite magnetic test results. Finally, according to the characteristics of the magnetic field direction in the low-latitude region, the transformation matrix between the FGM probe and the star sensor is calibrated in orbit to determine the correct direction of the magnetic field. Comparing the magnetic field data of the CSES and SWARM satellites over five consecutive geomagnetically quiet days, the difference in vector magnetic field measurements is about 10 nT, which is within the uncertainty interval of geomagnetic disturbance.
Funding: supported by the Science Foundation of China University of Petroleum, Beijing (Grant Number ZX20210024), the Chinese Postdoctoral Science Foundation (Grant Number 2021M700172), the Strategic Cooperation Technology Projects of CNPC and CUP (Grant Number ZLZX2020-03), and the National Natural Science Foundation of China (Grant Number 42004105)
Abstract: Low-field nuclear magnetic resonance (NMR) has been widely used in the petroleum industry, for example in well logging and laboratory rock core analysis. However, the signal-to-noise ratio is low owing to the low magnetic field strength of NMR tools and the complex petrophysical properties of the detected samples. Suppressing the noise and highlighting the available NMR signal is very important for subsequent data processing. Most denoising methods are based on fixed mathematical transformations or hand-designed feature selectors, which may not perform well because they do not adapt to different noisy signals. In this paper, we propose a data processing framework for improving the quality of low-field NMR echo data based on dictionary learning, a machine learning method built on redundancy and sparse representation theory. Available information in noisy NMR echo data can be adaptively extracted and reconstructed by dictionary learning. The advantages and effectiveness of the proposed method were verified with numerical simulations, NMR core data analyses, and NMR logging data processing. The results show that dictionary learning can significantly improve the quality of NMR echo data with high noise levels and effectively improve the accuracy and reliability of inversion results.
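A minimal sketch of dictionary-learning denoising in the spirit described above, using scikit-learn's `MiniBatchDictionaryLearning` on synthetic exponential echo decays; the signal model, noise level, and all parameter choices are invented, not the paper's:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Synthetic NMR-like echo trains: single-exponential decays with varied
# amplitude and T2, plus Gaussian noise (all values illustrative).
rng = np.random.default_rng(0)
t = np.arange(64, dtype=float)
amps = rng.uniform(0.5, 1.5, 300)
t2s = rng.uniform(10.0, 50.0, 300)
clean = amps[:, None] * np.exp(-t[None, :] / t2s[:, None])
noisy = clean + rng.normal(0.0, 0.05, clean.shape)

# Learn a small redundant dictionary from the noisy echoes, then reconstruct
# each echo sparsely from a few atoms (orthogonal matching pursuit).
dico = MiniBatchDictionaryLearning(n_components=8, alpha=0.5, random_state=0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=3)
codes = dico.fit(noisy).transform(noisy)
denoised = codes @ dico.components_

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_noisy, mse_denoised)
```

Because the learned atoms capture the low-dimensional structure of the decays, the sparse reconstruction discards most of the noise energy.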
Abstract: A novel technique for automatic seismic data processing using both integral and local features of seismograms is presented in this paper. Here, the term "integral feature" refers to a feature that depicts the shape of the whole seismogram. Unlike some previous efforts that completely abandon the DIAL approach (signal detection, phase identification, association, and event localization) and seek to detect seismic events directly by envelope cross-correlation, our technique keeps following the DIAL approach; but in addition to detecting signals corresponding to individual seismic phases, it also detects continuous wave-trains and exploits their features for phase-type identification and signal association. Concrete ideas about how to define wave-trains and combine them with various detections, as well as how to measure and utilize their features in seismic data processing, are elaborated in the paper. This approach has been applied in our routine data processing for years, and test results for a 16-day period using data from the Xinjiang seismic station network are presented. The automatic processing results have simultaneously low false-event and missed-event rates, showing that the new technique has good application prospects for improving automatic seismic data processing.
基金Supported by the National High-Technology Re-search and Development Programof China(2001AA115300) the Na-tional Natural Science Foundation of China (69874038) ,the Nat-ural Science Foundation of Liaoning Province(20031018)
Abstract: How to design a multicast key management system with high performance is currently a hot issue. This paper applies the idea of hierarchical data processing to construct a common analytic model based on a directed logical key tree and supplies two important metrics for this problem: re-keying cost and key storage cost. The paper gives the basic theory of hierarchical data processing and the analysis model for multicast key management based on a logical key tree. It is proved that the 4-ary tree has the best performance under these metrics. The key management problem is also investigated based on a user probability model, and two evaluation parameters for re-keying and key storage costs are given.
Funding: supported by the National High Technology Research and Development Program of China (863 Program) (No. AA420060)
Abstract: In network-supported collaborative design, data processing plays a vital role. Much effort has been spent in this area, and many kinds of approaches have been proposed. Based on the relevant literature, this paper presents an extensible markup language (XML) based strategy for several important data processing problems in network-supported collaborative design, such as representing the Standard for the Exchange of Product model data (STEP) with XML in product information expression and managing XML documents using a relational database. The paper gives a detailed exposition of how to clarify the mapping between the XML structure and the relational database structure, and how XML-QL queries can be translated into structured query language (SQL) queries. Finally, the structure of an XML-based data processing system is presented.
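A toy sketch of the XML-to-relational mapping and query translation described above, using only the Python standard library; the schema and element names are invented for illustration:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical product fragment: each <part> element maps to one table row,
# and each child element maps to a column of that row.
xml_doc = """<parts>
  <part><id>P1</id><name>shaft</name><material>steel</material></part>
  <part><id>P2</id><name>gear</name><material>bronze</material></part>
</parts>"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE part (id TEXT PRIMARY KEY, name TEXT, material TEXT)")
for part in ET.fromstring(xml_doc).findall("part"):
    row = {child.tag: child.text for child in part}
    conn.execute("INSERT INTO part (id, name, material) VALUES (?, ?, ?)",
                 (row["id"], row["name"], row["material"]))

# An XML-QL-style query "parts whose material is steel", translated to SQL.
steel = conn.execute("SELECT name FROM part WHERE material = 'steel'").fetchall()
print(steel)  # [('shaft',)]
```

Real STEP-to-XML mappings are far richer (nested entities, references), but the element-to-row, path-predicate-to-WHERE pattern is the same.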
Funding: Supported by the National Natural Science Foundation of China
Abstract: To improve our understanding of the formation and evolution of the Moon, one of the payloads on board the Chang'e-3 (CE-3) rover is the Lunar Penetrating Radar (LPR). This investigation is the first attempt to explore the lunar subsurface structure at high resolution using ground penetrating radar. We have probed the subsurface to a depth of several hundred meters using the LPR. In-orbit testing, data processing, and preliminary results are presented. The observations reveal the configuration of the regolith, whose thickness varies from about 4 m to 6 m. In addition, one layer of lunar rock, which is about 330 m deep and might have accumulated during a depositional hiatus of the mare basalts, was detected.
Funding: supported by the National Natural Science Foundation of China (No. 11075183)
Abstract: One of the most important missions of the neutral beam injector project is the implementation of 100 s neutral beam injection (NBI) with high power to the plasma of the EAST superconducting tokamak. Correspondingly, it is necessary to construct a high-speed and reliable computer data processing system for handling experimental data, covering data acquisition, data compression and storage, data decompression and query, and data analysis. The implementation of the computer data processing application software (CDPS) for EAST NBI is presented in this paper in terms of its functional structure and system realization. The software is programmed in C and runs on the Linux operating system, based on the TCP network protocol and multi-threading technology. The hardware mainly includes an industrial control computer (IPC), a data server, and PXI DAQ cards. The software has been applied to the EAST NBI system, and experimental results show that the CDPS serves EAST NBI very well.
Funding: supported by the Special Earthquake Research Project granted by the China Earthquake Administration (201308009)
Abstract: In comparison with the ITRF2000 model, the ITRF2005 model represents a significant improvement in solution generation, datum definition, and realization. However, these improvements cause a frame difference between the ITRF2000 and ITRF2005 models, which may affect GNSS data processing. To quantify this impact, the differences between the GNSS results obtained using the two models, including station coordinates, baseline lengths, and horizontal velocity fields, were analyzed. After transformation, the differences in position were at the millimeter level, and the differences in baseline length were less than 1 mm. The differences in the horizontal velocity fields decreased as the study area was reduced. For a large region, the differences in magnitude were less than 1 mm/a, with a systematic difference of approximately 2 degrees in direction, while for a medium-sized region the differences in magnitude and direction were not significant.
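Comparisons between ITRF realizations typically rest on a seven-parameter Helmert transformation. A small-angle sketch is given below; the parameter values are illustrative only, not the published ITRF2000-ITRF2005 set:

```python
import numpy as np

def helmert_transform(xyz, t, d, r):
    """Small-angle 7-parameter Helmert transformation.
    xyz: (N, 3) positions in metres; t: translation vector (m);
    d: scale factor (unitless); r: (rx, ry, rz) rotations in radians."""
    rx, ry, rz = r
    R = np.array([[0.0, -rz,  ry],
                  [ rz, 0.0, -rx],
                  [-ry,  rx, 0.0]])
    return xyz + t + d * xyz + xyz @ R.T

station = np.array([[6378137.0, 0.0, 0.0]])       # a point on the equator
# A pure 0.1 m translation along x, and a pure 1 ppb scale change.
shifted = helmert_transform(station, np.array([0.1, 0.0, 0.0]), 0.0, (0.0, 0.0, 0.0))
scaled = helmert_transform(station, np.zeros(3), 1e-9, (0.0, 0.0, 0.0))
print(shifted, scaled)
```

Note that a 1 ppb scale difference already moves an Earth-radius-scale coordinate by millimetres, which is why the residual frame differences quoted above sit at the millimetre level.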
Funding: Supported by the National Natural Science Foundation of China
Abstract: The Chang'e-3 Visible and Near-infrared Imaging Spectrometer (VNIS) is one of the four payloads on the Yutu rover. After traversing the landing site during the first two lunar days, four different areas were detected, and Level 2A and 2B radiance data have been released to the scientific community. The released data have been processed by dark current subtraction, correction for the effect of temperature, radiometric calibration, and geometric calibration. We emphasize approaches for reflectance analysis and mineral identification for in-situ analysis with VNIS, and then derive preliminary spectral and mineralogical results for the landing site. Comparing spectral data from VNIS with data collected by the Ma instrument and with mare samples returned by the Apollo program, all the reflectance data show similar absorption features near 1000 nm except lunar sample 71061. There is also a weak absorption feature between 1750 and 2400 nm in the VNIS data, but the slopes of the VNIS and Ma reflectance at longer wavelengths are lower than those of the lunar mare samples. Spectral parameters such as band centers and integrated band depth ratios are used to analyze mineralogical features. The results show that detection points E and N205 are mixtures of high-Ca pyroxene and olivine, with a higher olivine content at point N205 than at point E, whereas detection points S3 and N203 are mainly olivine-rich. Since there are no obvious absorption features near 1250 nm, plagioclase is not directly identified at the landing site.
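Band-depth style parameters of the kind used above can be illustrated with a simple continuum-removal calculation; the wavelengths and reflectances below are invented, not VNIS data:

```python
import numpy as np

# Toy reflectance spectrum with an absorption feature near 1000 nm.
wavelengths = np.array([750.0, 900.0, 1000.0, 1100.0, 1550.0])   # nm
reflectance = np.array([0.20, 0.17, 0.15, 0.17, 0.26])

def band_depth(wl, refl, left, center, right):
    # Straight-line continuum between the two shoulders, evaluated at center;
    # band depth = 1 - R_band / R_continuum (continuum removal).
    r_left, r_right = np.interp([left, right], wl, refl)
    r_cont = np.interp(center, [left, right], [r_left, r_right])
    r_band = np.interp(center, wl, refl)
    return 1.0 - r_band / r_cont

depth = band_depth(wavelengths, reflectance, 750.0, 1000.0, 1550.0)
print(round(depth, 4))
```

Ratios of such depths at the 1000 nm and 2000 nm features are what drive the pyroxene-versus-olivine discrimination discussed in the abstract.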
Abstract: The Extreme Ultraviolet Camera (EUVC) on board the Chang'e-3 (CE-3) lander is used to observe the structure and dynamics of Earth's plasmasphere from the Moon. By detecting the resonance line emission of helium ions (He+) at 30.4 nm, the EUVC images the entire plasmasphere with a time resolution of 10 min and a spatial resolution of about 0.1 Earth radius (RE) in a single frame. We first present details of the EUVC data processing and the data acquisition in the commissioning phase, and then report some initial results, which reflect the basic features of the plasmasphere well. The photon count and emission intensity of the EUVC are consistent with previous observations and models, indicating that the EUVC works normally and can provide high-quality data for future studies.
Abstract: This paper applies grey system theory to error data processing for NC machine tools according to the characteristics of the data. It presents a grey metabolism model for error data processing. The test method for the model requires only a small sample capacity. Practice has proved that the method is simple, the calculation is easy, and the results are accurate.
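The grey metabolism model is a rolling variant of the basic GM(1,1) grey model, refitted as each new data point arrives. A sketch of the underlying GM(1,1) fit and one-step forecast follows (standard formulation, not the paper's exact implementation):

```python
import numpy as np

def gm11_predict(x0, steps=1):
    # Basic GM(1,1) grey model: accumulate the series, fit the grey
    # differential equation by least squares, forecast, then difference back.
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                 # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])        # back to original scale
    x0_hat[0] = x0[0]
    return x0_hat

series = [2.0, 2.2, 2.42, 2.662, 2.9282]   # geometric growth, ratio 1.1
forecast = gm11_predict(series, steps=1)[-1]
print(forecast)
```

The "metabolism" version would now append the newest measurement, drop the oldest, and refit, so the model tracks slowly drifting machine-tool errors with a small, fixed sample capacity.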
Funding: supported by the National High-Tech R&D Program (863 Program), China (No. 2013AA102402)
Abstract: With the development of Laser Induced Breakdown Spectroscopy (LIBS), increasing numbers of researchers have begun to focus on application problems. We are no longer satisfied with analyzing which elements are present in a sample; we also aim to accomplish quantitative detection with LIBS. There are several means to improve the limit of detection and stability, which are important for quantitative detection, especially of trace elements: increasing the laser energy and the resolution of the spectrometer, using a dual-pulse setup, evacuating the ablation environment, etc. All of these approaches upgrade the hardware system, which is effective but expensive. We therefore establish the following spectrum data processing methods to improve trace element analysis in this paper: spectrum sifting, noise filtering, and peak fitting. Each of these three method groups contains smaller algorithms, which we introduce in detail. Finally, we discuss how these methods affect the results of trace element detection in an experiment analyzing the lead content of Chinese cabbage.
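The peak-fitting step can be sketched as a Gaussian fit to a noisy emission line. The line position, intensities, and noise below are simulated, not measured LIBS data (405.78 nm is used as a Pb-like line position for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, base):
    # Gaussian line profile on a constant background.
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + base

# Simulated spectrum segment around the line, with shot-like noise.
rng = np.random.default_rng(1)
wl = np.linspace(405.0, 406.5, 120)
noisy = gaussian(wl, 1200.0, 405.78, 0.05, 100.0) + rng.normal(0.0, 20.0, wl.size)

# Fit with a rough initial guess; the fitted area/height feeds the
# calibration curve used for quantitative analysis.
popt, _ = curve_fit(gaussian, wl, noisy, p0=[1000.0, 405.7, 0.1, 90.0])
amp_fit, mu_fit = popt[0], popt[1]
print(mu_fit, amp_fit)
```

Fitting the whole profile rather than reading a single peak channel is what makes the extracted intensity robust to the noise the filtering step cannot remove.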
基金This project supported by the National High-Tech Research and Development Plan (863-804-3)
Abstract: This paper presents the development of a data processing and analysis system for ICF experiments, based on an object-oriented framework. The design and preliminary implementation of the framework, built on the ROOT system, have been completed. Software for unfolding soft X-ray spectra has been developed to test the functions of this framework.
Abstract: In this paper, a dynamic linear detecting method is used for data processing in a portable blood sugar analyzer: a non-linearity coefficient NL% is introduced, the non-linearity of the data is estimated continuously and dynamically, and the measurement is terminated when NL% exceeds a reference value (5%). This solves the problem caused by substrate depletion following the redox reaction. In contrast to the conventional end-point method, the dynamic linear detecting method is based on multipoint data collection. Experiments measuring calibration glucose solutions at 8 concentrations from 50 mg/dl to 400 mg/dl were carried out with the analyzer developed by our group. The linear regression curve obtained had a correlation coefficient of 0.9995 and a residual of 2.8080. The correlation, residual, and computational workload are all suitable for a portable blood sugar analyzer.
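The abstract does not give the exact definition of NL%, so the following is one plausible reading, labeled as such: keep accumulating points, fit a line, and stop once the newest point deviates from the fit by more than the 5% reference value:

```python
import numpy as np

def dynamic_linear_window(t, y, nl_limit=5.0, min_points=3):
    # Hypothetical NL% check (not the paper's exact formula): relative
    # deviation of the newest point from the running straight-line fit.
    for n in range(min_points, len(t) + 1):
        slope, intercept = np.polyfit(t[:n], y[:n], 1)
        predicted = slope * t[n - 1] + intercept
        nl = 100.0 * abs(y[n - 1] - predicted) / abs(predicted)
        if nl > nl_limit:
            return n - 1          # number of points still in the linear regime
    return len(t)

t = np.arange(10, dtype=float)
y = 2.0 * t + 1.0
y[7:] -= np.array([1.0, 3.0, 6.0])   # substrate depletion bends the curve
n_linear = dynamic_linear_window(t, y)
print(n_linear)
```

Stopping as soon as the curve bends keeps only the multipoint data from the linear regime, which is the advantage over the fixed end-point reading.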
Abstract: The accuracy of the coefficient A_(isp) is related to the reference phase chosen during analysis. A criterion for choosing the reference phase that minimizes the error of A_(isp) is deduced. Optimum results can be obtained using the method of least squares if the number of samples for analysis exceeds the number of phases in the samples. The procedure presented here is satisfactory for ordinary phase analysis.
基金The National Natural Science Foundation ofChina (No.60672049)
Abstract: To improve the detection rate and lower the false positive rate of intrusion detection systems, dimensionality reduction is widely used. For this purpose, a data processing (DP) scheme with a support vector machine (SVM) was built. Unlike traditional approaches, which identify redundant data by expert knowledge before purging the audit data, or which build a classifier from various subsets of the 41 available connection attributes, the proposed strategy first removes any attribute whose correlation with another attribute exceeds a threshold, and then classifies two sequential samples as one class and removes either of them when their similarity exceeds a threshold. Performance experiments showed that the DP-with-SVM strategy is superior to other existing data reduction strategies (e.g., audit reduction, rule extraction, and feature selection), and that the detection model based on DP and SVM outperforms those based on data mining, soft computing, and hierarchical principal component analysis neural networks.
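The first reduction step, removing attributes whose correlation with an already kept attribute exceeds a threshold, can be sketched as follows (the threshold and data are illustrative):

```python
import numpy as np

def drop_correlated_attributes(X, threshold=0.95):
    # Greedy correlation filter: keep an attribute only if its absolute
    # correlation with every previously kept attribute stays at or below
    # the threshold. Returns indices of the retained columns.
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep

# Three toy "connection attributes": b nearly duplicates a, c is independent.
rng = np.random.default_rng(2)
a = rng.normal(size=200)
b = a + rng.normal(scale=0.01, size=200)
c = rng.normal(size=200)
X = np.column_stack([a, b, c])
print(drop_correlated_attributes(X))
```

The second step (merging near-identical consecutive samples) is the same idea applied row-wise with a similarity measure instead of a correlation matrix.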