At present, the acquisition of seismic data is developing toward high-precision and high-density methods. However, complex natural environments and cultural factors in many exploration areas make uniform, dense acquisition difficult to achieve, so complete seismic data cannot be collected. Data reconstruction is therefore required during processing to ensure imaging accuracy. Deep learning, a rapidly developing field, offers clear advantages in feature extraction and modeling. In this study, a convolutional neural network (CNN) deep learning algorithm is applied to seismic data reconstruction. Based on the CNN algorithm and the characteristics of seismic data acquisition, two training strategies, supervised and unsupervised learning, are designed to reconstruct sparsely acquired seismic records. First, a supervised learning strategy is proposed for labeled data, in which the complete seismic data are segmented as the input of the training set and randomly resampled before each training pass, thereby increasing the number of samples and the richness of features. Second, an unsupervised learning strategy based on large samples is proposed for unlabeled data, in which a rolling segmentation method updates the (pseudo) labels and training parameters during training. Reconstruction tests on simulated and field data show that the CNN-based deep learning algorithm achieves better reconstruction quality and higher accuracy than compressed sensing based on the curvelet transform.
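As a rough illustration of the supervised strategy described above, the sketch below randomly crops patches from a complete gather and decimates traces to form (input, label) pairs before every epoch. The patch size, keep ratio, and synthetic gather are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sample_training_pairs(gather, n_patches=64, patch=(64, 64), keep_ratio=0.5, seed=None):
    """Randomly crop patches from a complete gather and decimate their traces.

    Each decimated patch is a network input; the complete patch is its label.
    Re-drawing the patches and masks before every epoch increases the number
    of samples and the richness of features, as described above.
    """
    rng = np.random.default_rng(seed)
    nt, nx = gather.shape
    pt, px = patch
    inputs, labels = [], []
    for _ in range(n_patches):
        t0 = rng.integers(0, nt - pt + 1)
        x0 = rng.integers(0, nx - px + 1)
        label = gather[t0:t0 + pt, x0:x0 + px]
        keep = rng.random(px) < keep_ratio          # randomly kept traces
        inputs.append(label * keep[np.newaxis, :])  # zero out the missing traces
        labels.append(label.copy())
    return np.stack(inputs), np.stack(labels)

# toy complete gather: one dipping event plus weak noise
nt, nx = 256, 128
t, x = np.arange(nt)[:, None], np.arange(nx)[None, :]
gather = np.sin(2 * np.pi * (t - 0.8 * x) / 40.0) + 0.05 * np.random.randn(nt, nx)
X, Y = sample_training_pairs(gather, seed=0)
print(X.shape, Y.shape)   # (64, 64, 64) for both
```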
Irregular seismic data cause problems for multi-trace processing algorithms and degrade processing quality. We introduce the Projection onto Convex Sets (POCS) image restoration method into seismic data reconstruction to interpolate irregularly missing traces. For entirely dead traces, we transfer the POCS iterative reconstruction from the time domain to the frequency domain, which saves computational cost because forward and inverse Fourier transforms over time are not needed in every iteration. In each iteration, the choice of the threshold parameter strongly affects reconstruction efficiency. In this paper, we design two threshold models to reconstruct irregularly missing seismic data. The experimental results show that, for the same reconstruction result, an exponential threshold greatly reduces the number of iterations and improves reconstruction efficiency compared with a linear threshold. We also analyze the anti-noise and anti-alias ability of the POCS reconstruction method. Finally, theoretical model tests and real-data examples indicate that the proposed method is efficient and applicable.
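A minimal sketch of the POCS interpolation loop with the two threshold schedules follows; it hard-thresholds in the 2-D Fourier (f-k) domain and re-inserts the observed traces at every iteration. The schedule constants and the FFT-based transform are assumptions for illustration, not the exact models designed in the paper.

```python
import numpy as np

def pocs_reconstruct(data, mask, n_iter=50, t_min_ratio=1e-3, schedule="exp"):
    """POCS interpolation with hard thresholding in the 2-D Fourier (f-k) domain.

    data : gather with missing traces set to zero, shape (nt, nx)
    mask : 1.0 on live traces, 0.0 on dead traces (one value per trace)
    The threshold decays from the largest observed coefficient magnitude to a
    small fraction of it, either linearly or exponentially; the exponential
    schedule usually reaches the same result in far fewer iterations.
    """
    x = data.copy()
    t_max = np.abs(np.fft.fft2(data)).max()
    t_min = t_min_ratio * t_max
    for k in range(n_iter):
        frac = k / (n_iter - 1)
        tau = t_max * (t_min / t_max) ** frac if schedule == "exp" \
            else t_max - (t_max - t_min) * frac
        spec = np.fft.fft2(x)
        spec[np.abs(spec) < tau] = 0.0               # keep only the strong coefficients
        rec = np.real(np.fft.ifft2(spec))
        x = data * mask + rec * (1.0 - mask)         # project back onto the observed traces
    return x
```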
Seismic data contain random noise and are affected by irregular subsampling. At present, most data reconstruction methods are carried out separately from noise suppression, and most are not well suited to noisy data. In this paper, we choose the multiscale, multidirectional 2D curvelet transform to perform simultaneous data reconstruction and noise suppression of 3D seismic data. We introduce the POCS algorithm, an exponentially decreasing square-root threshold, and a soft-threshold operator to interpolate the data at each time slice, together with a weighting strategy that reduces the noise in the reconstructed data. On this basis, a 3D simultaneous data reconstruction and noise suppression method based on the curvelet transform is proposed. Compared with reconstruction followed by denoising and with the Fourier transform, the proposed method is more robust and effective. It has important implications for data acquisition in complex areas and for reconstructing missing traces.
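The building blocks named above (soft-threshold operator, exponentially decreasing square-root schedule, and weighted re-insertion of observed samples) can be sketched generically as below. The FFT stands in for the curvelet transform, and the weighting factor `a` is an illustrative assumption.

```python
import numpy as np

def soft_threshold(c, tau):
    """Soft-threshold operator for (complex) transform coefficients."""
    mag = np.abs(c)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-30)) * c, 0.0)

def sqrt_exp_threshold(t_max, t_min, k, n_iter):
    """Exponentially decreasing square-root threshold schedule."""
    return t_max * np.exp(-np.sqrt(k / (n_iter - 1)) * np.log(t_max / t_min))

def recon_denoise_step(x, data, mask, tau, a=0.8,
                       forward=np.fft.fft2, inverse=np.fft.ifft2):
    """One simultaneous reconstruction/denoising iteration.

    The weighting factor a < 1 re-inserts only part of the noisy observed
    samples, so noise on live traces is attenuated while dead traces are
    interpolated from the thresholded transform coefficients.
    """
    rec = np.real(inverse(soft_threshold(forward(x), tau)))
    return a * data * mask + (1.0 - a) * rec * mask + rec * (1.0 - mask)
```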
Traditional seismic data sampling follows the Nyquist sampling theorem. In this paper, we introduce the theory of compressive sensing (CS), which breaks through the limitations of the Nyquist theorem: random undersampling renders the coherent aliases of regular undersampling as harmless incoherent random noise, effectively turning the reconstruction problem into a much simpler denoising problem. We introduce the projections onto convex sets (POCS) algorithm into the data reconstruction process, apply an exponentially decaying threshold parameter in the iterations, and modify the traditional reconstruction flow, which performs forward and inverse transforms in both the time and space domains, into a new scheme that transforms only in the space domain. The proposed method uses less computer memory and improves computational speed. We also analyze its anti-noise and anti-aliasing ability and compare 2D and 3D data reconstruction. Theoretical models and real data show that the proposed method is effective and of practical importance, as it can reconstruct missing traces and reduce the cost of complex data acquisition.
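The contrast between regular and random undersampling can be reproduced with a few lines of numpy; the mask construction below is a generic sketch, and the keep ratio and gather are placeholders.

```python
import numpy as np

def decimation_masks(nx, keep_ratio=0.5, seed=0):
    """Regular vs. random trace-decimation masks over nx traces."""
    rng = np.random.default_rng(seed)
    step = int(round(1.0 / keep_ratio))
    regular = np.zeros(nx)
    regular[::step] = 1.0
    random_ = np.zeros(nx)
    random_[rng.choice(nx, int(nx * keep_ratio), replace=False)] = 1.0
    return regular, random_

# Comparing the f-k spectra of the two decimated gathers shows coherent
# aliases for regular undersampling and weak incoherent noise for random
# undersampling, which is what makes the thresholding (denoising) step work:
#   spec_reg = np.abs(np.fft.fft2(gather * regular[None, :]))
#   spec_rnd = np.abs(np.fft.fft2(gather * random_[None, :]))
```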
Seismic data typically contain randomly missing traces because of obstacles and economic restrictions, which affects subsequent processing and interpretation. Seismic data recovery can be expressed as a low-rank matrix approximation problem by assuming a low-rank structure for the complete seismic data in the frequency-space (f-x) domain. The nuclear norm minimization (NNM) approach (the sum of singular values) treats all singular values equally, yielding a solution that deviates from the optimum. The log-sum majorization-minimization (LSMM) approach instead uses the nonconvex log-sum function as a rank surrogate for seismic data interpolation, which is highly accurate but time-consuming. This study therefore proposes an efficient nonconvex reconstruction model based on the nonconvex Geman function (the nonconvex Geman low-rank (NCGL) model), which gives a tighter approximation of the original rank function. Without introducing additional parameters, the nonconvex problem is solved using Karush-Kuhn-Tucker condition theory. Experiments on synthetic and field data demonstrate that the proposed NCGL approach achieves a higher signal-to-noise ratio than the singular value thresholding method based on NNM and the projection onto convex sets method based on a data-driven threshold model, and higher reconstruction efficiency than the singular value thresholding and LSMM methods.
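The difference between the NNM baseline and a nonconvex Geman-weighted shrinkage can be seen in how each treats the singular values; a minimal sketch follows. The iteration, threshold, and gamma are illustrative assumptions, not the paper's KKT-based solver.

```python
import numpy as np

def svt_shrink(M, tau):
    """Nuclear-norm (NNM) step: soft-threshold every singular value equally."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def geman_shrink(M, tau, gamma=1.0):
    """Nonconvex step: scale the threshold by the Geman penalty's derivative
    gamma / (s + gamma)**2, so large singular values are shrunk much less."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = gamma / (s + gamma) ** 2
    return (U * np.maximum(s - tau * w, 0.0)) @ Vt

def low_rank_interpolate(slice_fx, mask, shrink, n_iter=100, tau=1.0):
    """Iterative matrix completion of one f-x frequency slice (illustrative)."""
    x = slice_fx.copy()
    for _ in range(n_iter):
        x = shrink(x, tau)
        x = slice_fx * mask + x * (1.0 - mask)   # keep the observed entries
    return x
```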
Seismic data reconstruction is an essential and fundamental step in the seismic data processing workflow, and it is of profound significance for improving migration imaging quality, multiple suppression, and seismic inversion accuracy. Regularization methods play a central role in solving the underdetermined inverse problem of seismic data reconstruction. In this paper, a novel regularization approach, the low-dimensional manifold model (LDMM), is proposed for reconstructing missing seismic data. Our work relies on the fact that seismic patches always occupy a low-dimensional manifold. Specifically, we use the dimension of the seismic patch manifold as a regularization term and reconstruct the missing data by enforcing low dimensionality on this manifold. The crucial step of the proposed method is to compute the dimension of the patch manifold. To this end, we adopt an efficient dimensionality calculation method based on low-rank approximation, which provides a reliable safeguard for enforcing the constraint during reconstruction. Numerical experiments on synthetic and field seismic data demonstrate that, compared with the curvelet-based sparsity-promoting L1-norm minimization method and the multichannel singular spectrum analysis method, the proposed method obtains state-of-the-art reconstruction results.
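One simple way to obtain a patch-manifold dimension from a low-rank approximation is to count the singular values of the patch matrix needed to retain most of the energy; the sketch below is an assumption-level illustration of that idea, not the paper's exact estimator.

```python
import numpy as np

def patch_matrix(section, patch=(16, 16), stride=8):
    """Collect overlapping patches of a 2-D section as the rows of a matrix."""
    nt, nx = section.shape
    pt, px = patch
    rows = [section[i:i + pt, j:j + px].ravel()
            for i in range(0, nt - pt + 1, stride)
            for j in range(0, nx - px + 1, stride)]
    return np.array(rows)

def manifold_dimension(P, energy=0.95):
    """Estimate the patch-manifold dimension as the rank of the low-rank
    approximation that retains the given fraction of singular-value energy."""
    s = np.linalg.svd(P, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy) + 1)
```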
Erasure codes over binary fields require only AND and XOR operations in encoding and decoding and therefore have high computational efficiency, which is why they are widely used in many areas of information technology. This paper proposes a matrix decoding method, a universal data reconstruction scheme for erasure codes over binary fields. Besides a pre-judgment of whether the errors can be recovered, the method can rebuild lost data sectors on a fault-tolerant storage system built from erasure codes when disk errors occur. The data reconstruction process has simple, clear steps, which makes it easy to implement in code. Moreover, it can easily be extended to non-binary fields, so the method is expected to find wide application.
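A generic GF(2) decoder of this kind reduces to Gaussian elimination with XOR row operations; the sketch below is a standard solver, not the paper's specific scheme, and the two-block parity example is hypothetical.

```python
import numpy as np

def gf2_solve(A, b):
    """Solve A x = b over GF(2) with XOR row operations (Gaussian elimination).

    Returns the unique solution if it exists, otherwise None -- the latter
    plays the role of the pre-judgment of whether lost data are recoverable.
    """
    A = np.array(A, dtype=np.uint8) & 1
    b = np.array(b, dtype=np.uint8) & 1
    m, n = A.shape
    row, pivots = 0, []
    for col in range(n):
        pivot = next((r for r in range(row, m) if A[r, col]), None)
        if pivot is None:
            return None                      # free variable: not uniquely recoverable
        A[[row, pivot]] = A[[pivot, row]]
        b[[row, pivot]] = b[[pivot, row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]               # XOR eliminates this column entry
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
    if np.any(b[row:]):                      # 0 = 1 row: inconsistent, unrecoverable
        return None
    x = np.zeros(n, dtype=np.uint8)
    x[pivots] = b[:len(pivots)]
    return x

# e.g. two lost blocks x0, x1 with parities x0 ^ x1 = 1 and x1 = 0  ->  x0 = 1
print(gf2_solve([[1, 1], [0, 1]], [1, 0]))   # [1 0]
```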
Based on compressive sensing, a novel algorithm is proposed to solve the reconstruction problem under sparsity assumptions. Instead of estimating the reconstructed data by minimizing an objective function directly, the authors parameterize the problem as a linear combination of a few elementary thresholding functions, which can be solved by computing the linear weighting coefficients. The thresholding functions are updated during the iteration, so at each step the optimization reduces to solving for linear coefficients. With elementary thresholding functions satisfying certain constraints, global convergence of the iterative algorithm is guaranteed. Synthetic and field data results prove the effectiveness of the proposed algorithm.
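The core idea, expressing an update as a weighted sum of elementary thresholding functions whose weights are found by a linear solve, can be sketched as below. The choice of soft thresholding and of least squares as the fitting criterion are assumptions for illustration only.

```python
import numpy as np

def soft(c, tau):
    """Elementary soft-thresholding function S_tau applied to coefficients c."""
    return np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)

def fit_threshold_combination(coeffs, target, taus):
    """Express an update as a weighted sum of elementary thresholding functions
    and solve for the weights by ordinary least squares."""
    basis = np.stack([soft(coeffs, t) for t in taus], axis=1)   # (n, n_taus)
    w, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return w, basis @ w
```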
Because exploration conditions are increasingly complex, oil and gas seismic exploration has to adopt irregular acquisition to adapt to complex geological conditions and environments. However, irregular acquisition is accompanied by missing data, which requires high-precision regularization. In this paper, the sparsity of the signal in a transform domain, as used in compressed sensing theory, is exploited to recover the missing signal; the work involves sparse transform base optimization and threshold modeling. First, the paper analyzes and compares the effects of six sparse transform bases on the reconstruction accuracy and efficiency of irregular seismic data and establishes the quantitative relationship between the sparse transform and reconstruction accuracy and efficiency. Second, an adaptive threshold modeling method based on the sparse coefficients is provided to improve reconstruction accuracy; test results show that it adapts well to different seismic data and sparse transform bases. To address low computational efficiency, an f-x-domain reconstruction method restricted to the effective frequency samples is studied, and a parallel computing strategy combining the curvelet transform with OpenMP is proposed, which substantially improves computational efficiency while preserving reconstruction accuracy. Finally, field acquisition data are used to verify the proposed method. The results indicate that the proposed strategy can solve the regularization problem of irregular seismic data in production and improve the imaging quality of the target layer economically and efficiently.
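One way to make the threshold adaptive to the data is to derive the schedule from percentiles of the observed sparse-coefficient magnitudes rather than from fixed constants; the sketch below illustrates this idea, with the percentile choices being assumptions.

```python
import numpy as np

def adaptive_thresholds(coeffs, n_iter=30, p_hi=99.5, p_lo=60.0):
    """Per-iteration threshold schedule derived from the data's own sparse
    coefficients: start near the strongest magnitudes, end near the noise floor."""
    mags = np.abs(coeffs).ravel()
    t_hi = np.percentile(mags, p_hi)
    t_lo = max(np.percentile(mags, p_lo), 1e-6 * t_hi)   # floor avoids a zero threshold
    k = np.arange(n_iter)
    return t_hi * (t_lo / t_hi) ** (k / (n_iter - 1))
```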
This paper describes a data reconstruction technique for a multi-function sensor based on the M-estimator, using least squares and weighted least squares. The algorithm is more robust than conventional least squares, which can amplify the errors of inaccurate data. The M-estimator emphasizes reducing the effect of large data errors, which are further suppressed by an iterative regression process that gives small weights to large off-group errors and large weights to small errors. Simulation results on 81 groups of regression data, with an average accuracy of 3.5%, support this approach and demonstrate that the M-estimator provides more accurate and reliable data reconstruction.
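The iterative regression described above is the classic iteratively reweighted least squares (IRLS) scheme; a minimal sketch follows, with the Huber weight function and MAD scale estimate as assumptions since the abstract does not name a specific weight function.

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber weight function: unit weight for small residuals, k/|r| for large ones."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def m_estimate(X, y, n_iter=20, tol=1e-8):
    """Iteratively reweighted least squares (IRLS) M-estimator.

    Ordinary least squares gives the first iterate; each following pass gives
    small weights to large (off-group) residuals and large weights to small ones.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12   # robust sigma (MAD)
        w = huber_weights(r / scale)
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.linalg.norm(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
    return beta
```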
BACKGROUND: Lutetium has been shown to be an important potential innovation in pre-treated metastatic castration-resistant prostate cancer. Two clinical trials have evaluated lutetium thus far (TheraP and VISION, with 99 and 385 patients, respectively), but their results are discordant. AIM: To synthesize the available evidence on the effectiveness of lutetium in pre-treated metastatic castration-resistant prostate cancer, and to test the application of a new artificial intelligence technique that synthesizes effectiveness from reconstructed patient-level data. METHODS: We employed a new artificial intelligence method (the Shiny method) to pool the survival data of these two trials and evaluate to what extent the lutetium cohorts differed from one another. The Shiny technique employs an original reconstruction of individual patient data from the Kaplan-Meier curves. The progression-free survival graphs of the two lutetium cohorts were analyzed and compared. RESULTS: The estimated hazard ratio favored the VISION trial; the difference was statistically significant (P < 0.001). These results indicate that further studies on lutetium are needed, because the survival data of the two trials published thus far are conflicting. CONCLUSION: Our study confirms the feasibility of reconstructing patient-level data from survival graphs in order to generate survival statistics.
A new B-spline surface reconstruction method from layer data, based on a deformable model, is presented. An initial deformable surface, represented as a closed cylinder, is first given. The surface is subject to internal forces describing its implicit smoothness property and to external forces attracting it toward the layer data points. The finite element method is then adopted to solve the energy minimization problem, which yields a bicubic closed B-spline surface with C^2 continuity. The proposed method provides a smooth and accurate surface model directly from the layer data, without the need to fit cross-sectional curves and make them compatible. The feasibility of the proposed method is verified by the experimental results.
We propose a novel machine learning approach to reconstruct meshless surface wind speed fields, i.e., to reconstruct the surface wind speed at any location, based on meteorological background fields and geographical information. The random forest method is selected to develop the machine learning data reconstruction model (MLDRM-RF) for wind speeds over Beijing from 2015 to 2019. We use temporal, geospatial-attribute, and meteorological background field features as inputs. The wind speed field can be reconstructed at any station in the region that was not used in the training process, which allows cross-validation of model performance. The evaluation considers the spatial distribution of, and seasonal variations in, the root-mean-square error (RMSE) of the reconstructed wind speed field across Beijing. The average RMSE is 1.09 m s^(−1), considerably smaller than the result (1.29 m s^(−1)) obtained with inverse distance weighting (IDW) interpolation. Finally, we extract the important features by the mean decrease in impurity (MDI) method and discuss the reasonableness of the model predictions. MLDRM-RF is a reasonable approach with excellent potential for improved reconstruction of historical surface wind speed fields at arbitrary grid resolutions. Such a model is needed in many wind applications, such as wind energy and aviation safety assessments.
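A minimal scikit-learn sketch of this workflow, assuming feature matrices have already been assembled (hyperparameters and variable names are placeholders, not the paper's configuration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_wind_model(X_train, y_train, n_estimators=200, random_state=0):
    """Random forest mapping (temporal, geospatial, background-field) features
    to surface wind speed; hyperparameters are illustrative only."""
    rf = RandomForestRegressor(n_estimators=n_estimators,
                               random_state=random_state, n_jobs=-1)
    rf.fit(X_train, y_train)
    return rf

def held_out_station_rmse(rf, X_test, y_test):
    """Cross-validate on a station withheld from training, as described above."""
    pred = rf.predict(X_test)
    return float(np.sqrt(np.mean((pred - y_test) ** 2)))

# MDI feature importances (mean decrease in impurity) come for free:
#   rf.feature_importances_   -> one value per input feature
```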
A long-term dataset of photosynthetically active radiation (Qp) is reconstructed from a broadband global solar radiation (Rs) dataset through an all-weather reconstruction model. The method is based on four years of data collected in Beijing: observations of Rs and Qp from 2005 to 2008 are used to investigate the temporal variability of Qp and its dependence on the clearness index and solar zenith angle. A simple and efficient all-weather empirical reconstruction model is proposed to reconstruct Qp from Rs, and it is found to estimate instantaneous Qp with high accuracy. The annual mean of the daily Qp values during 1958-2005 is 25.06 mol m^(-2) d^(-1). The long-term trend of the annual averaged Qp is -0.19 mol m^(-2) yr^(-1) from 1958-1997 and -0.12 mol m^(-2) yr^(-1) from 1958-2005. The trend in Qp shows sharp decreases in spring and summer and gentler decreases in autumn and winter.
Long-term change of sea surface temperature (SST) in the China Seas from 1900 to 2006 is examined based on two different observational datasets (HadISST1 and HadSST3). As in the Atlantic, SST in the China Seas has been well observed during the past 107 years. A comparison between the reconstructed (HadISST1) and un-interpolated (HadSST3) datasets shows that the SST warming trends from both datasets are consistent with each other over most of the China Seas. The warming trends are stronger in winter than in summer, with a maximum rate of SST increase exceeding 2.7℃ (100 yr)^(-1) in the East China Sea and the Taiwan Strait during winter based on HadISST1. However, the SST from both datasets shows a sudden decrease after 1999 in the China Seas. The trend estimated from HadISST1 is stronger than that from HadSST3 in the East China Sea and east of Taiwan Island, where the difference in the linear SST warming trends obtained from HadISST1 and HadSST3 is as large as about 1℃ (100 yr)^(-1). Compared with the linear winter warming trend of the land surface air temperature (1.6℃ (100 yr)^(-1)), HadSST3 shows a more reasonable trend of less than 2.1℃ (100 yr)^(-1), whereas HadISST1's trend exceeds 2.7℃ (100 yr)^(-1) at the mouth of the Yangtze River. The results also indicate large uncertainties in the estimation of SST warming patterns.
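Trends quoted per century can be computed with an ordinary least-squares fit; a minimal helper (illustrative, not the datasets' own processing chain):

```python
import numpy as np

def century_trend(years, sst):
    """Least-squares linear SST trend, expressed in degrees C per 100 years."""
    slope = np.polyfit(years, sst, 1)[0]   # degC per year
    return 100.0 * slope
```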
Streamflow forecasting in drylands is challenging: data are scarce, catchments are highly human-modified, and streamflow exhibits strongly nonlinear responses to rainfall. The goal of this study was to evaluate monthly and seasonal streamflow forecasting in two large catchments of the Jaguaribe River Basin in the Brazilian semi-arid region. We adopted four lead times: one month ahead at the monthly scale and two, three, and four months ahead at the seasonal scale. Gaps in the historical streamflow series were filled using rainfall-runoff modelling. Time series modelling techniques were then applied: the locally constant model, the locally averaged model, the k-nearest-neighbours (k-NN) algorithm, and the autoregressive (AR) model. The reliability criterion for the validation results is that the forecast must be more skillful than streamflow climatology. Our approach outperformed the streamflow climatology for all monthly streamflows, on average by 25%. The seasonal streamflow forecasting (SSF) was also reliable (on average 20% better than climatology), failing slightly only for the high-flow season of one catchment (6% worse than climatology). Considering an uncertainty envelope (probabilistic forecasting), which was considerably narrower than the data standard deviation, the streamflow forecasting performance increased by about 50% at both scales. The forecast errors were mainly driven by streamflow intra-seasonality at the monthly scale and by the forecast lead time at the seasonal scale. The best-fit and worst-fit time series models were the k-NN approach and the AR model, respectively. The rainfall-runoff modelling outputs played an important role in improving streamflow forecasting for the streamgauge that had 35% of its data missing. The developed data-driven approach is mathematically and computationally very simple, demands few resources for operational implementation, and is applicable to other dryland watersheds. Our findings may become part of drought forecasting systems, potentially help allocate water months in advance, and serve as a baseline for more complex streamflow forecast systems.
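A minimal sketch of the k-NN analogue forecast used above: find the k historical windows most similar to the latest months and average the flows that followed them. Window length, k, and the synthetic series are assumptions for demonstration.

```python
import numpy as np

def knn_forecast(series, k=5, window=3, lead=1):
    """k-nearest-neighbour analogue forecast of a streamflow series."""
    x = np.asarray(series, dtype=float)
    target = x[-window:]
    dists, successors = [], []
    for i in range(window, len(x) - lead + 1):
        hist = x[i - window:i]                  # historical window
        dists.append(np.linalg.norm(hist - target))
        successors.append(x[i + lead - 1])      # flow observed `lead` steps later
    nearest = np.argsort(dists)[:k]
    return float(np.mean(np.take(successors, nearest)))

# toy usage on a synthetic 20-year monthly series with a wet season
rng = np.random.default_rng(0)
months = np.arange(240)
flow = 10 + 8 * np.maximum(np.sin(2 * np.pi * months / 12), 0) + rng.gamma(2.0, 1.0, 240)
print(knn_forecast(flow, k=5, window=3, lead=1))
```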
Performance improvement was attained in the data reconstruction of two-dimensional tunable diode laser absorption spectroscopy (TDLAS). The Multiplicative Algebraic Reconstruction Technique (MART) algorithm was adopted for data reconstruction, using data obtained in an experiment that measured the temperature and concentration fields of gas flows. The measurement theory is based on the Beer-Lambert law, and the measurement system consists of a tunable laser, collimators, detectors, and an analyzer. Methane was used as the fuel, burned with air in a Bunsen-type burner. The data used for the reconstruction come from the optical signals of eight laser beams passed through a cross-section of the methane flame. The performance of the MART algorithm in data reconstruction was validated and compared with that obtained by the Algebraic Reconstruction Technique (ART) algorithm.
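A minimal sketch of the standard MART iteration over ray projections follows; the relaxation factor and iteration count are illustrative assumptions, not the experiment's settings.

```python
import numpy as np

def mart(A, p, n_iter=50, relax=0.5):
    """Multiplicative Algebraic Reconstruction Technique (MART).

    A : (n_rays, n_cells) path-length matrix of the laser beams over the grid
    p : (n_rays,) measured line-of-sight absorbances (Beer-Lambert law)
    Each ray update multiplies only the cells it crosses by a power of the
    ratio between the measured and the currently predicted projection.
    """
    A = np.asarray(A, dtype=float)
    p = np.asarray(p, dtype=float)
    f = np.ones(A.shape[1])                   # strictly positive initial field
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            a = A[i]
            pred = a @ f
            if pred <= 0.0 or p[i] <= 0.0 or a.max() <= 0.0:
                continue
            f *= (p[i] / pred) ** (relax * a / a.max())
    return f
```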
Precipitation is the most discontinuous atmospheric parameter because of its temporal and spatial variability. Precipitation observations at automatic weather stations (AWSs) show different patterns over different time periods. This paper aims to reconstruct missing data by finding time periods when the precipitation patterns are similar, using a method called the intermittent sliding window period (ISWP) technique, a novel approach to reconstructing the majority of non-continuous missing real-time precipitation data. The ISWP technique is applied to a one-year precipitation dataset (January 2015 to January 2016) with a temporal resolution of 1 h, collected at 11 AWSs run by the India Meteorological Department in the capital region of Delhi. The dataset has 13.66% of its precipitation data missing, of which 90.6% are reconstructed successfully. Furthermore, several traditional estimation algorithms are applied to the reconstructed dataset to estimate the remaining missing values on an hourly basis. The results show that interpolation of the reconstructed dataset using the ISWP technique is of higher quality than interpolation of the raw dataset. By adopting the ISWP technique, the root-mean-square errors (RMSEs) in the estimation of missing rainfall data based on the arithmetic mean, multiple linear regression, linear regression, and moving average methods are reduced by 4.2%, 55.47%, 19.44%, and 9.64%, respectively. However, adopting the ISWP technique with the inverse distance weighting method increases the RMSE by 0.07%, because the reconstructed data add a more diverse relation to the neighboring AWSs.
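The estimation baselines compared above (arithmetic mean, inverse distance weighting) follow standard formulas; a minimal sketch for a single hourly gap, with hypothetical inputs, is given below. It illustrates the comparison metric, not the ISWP technique itself.

```python
import numpy as np

def idw_estimate(neighbour_values, neighbour_dists, power=2.0):
    """Inverse-distance-weighted estimate of a missing hourly value from
    neighbouring AWSs."""
    d = np.maximum(np.asarray(neighbour_dists, float), 1e-6)
    w = 1.0 / d ** power
    return float(np.sum(w * np.asarray(neighbour_values, float)) / np.sum(w))

def rmse(estimates, observations):
    """Root-mean-square error used to compare the estimation methods."""
    e = np.asarray(estimates, float) - np.asarray(observations, float)
    return float(np.sqrt(np.mean(e ** 2)))

# the arithmetic-mean baseline for the same gap is simply np.mean(neighbour_values)
```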
An online model was proposed to identify the reasons behind changes in the energy consumption of the reheating furnace of a steel processing plant. The heat conversion of the furnace was analyzed and combined with its fuel consumption to obtain an energy consumption model. Together with the mechanism analysis, the basic parameters affecting energy consumption were determined, and four key influencing factors were obtained: furnace output, charging temperature, tapping temperature, and steel type. A calculation method for the contribution of each influencing factor was derived to define the baseline energy consumption conditions, and the online data were used to calculate the energy value and the actual performance relative to the baseline. The contribution of each influencing factor was then determined through normalization. A cloud platform was used for database reconstruction and programming to realize online intelligent evaluation of the reheating furnace's energy consumption. Finally, a case study evaluating the practical energy consumption of a steel rolling furnace in a steel plant is presented. The intelligent evaluation results were quantified and displayed online, and the system's ability to reduce production-line energy consumption was demonstrated.
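The normalization step can be illustrated as splitting the deviation of actual consumption from the baseline into per-factor shares; the helper below is a sketch of that idea with hypothetical numbers, not the plant's actual calculation.

```python
import numpy as np

def factor_contributions(baseline, actual, factor_deltas):
    """Split the deviation of actual energy consumption from the baseline into
    normalized shares attributed to the influencing factors (output, charging
    temperature, tapping temperature, steel type)."""
    deltas = np.asarray(factor_deltas, dtype=float)
    shares = np.abs(deltas) / np.abs(deltas).sum()   # normalization step
    return (actual - baseline) * shares              # contribution of each factor

# hypothetical example: baseline 1.30, actual 1.42 (arbitrary energy units)
print(factor_contributions(1.30, 1.42, [0.06, 0.03, 0.02, 0.01]))
```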