To solve the problems of restoring sedimentary facies and predicting reservoirs in loose gas-bearing sediments, a fourth-order isochronous stratigraphic framework was set up based on seismic sedimentologic analysis of the first 9-component S-wave 3D seismic dataset acquired in China, and the sedimentary facies and reservoirs of the Pleistocene Qigequan Formation in the Taidong area of the Qaidam Basin were then studied by seismic geomorphology and seismic lithology. The study methods and workflow are as follows. First, techniques of phase rotation, frequency decomposition and fusion, and stratal slicing were applied to the 9-component S-wave seismic data to restore the sedimentary facies of major marker beds, guided by sedimentary models reflected in satellite images. Then, techniques of seismic attribute extraction, principal component analysis, and random fitting were applied to calculate the reservoir thickness and physical parameters of a key sandbody; the results are satisfactory and were confirmed by blind-test wells. The results reveal that the dominant sedimentary facies of the Qigequan Formation within the study area are delta front and shallow lake. The RGB-fused slices indicate two cycles with three sets of subaqueous distributary channel systems in each period. Among them, the sandstones in the distributary channels of the middle-lower Qigequan Formation are thick and broad with superior physical properties, making them favorable reservoirs. Reservoir permeability is also affected by diagenesis. Distributary channel sandstone reservoirs extend farther west of the Sebei-1 gas field, which provides a basis for expanding exploration to the western peripheral area.
Accurate prediction of formation pore pressure is essential for predicting fluid flow and managing hydrocarbon production in petroleum engineering. Deep learning techniques have recently attracted growing interest owing to their great potential for pore pressure prediction. However, most traditional deep learning models handle generalization poorly. To fill this technical gap, we developed a new adaptive physics-informed deep learning model with high generalization capability that predicts pore pressure values directly from seismic data. Specifically, the new model, named CGP-NN, consists of a novel parametric feature-extraction approach (1DCPP), a stacked multilayer gated recurrent model (multilayer GRU), and an adaptive physics-informed loss function. Through training, the model automatically selects the optimal physical model to constrain the results for each pore pressure prediction. The CGP-NN model generalizes best when the physics-related metric λ = 0.5. A hybrid approach combining the Eaton and Bowers methods is also proposed to build machine-learnable labels, addressing the problem of scarce labels. To validate the developed model and methodology, a case study on a complex reservoir in the Tarim Basin demonstrated high accuracy in predicting the pore pressure of new wells along with strong generalization ability. The adaptive physics-informed deep learning approach presented here has potential application in predicting pore pressures governed by multiple genesis mechanisms from seismic data.
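The Eaton and Bowers relations used to build the machine-learnable labels can be sketched as follows. The Eaton exponent and the Bowers coefficients (`v0`, `a`, `b`) below are illustrative placeholders, not the calibrated Tarim Basin values:

```python
def eaton_pore_pressure(obg, pn, dt_normal, dt_observed, n=3.0):
    """Eaton's sonic relation: Pp = OBG - (OBG - Pn) * (dt_n / dt_obs)**n."""
    return obg - (obg - pn) * (dt_normal / dt_observed) ** n

def bowers_pore_pressure(overburden, velocity, v0=1500.0, a=10.0, b=0.8):
    """Bowers' loading curve v = v0 + A * sigma**B, inverted for effective
    stress sigma, then Pp = S - sigma (Terzaghi)."""
    sigma = ((velocity - v0) / a) ** (1.0 / b)
    return overburden - sigma
```

When the observed transit time equals the normal-compaction trend, Eaton's relation returns the hydrostatic pressure; slower-than-normal intervals (undercompaction) yield overpressure.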
Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, and they are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long time spans, and uneven spatial distributions. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes of the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
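A minimal NumPy sketch of histogram-equalization tone mapping, the kind of transform described above: attribute values are passed through their empirical CDF so the output uses the display range evenly. The bin count and the [0, 1] output range are assumptions:

```python
import numpy as np

def equalize(values, bins=256):
    """Map values through their empirical CDF (histogram equalization),
    returning tone-mapped values in [0, 1]."""
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                   # normalize CDF to 1.0
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    return cdf[idx]                                  # CDF value per sample
```

The mapping is monotone, so the ordering of attribute values is preserved while sparse extremes no longer compress the rest of the color scale.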
This study explores the application of Bayesian analysis based on neural networks and deep learning in data visualization. The background is that, with the increasing volume and complexity of data, traditional data analysis methods can no longer meet current needs. The research methods include building neural network and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale datasets. The results show that neural networks combined with Bayesian analysis and deep learning can effectively improve the accuracy and efficiency of data visualization and enhance the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.
This study addresses the challenges of big data visualization through data reduction based on feature selection, aiming to reduce the volume of big data and minimize model training time (Tt) while maintaining data quality. We applied the embedded "Select From Model" (SFM) method driven by the random forest importance (RFI) algorithm and compared it with the filter "Select Percentile" (SP) method based on the chi-square (Chi2) score to select the most important features, which were then fed into classification using the logistic regression (LR) and k-nearest neighbor (KNN) algorithms. The classification accuracy (AC) of LR was compared with that of KNN in Python on eight datasets to determine which method produces the best results when feature selection is applied. The study concluded that feature selection has a significant impact on the analysis and visualization of the data after removing repetitive data and data that do not affect the goal. After several comparisons, the study proposes SFMLR, which uses SFM based on the RFI algorithm for feature selection together with the LR algorithm for classification. The proposal proved its efficacy in comparison with results from the recent literature.
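The two selection pipelines compared above can be sketched with scikit-learn. The iris dataset stands in for the paper's eight datasets, and the 50% percentile threshold is an assumption for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel, SelectPercentile, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Embedded method: SelectFromModel driven by random-forest importances (SFM + RFI)
sfm = SelectFromModel(
    RandomForestClassifier(n_estimators=100, random_state=0)
).fit(Xtr, ytr)

# Filter method: SelectPercentile scored by chi-square (SP + Chi2)
sp = SelectPercentile(chi2, percentile=50).fit(Xtr, ytr)

# Classify on the reduced feature sets and compare accuracy (AC)
acc = {}
for name, selector in [("SFM", sfm), ("SP", sp)]:
    clf = LogisticRegression(max_iter=1000).fit(selector.transform(Xtr), ytr)
    acc[name] = clf.score(selector.transform(Xte), yte)
```

Swapping `LogisticRegression` for `KNeighborsClassifier` reproduces the paper's second comparison axis.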
The South Yellow Sea basin is filled with Mesozoic-Cenozoic continental sediments overlying pre-Palaeozoic and Mesozoic-Palaeozoic marine sediments. Conventional multi-channel seismic data cannot describe the velocity structure of the marine residual basin in detail because of strong energy shielding at the top interface of the marine sediments, leaving their distribution and lithology poorly understood. In this study, we present seismic tomography data from ocean bottom seismographs that describe the NEE-trending velocity distributions of the basin. The results indicate that strong velocity variations occur at shallow crustal levels. Horizontal velocity bodies correlate well with surface geological features, and multi-layer features exist in the vertical velocity framework (depth: 0-10 km). Analyses of the velocity model, gravity data, magnetic data, multichannel seismic profiles, and drilling data showed that high-velocity anomalies (>6.5 km/s) of small (thickness: 1-2 km) and large (thickness: >5 km) scales were caused by igneous complexes in the multi-layer structure, which were active during the Palaeogene. Possible locations of well-preserved Mesozoic and Palaeozoic marine strata are limited to the Central Uplift and the western part of the Northern Depression along the wide-angle ocean bottom seismograph array. Following the Indosinian movement, strong compression in the Northern Depression during the extensional phase caused the formation of folds in the middle of the survey line. This study is useful for reconstructing the regional tectonic evolution and delineating the distribution of the marine residual basin in the South Yellow Sea basin.
The Solomon Sea Basin is a Cenozoic back-arc spreading basin within the convergence system of the Pacific and Indo-Australian plates. Against the background of subduction polarity reversal, the Solomon Sea Basin gradually developed a rhombic morphology as the basin subducted along the New Britain Trench and the Trobriand Trough. By analyzing vertical gravity gradient, natural earthquake, and seismic reflection data, this study determines the structural characteristics of the Solomon Sea Basin. The tectonics of the basin are characterized by the original spreading structure in its central part in addition to structures induced by the latest subduction along the basin margins. The original spreading structure consists of east-west linear grabens and horsts controlled by normal faults that formed during the basin expansion period. As a result of the subduction and slab pull of the Solomon Sea Basin, extensional structural belts parallel to the New Britain Trench formed along the basin margin.
At present, the acquisition of seismic data is developing toward high-precision and high-density methods. However, complex natural environments and cultural factors in many exploration areas make uniform, dense acquisition difficult, so complete seismic data collection is often impossible. Therefore, data reconstruction is required during processing to ensure imaging accuracy. Deep learning, a rapidly developing field, offers clear advantages in feature extraction and modeling. In this study, a convolutional neural network (CNN) deep learning algorithm is applied to seismic data reconstruction. Based on the CNN algorithm and the characteristics of seismic data acquisition, two training strategies, supervised and unsupervised, are designed to reconstruct sparsely acquired seismic records. First, a supervised learning strategy is proposed for labeled data, wherein complete seismic data are segmented as the input of the training set and randomly sampled before each training pass, thereby increasing the number of samples and the richness of features. Second, an unsupervised learning strategy based on large samples is proposed for unlabeled data, using a rolling segmentation method to update (pseudo-)labels and training parameters during training. Reconstruction tests on simulated and field data show that the CNN-based deep learning algorithm achieves better reconstruction quality and higher accuracy than compressed sensing based on the curvelet transform.
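The random trace-decimation step of the supervised strategy might look like this sketch. The keep ratio, the zero-fill convention for dropped traces, and the patch layout (time samples × traces) are assumptions:

```python
import numpy as np

def make_training_pair(patch, keep_ratio=0.5, seed=None):
    """Build one (input, label) pair for supervised reconstruction training:
    the label is the complete patch; the input has a random subset of traces
    zeroed out. Re-drawing the mask before each pass enriches the samples."""
    rng = np.random.default_rng(seed)
    keep = rng.random(patch.shape[1]) < keep_ratio   # one flag per trace
    decimated = patch * keep[None, :]                # zero the dropped traces
    return decimated, patch
```

Because a fresh mask is drawn per call, the same complete patch yields many distinct training inputs across epochs.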
The Kuiyang-ST2000 deep-towed high-resolution multichannel seismic system was designed by the First Institute of Oceanography, Ministry of Natural Resources (FIO, MNR). The system mainly comprises a plasma spark source (source level: 216 dB; main frequency: 750 Hz; frequency bandwidth: 150-1200 Hz) and a towed hydrophone streamer with 48 channels. Because the source and the towed hydrophone streamer move constantly according to the towing configuration, accurately positioning the hydrophone array and applying moveout correction to deep-towed multichannel seismic data before imaging are challenging. First, according to the characteristics of the system and the streamer shape in deep water, a travel-time positioning method was used to construct the hydrophone streamer shape, and the results were corrected using polynomial curve fitting. Then, a new data-processing workflow for Kuiyang-ST2000 data was introduced, mainly including float datum setting, residual static correction, and phase-based moveout correction, which allows the imaging algorithms of conventional marine seismic data processing to be extended to deep-towed seismic data. We successfully applied the Kuiyang-ST2000 system and data-processing methodology to a gas hydrate survey of the Qiongdongnan and Shenhu areas in the South China Sea; the resulting profiles have very high vertical and lateral resolutions (0.5 m and 8 m, respectively) and can provide full and accurate details of gas hydrate-related and geohazard-related sedimentary and structural features in the South China Sea.
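The polynomial smoothing of the travel-time-derived streamer shape might look like the sketch below. The channel count matches the system (48), but the offsets, depths, and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical travel-time-derived channel positions: offset along the tow
# direction (m) vs. depth (m), with positioning scatter.
x = np.linspace(0.0, 300.0, 48)                       # 48 channels
z_raw = 2000.0 + 0.05 * x + 1e-4 * x**2 + rng.normal(0.0, 0.5, x.size)

coeffs = np.polyfit(x, z_raw, deg=2)   # low-order fit of the streamer sag
z_fit = np.polyval(coeffs, x)          # smoothed, corrected streamer shape
```

A low polynomial degree enforces the physically smooth catenary-like sag while rejecting per-channel positioning jitter.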
Paleostress plays a significant role in controlling the formation, accumulation, and distribution of reservoirs, and it can be an important factor controlling hydrocarbon production from unconventional reservoirs. In this study, we use 3D seismic reflection data to perform a slip-tendency-based stress inversion and determine the stress field in the basement of the northern slope area of the Bongor Basin. The dataset required for this technique is readily available in oil and gas companies. The stress inversion results show that the maximum principal stress axis (σ1) is oriented vertically, the intermediate principal stress axis (σ2) is oriented N18°, and the minimum principal stress axis (σ3) is oriented N105°, with σ2/σ1 = 0.60 and σ3/σ1 = 0.29. These findings provide significant information for understanding fault reactivation at the critical stage of hydrocarbon accumulation and the regional tectonic evolution.
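Slip-tendency analysis rests on Ts = τ/σn for each candidate fault plane. A sketch using the inverted ratios above (σ1 vertical, σ2/σ1 = 0.60, σ3/σ1 = 0.29), with the tensor written in its principal axes for simplicity:

```python
import numpy as np

def slip_tendency(stress, normal):
    """Slip tendency Ts = tau / sigma_n on a plane with the given normal,
    under stress tensor `stress` (compression positive)."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    traction = stress @ n                         # traction vector on the plane
    sigma_n = traction @ n                        # normal stress component
    tau = np.linalg.norm(traction - sigma_n * n)  # resolved shear stress
    return tau / sigma_n

# Principal-axis tensor from the inverted ratios, normalized so sigma1 = 1
S = np.diag([1.0, 0.60, 0.29])
```

A plane normal to a principal axis carries no shear (Ts = 0), while planes oblique to σ1 and σ3 take the highest slip tendency; ranking mapped faults by Ts is what flags those most prone to reactivation.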
The Growth Value Model (GVM) proposed theoretical closed-form formulas consisting of Return on Equity (ROE) and the Price-to-Book ratio (P/B) for fair stock prices and expected rates of return. Although regression analysis can verify these closed-form formulas, they cannot be explored intuitively by classical quintile or decile sorting approaches because of their multi-factor and dynamical nature. This article uses visualization techniques to help explore the GVM intuitively. The key finding and contribution of this paper is the concept of the smart frontier, which can be regarded as the reasonable lower limit of P/B at a specific ROE, obtained by exploring fair P/B with ROE-P/B 2D dynamical process visualization. The coefficients in the formula can be determined by quantile regression analysis of market data. The moving paths of ROE and P/B in the current and subsequent quarters show that portfolios at the lower right of the curve approach the curve and stagnate there after the portfolios are formed. Furthermore, exploring expected rates of return with ROE-P/B-Return 3D dynamical process visualization shows that data outside the lower-right edge of the smart frontier have positive quarterly return rates not only in quarter t+1 but also in quarter t+2. The farther the data in quarter t are from the smart frontier, the larger the return rates in quarters t+1 and t+2.
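The "smart frontier" (the reasonable lower limit of P/B at a given ROE) can be approximated empirically. This non-parametric bin-quantile sketch stands in for the paper's quantile-regression fit; the bin count, quantile level, and synthetic data are assumptions:

```python
import numpy as np

def smart_frontier(roe, pb, n_bins=5, q=0.05):
    """Lower envelope of P/B vs. ROE: the q-quantile of P/B inside each
    equal-population ROE bin. A non-parametric stand-in for fitting the
    frontier with quantile regression."""
    edges = np.quantile(roe, np.linspace(0.0, 1.0, n_bins + 1))
    centers, floors = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (roe >= lo) & (roe <= hi)
        centers.append(roe[mask].mean())     # bin's representative ROE
        floors.append(np.quantile(pb[mask], q))  # lower P/B limit in the bin
    return np.array(centers), np.array(floors)
```

With real quarterly market data, points far below this envelope are the "lower right of the curve" portfolios the paper tracks across quarters.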
The Pennsylvanian unconformity, a detrital surface, separates the Permian-aged strata from the Lower Paleozoic in the Central Basin Platform. Seismic data interpretation indicates that the unconformity is an angular unconformity, overlying multiple normal faults and accompanied by a thrust fault that maximizes the region's structural complexity. Additionally, the Pennsylvanian angular unconformity creates pinch-outs between the beds above and below it. We computed and analyzed the spectral decomposition and reflector convergence attributes to characterize the angular unconformity and faults. The spectral decomposition attribute divides the broadband seismic data into different spectral bands to resolve thin beds and show thickness variations. In contrast, the reflector convergence attribute highlights the location and direction of the pinch-outs, which dip south at angles between 2° and 6°. After reviewing the RGB blending of the spectrally decomposed frequencies along the Pennsylvanian unconformity, we observed channel-like features and multiple linear bands in addition to the faults and pinch-outs. The identified linear bands could result from different lithologies associated with the tilting of the beds, and the faults may influence hydrocarbon migration or act as flow barriers that entrap hydrocarbon accumulations. The identification of this angular unconformity and the associated features in the study area is vital for the following reasons: 1) the unconformity surface represents a natural stratigraphic boundary; 2) the stratigraphic pinch-outs act as fluid-flow connectivity boundaries; 3) the areal extent of compartmentalized reservoir boundaries created by the angular unconformity is better defined; and 4) fault displacements are better understood when planning well locations, as faults can be flow barriers or permeability conduits, depending on facies heterogeneity and/or the seal effectiveness of a fault, which can affect hydrocarbon production. The methodology utilized in this study is a further step in reservoir characterization and can be used to expand our knowledge of the Goldsmith Field.
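RGB blending of three spectrally decomposed slices can be sketched as below. Per-band min-max normalization is an assumption; other scalings (e.g., percentile clipping) are common in interpretation software:

```python
import numpy as np

def rgb_blend(low, mid, high):
    """Co-render three spectral-magnitude slices as one RGB image,
    normalizing each band to [0, 1] independently."""
    def norm(band):
        a = np.abs(np.asarray(band, dtype=float))
        spread = a.max() - a.min()
        return (a - a.min()) / spread if spread > 0 else np.zeros_like(a)
    # Red = low frequency, green = mid, blue = high
    return np.stack([norm(low), norm(mid), norm(high)], axis=-1)
```

Thick intervals ring at low frequencies (red) and thin ones at high frequencies (blue), so color gradients along the blended slice trace thickness variations such as channels and pinch-outs.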
Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming owing to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not seamlessly integrate with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependence on Excel's graphical user interface (GUI) for automation can limit scalability and reproducibility compared with Python's scripting capabilities. This paper covers an integration solution that empowers non-programmers to leverage Python's capabilities within the familiar Excel environment, enabling users to perform advanced data analysis and automation tasks without extensive programming knowledge. Based on feedback solicited from non-programmers who tested the integration solution, the case study shows how the solution evaluates ease of implementation, performance, and compatibility of Python with different Excel versions.
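A minimal pandas round trip illustrating the interoperability layer such an integration builds on. The column names and values are invented, and `to_excel`/`read_excel` delegate `.xlsx` handling to the openpyxl engine, which must be installed:

```python
import os
import tempfile

import pandas as pd

# Hypothetical worksheet contents created on the Python side
df = pd.DataFrame({"well": ["A-1", "A-2"], "porosity": [0.18, 0.22]})

path = os.path.join(tempfile.mkdtemp(), "demo.xlsx")
df.to_excel(path, index=False)   # write the DataFrame as an Excel sheet
back = pd.read_excel(path)       # read the same sheet back into Python
```

Values and column names survive the round trip, but as the paper notes, cell formatting and formulas are where fidelity can be lost; libraries such as openpyxl must be used directly to manage those.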
Gestational Diabetes Mellitus (GDM) is a significant health concern affecting pregnant women worldwide. It is characterized by elevated blood sugar levels during pregnancy and poses risks to both maternal and fetal health. Maternal complications of GDM include an increased risk of developing type 2 diabetes later in life, as well as hypertension and preeclampsia during pregnancy. Fetal complications may include macrosomia (large birth weight), birth injuries, and an increased risk of developing metabolic disorders later in life. Understanding the demographics, risk factors, and biomarkers associated with GDM is crucial for effective management and prevention strategies. This research aims to address these aspects comprehensively through the analysis of a dataset comprising 600 pregnant women. By exploring the demographics of the dataset and employing data modeling techniques, the study seeks to identify key risk factors associated with GDM. Moreover, by analyzing various biomarkers, the research aims to gain insights into the physiological mechanisms underlying GDM and its implications for maternal and fetal health. The significance of this research lies in its potential to inform clinical practice and public health policies related to GDM. By identifying demographic patterns and risk factors, healthcare providers can better tailor screening and intervention strategies for pregnant women at risk of GDM. Additionally, insights into biomarkers associated with GDM may contribute to the development of novel diagnostic tools and therapeutic approaches. Ultimately, by enhancing our understanding of GDM, this research aims to improve maternal and fetal outcomes and reduce the burden of this condition on healthcare systems and society. However, it is important to acknowledge the limitations of the dataset used in this study. Further research utilizing larger and more diverse datasets, perhaps employing advanced data analysis techniques such as Power BI, is warranted to corroborate and expand upon the findings of this research. This underscores the ongoing need for continued investigation into GDM to refine our understanding and improve clinical management strategies.
Data breaches have massive consequences for companies, affecting them financially and undermining their reputation, which poses significant challenges to online security and the long-term viability of businesses. This study analyzes trends in data breaches in the United States, examining the frequency, causes, and magnitude of breaches across various industries. We document that data breaches are increasing, with hacking emerging as the leading cause. Our descriptive analyses explore factors influencing breaches, including security vulnerabilities, human error, and malicious attacks. The findings provide policymakers and businesses with actionable insights to bolster data security through proactive audits, patching, encryption, and response planning. By better understanding breach patterns and risk factors, organizations can take targeted steps to enhance protections and mitigate the potential damage of future incidents.
This article discusses the current status and development strategies of computer science and technology in the context of big data. It first explains the relationship between big data and computer science and technology, focusing on current applications of computer science and technology in big data, including data storage, data processing, and data analysis. It then proposes development strategies for big data processing. Computer science and technology play a vital role in big data processing by providing strong technical support.
The Pearl River Estuary (PRE) is located at the onshore-offshore transition zone between South China and the South China Sea Basin, and it is of great value in discussing the tectonic relationship between the South China block and the South China Sea block and the seismic activity along the offshore active faults in the PRE. However, research on the geometric characteristics of the offshore faults in this area is extremely scarce. To investigate the offshore fault distribution and geometric features in the PRE in greater detail, we acquired thirteen seismic reflection profiles in 2015. Combining analyses of the seismic reflection and free-air gravity anomaly data, this paper reveals the location, continuity, and geometry of the littoral fault zone and other offshore faults in the PRE. The littoral fault zone is composed of the major Dangan Islands fault and several parallel, high-angle normal faults, which mainly trend northeast to northeast-east and dip southeast with large displacements. The fault zone is divided into three segments by the northwest-trending faults. Moreover, the basement depth around the Dangan Islands is very shallow, while it increases suddenly westward and southward along the islands. This has resulted in the islands and neighboring areas becoming places where stress accumulates easily. The seismogenic pattern of this area is closely related to the combined effect of intersecting faults and the low-velocity layer.
Field seismic data are disturbed by interference and therefore have a low signal-to-noise ratio (SNR), which is a disadvantage for seismic data interpretation, so removing the noise from seismic data is important. Independent component analysis (ICA) can remove most of the noise interference. However, ICA has some shortcomings in noise reduction because it requires the components of the seismic data to be mutually independent. To overcome these shortcomings, this paper proposes an improved ICA algorithm for noise reduction. Simulation experiments show that the best decomposition level for the new algorithm is 3. Finally, the proposed improved ICA is applied to field seismic data. The results show that it can effectively eliminate most seismic noise, such as random noise, linear interference, and surface waves. The improved ICA is not only easy to use for denoising but also has excellent mathematical properties.
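The separation idea behind ICA denoising can be sketched with scikit-learn's FastICA. The synthetic traces and mixing weights below are illustrative and do not represent the paper's improved algorithm:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
signal = np.sin(2.0 * np.pi * 25.0 * t)        # stand-in reflection event
noise = 0.5 * rng.standard_normal(t.size)      # stand-in random noise

# Two recorded traces = two different mixtures of the same components;
# the mixing weights (1.0/1.0 and 0.8/1.2) are invented for illustration.
X = np.column_stack([signal + noise, 0.8 * signal + 1.2 * noise])
sources = FastICA(n_components=2, random_state=0).fit_transform(X)

# The recovered source most correlated with the clean event is the "signal";
# the other source is discarded as noise.
best = max(abs(np.corrcoef(sources[:, i], signal)[0, 1]) for i in range(2))
```

This also illustrates the stated limitation: separation works because the event and the noise are statistically independent and the event is non-Gaussian, conditions field data only approximately satisfy.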
A new method is introduced to suppress noise in seismic data processing. Based on the subtle difference in shape between the noise and the actual signal, we introduce morphological filtering into seismic data processing. In terms of both waveform shape and S/N, the effect of morphological filtering is superior to that of other methods such as median filtering and neighborhood-average filtering: the SNR of the signal after morphological filtering is comparatively high, the precision of the filtered seismic data is high, and the characteristics of the actual signal, such as frequency and amplitude, are preserved. We give an example of real seismic data processing using morphological filtering, in which the actual signal is retained while random high-intensity noise is removed.
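A one-dimensional sketch of the idea using SciPy's grey-scale morphology. The opening-then-closing order and the window size are assumptions; the paper's operator may differ:

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def morphological_filter(trace, size=5):
    """Grey opening removes narrow positive spikes, and the following
    closing removes narrow negative ones; arrivals broader than the
    structuring window pass through almost unchanged."""
    trace = np.asarray(trace, dtype=float)
    return grey_closing(grey_opening(trace, size=size), size=size)
```

The window `size` encodes the "shape difference": anything narrower than the window is treated as noise, which is why high-intensity spikes vanish while the amplitude and frequency of broader events are preserved.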
The control system of Hefei Light Source II (HLS-II) is a distributed system based on the Experimental Physics and Industrial Control System (EPICS). The existing archiving system requires maintaining central configuration files: when process variables in the control system are added, removed, or updated, the configuration files must be modified manually to stay consistent with the control system. This paper presents a new data archiving method that configures the archiving parameters automatically. The system uses a microservice architecture to integrate the EPICS Archiver Appliance and RecSync, so it can collect all archived meta-configuration from the distributed input/output controllers and enter it into the EPICS Archiver Appliance automatically. Furthermore, we developed a web-based GUI that provides automatic visualization of real-time and historical data. This system is currently being commissioned at HLS-II. The results indicate that the new archiving system is reliable and convenient to operate, and its maintenance-free operation mode is valuable for large-scale scientific facilities.
Funding: Supported by the CNPC Science and Technology Projects (2022-N/G-47808, 2023-N/G-67014) and the RIPED International Cooperation Project (19HTY5000008).
Abstract: To solve the problems of restoring sedimentary facies and predicting reservoirs in loose gas-bearing sediments, a fourth-order isochronous stratigraphic framework was set up based on seismic sedimentologic analysis of the first 9-component S-wave 3D seismic dataset acquired in China, and the sedimentary facies and reservoirs of the Pleistocene Qigequan Formation in the Taidong area of the Qaidam Basin were then studied by seismic geomorphology and seismic lithology. The workflow was as follows. First, phase rotation, frequency decomposition and fusion, and stratal slicing were applied to the 9-component S-wave seismic data to restore the sedimentary facies of major marker beds, guided by sedimentary models interpreted from satellite images. Then, seismic attribute extraction, principal component analysis, and random fitting were applied to calculate the thickness and physical parameters of a key sandbody; the results are satisfactory and were confirmed by blind test wells. The results reveal that the dominant sedimentary facies of the Qigequan Formation in the study area are delta front and shallow lake. The RGB-fused slices indicate two cycles, with three sets of subaqueous distributary channel systems in one period. Among them, the distributary-channel sandstones of the middle-lower Qigequan Formation are thick and broad with superior physical properties, and constitute favorable reservoirs. Reservoir permeability is also affected by diagenesis. The distributary-channel sandstone reservoirs extend west of the Sebei-1 gas field, which provides a basis for expanding exploration to the western peripheral area.
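The frequency decomposition and RGB fusion step used for the stratal slices can be sketched as follows. This is a minimal numpy illustration: the three band-limited amplitude slices and their normalization are invented assumptions, not the paper's actual frequency bands or parameters.

```python
import numpy as np

def rgb_fuse(low, mid, high):
    """Fuse three band-limited amplitude slices into one RGB image.

    Each 2-D slice is normalized independently to [0, 1] so that the
    red, green, and blue channels carry the low-, mid-, and high-frequency
    energy respectively (hypothetical band choices, for illustration).
    """
    def norm(a):
        a = np.abs(np.asarray(a, dtype=float))
        rng = a.max() - a.min()
        return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)
    return np.stack([norm(low), norm(mid), norm(high)], axis=-1)

# toy 2x2 slices standing in for three spectral-band amplitude maps
r = rgb_fuse([[0, 1], [2, 4]], [[1, 1], [1, 1]], [[4, 2], [1, 0]])
```

Channels where one band dominates appear as saturated red, green, or blue, which is what makes channel systems of different widths/thicknesses separable on a fused slice.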
Funding: Funded by the National Natural Science Foundation of China (General Program No. 52074314; No. U19B6003-05) and the National Key Research and Development Program of China (2019YFA0708303-05).
Abstract: Accurate prediction of formation pore pressure is essential for predicting fluid flow and managing hydrocarbon production in petroleum engineering. Deep learning techniques have recently attracted growing interest for their great potential in pore pressure prediction. However, most traditional deep learning models generalize poorly. To fill this gap, we developed a new adaptive physics-informed deep learning model with high generalization capability that predicts pore pressure directly from seismic data. The new model, named CGP-NN, consists of a novel parametric feature extraction approach (1DCPP), a stacked multilayer gated recurrent model (multilayer GRU), and an adaptive physics-informed loss function. Through training, the model automatically selects the optimal physical model to constrain each pore pressure prediction. The CGP-NN model generalizes best when the physics-related metric λ = 0.5. A hybrid approach combining the Eaton and Bowers methods is also proposed to build machine-learnable labels, addressing the scarcity of labels. To validate the model and methodology, a case study on a complex reservoir in the Tarim Basin demonstrates high accuracy in predicting the pore pressure of new wells together with strong generalization. The adaptive physics-informed deep learning approach presented here has potential application in predicting pore pressures governed by multiple genesis mechanisms from seismic data.
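The Eaton and Bowers relations that the hybrid labeling scheme combines can be sketched as below. This is a hedged illustration: the overburden, hydrostatic, and velocity values, the Bowers coefficients (v0, A, B), and the 50/50 blend are made-up numbers for demonstration, not values calibrated to Tarim data.

```python
def eaton_pp(S, Ph, v, v_norm, n=3.0):
    """Eaton's method (velocity form): pore pressure from the ratio of
    observed velocity to the normal-compaction-trend velocity."""
    return S - (S - Ph) * (v / v_norm) ** n

def bowers_pp(S, v, v0, A, B):
    """Bowers' method: invert the velocity/effective-stress relation
    v = v0 + A * sigma**B, then subtract effective stress from overburden."""
    sigma = ((v - v0) / A) ** (1.0 / B)
    return S - sigma

S, Ph = 9000.0, 4200.0       # overburden and hydrostatic pressure, psi
v, vn = 8000.0, 9500.0       # observed vs. normal-trend velocity, ft/s
pe = eaton_pp(S, Ph, v, vn)
pb = bowers_pp(S, v, v0=5000.0, A=10.0, B=0.8)
label = 0.5 * pe + 0.5 * pb  # simple blend as a machine-learnable label
```

A velocity below the normal trend drives both estimates above hydrostatic pressure, which is the overpressure signature such labels are meant to encode.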
Abstract: Background: Space exploration satellites are tasked with detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and correspondences between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The rendering stage of the visualization pipeline was optimized by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long time spans, and uneven spatial distributions. Real-time visualization of large-scale spatial structures on augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure of and changes in the spatial environment through augmented reality, and help users intuitively discover spatial environmental events and evolutionary rules.
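Histogram-equalization tone mapping of the kind described can be sketched with numpy: each attribute value is remapped through the empirical cumulative distribution, spreading crowded value ranges over more display levels. The bin count and toy input are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def equalize(values, bins=256):
    """Histogram-equalization tone mapping: remap scalar attribute values
    through their cumulative distribution so the output is approximately
    uniform on [0, 1]."""
    v = np.asarray(values, dtype=float).ravel()
    hist, edges = np.histogram(v, bins=bins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                       # normalize the CDF to [0, 1]
    return np.interp(v, edges[1:], cdf)  # map each value through the CDF

# toy attribute values, remapped through a 4-bin CDF
out = equalize([0.0, 1.0, 2.0, 3.0], bins=4)
```

The mapping is monotone, so the ordering of attribute values (and hence any iso-contours) is preserved; only the contrast allocation changes.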
Abstract: This study explores the application of Bayesian analysis, combined with neural networks and deep learning, to data visualization. The motivation is that as data grow in volume and complexity, traditional data analysis methods can no longer meet practical needs. The methods include building neural network and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale datasets. The results show that neural networks combined with Bayesian analysis and deep learning can effectively improve the accuracy and efficiency of data visualization and enhance the intuitiveness and depth of data interpretation. The significance of this research is that it provides a new solution for data visualization in the big data environment and helps advance the development and application of data science.
Abstract: This study addresses the challenges of big data visualization through data reduction based on feature selection, aiming to reduce data volume and minimize model training time (Tt) while maintaining data quality. We compare an embedded method, Select From Model (SFM) driven by the Random Forest Importance algorithm (RFI), with a filter method, Select Percentile (SP) based on the chi-square (Chi2) statistic, for selecting the most important features. The selected features are then fed into classification with the logistic regression (LR) and k-nearest neighbor (KNN) algorithms, and the classification accuracy (AC) of LR is compared with that of KNN in Python on eight datasets to determine which combination performs best when feature selection is applied. The study concludes that feature selection significantly improves the analysis and visualization of the data by removing redundant features and features that do not affect the target. After several comparisons, the study recommends SFMLR: SFM with the RFI algorithm for feature selection, combined with the LR algorithm for classification. The proposal proved its efficacy in comparison with results from the recent literature.
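A filter-style chi-square feature scorer of the kind the SP/Chi2 method refers to can be sketched in numpy. This is a simplified stand-in assuming non-negative features; the toy matrix is invented for illustration.

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-square relevance score per (non-negative) feature: compare the
    per-class feature sums against the sums expected if feature and class
    were independent; higher scores mean stronger class association."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes = np.unique(y)
    observed = np.array([X[y == c].sum(axis=0) for c in classes])
    class_prob = np.array([(y == c).mean() for c in classes])
    expected = np.outer(class_prob, X.sum(axis=0))
    return ((observed - expected) ** 2 / expected).sum(axis=0)

# toy data: column 0 separates the classes, column 1 is uninformative
X = [[2, 1], [2, 1], [0, 1], [0, 1]]
y = [0, 0, 1, 1]
scores = chi2_scores(X, y)
```

Features are then kept by ranking these scores (top percentile, or above a threshold), which is the reduction step that shrinks the data fed to LR/KNN.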
Funding: The National Natural Science Foundation of China under contract No. 41806048; the Open Fund of the Hubei Key Laboratory of Marine Geological Resources under contract No. MGR202009; the Key Laboratory of Deep-Earth Dynamics of the Ministry of Natural Resources, Institute of Geology, Chinese Academy of Geological Sciences, under contract No. J1901-16; the Aoshan Science and Technology Innovation Project of the Pilot National Laboratory for Marine Science and Technology (Qingdao) under contract No. 2015ASKJ03-Seabed Resources; the Korea Institute of Ocean Science and Technology (KIOST) under contract No. PE99741.
Abstract: The South Yellow Sea basin is filled with Mesozoic-Cenozoic continental sediments overlying pre-Palaeozoic and Mesozoic-Palaeozoic marine sediments. Conventional multi-channel seismic data cannot describe the velocity structure of the marine residual basin in detail because of strong energy shielding at the top interface of the marine sediments, which has limited understanding of their distribution and lithology. In this study, we present seismic tomography data from ocean bottom seismographs that describe the NEE-trending velocity distributions of the basin. The results indicate strong velocity variations at shallow crustal levels. Horizontal velocity bodies correlate well with surface geological features, and multi-layer features exist in the vertical velocity framework (depth: 0-10 km). Analyses of the velocity model, gravity data, magnetic data, multichannel seismic profiles, and drilling data show that high-velocity anomalies (>6.5 km/s) at small (thickness: 1-2 km) and large (thickness: >5 km) scales were caused by igneous complexes in the multi-layer structure, which were active during the Palaeogene. Possible locations of well-preserved Mesozoic and Palaeozoic marine strata are limited to the Central Uplift and the western part of the Northern Depression along the wide-angle ocean bottom seismograph array. Following the Indosinian movement, strong compression in the Northern Depression during the extensional phase caused folding in the middle of the survey line. This study is useful for reconstructing the regional tectonic evolution and delineating the distribution of the marine residual basin in the South Yellow Sea basin.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 91858215 and 41906048).
Abstract: The Solomon Sea Basin is a Cenozoic back-arc spreading basin within the convergence system of the Pacific and Indo-Australian plates. Against the background of subduction polarity reversal, the Solomon Sea Basin gradually acquired a rhombic morphology as the basin subducted along the New Britain Trench and the Trobriand Trough. By analyzing vertical gravity gradient, natural earthquake, and seismic reflection data, this study determines the structural characteristics of the Solomon Sea Basin. The tectonics of the basin are characterized by the original spreading structure in its central part in addition to the structure induced by the latest subduction along the basin margin. The original spreading structure consists of east-west-trending linear grabens and horsts controlled by normal faults formed during the basin's expansion period. As a result of the subduction and slab-pull of the Solomon Sea Basin, extensional structural belts parallel to the New Britain Trench formed along the basin margin.
Funding: This study was supported by the National Natural Science Foundation of China under the project "Research on the Dynamic Location of Receiver Points and Wave Field Separation Technology Based on Deep Learning in OBN Seismic Exploration" (No. 42074140).
Abstract: Seismic data acquisition is developing toward high-precision, high-density methods. However, complex natural environments and cultural factors in many exploration areas make uniform, dense acquisition difficult, so complete seismic data cannot always be collected. Data reconstruction is therefore required during processing to ensure imaging accuracy. Deep learning, a rapidly developing field, offers clear advantages in feature extraction and modeling. In this study, a convolutional neural network (CNN) deep learning algorithm is applied to seismic data reconstruction. Based on the CNN algorithm and the characteristics of seismic data acquisition, two training strategies, supervised and unsupervised, are designed to reconstruct sparsely acquired seismic records. First, a supervised learning strategy is proposed for labeled data, in which complete seismic data are segmented as the input of the training set and randomly sampled before each training pass, increasing the number of samples and the richness of features. Second, an unsupervised learning strategy based on large samples is proposed for unlabeled data, and a rolling segmentation method is used to update (pseudo) labels and training parameters during training. Reconstruction tests on simulated and field data show that the CNN-based deep learning algorithm achieves better reconstruction quality and higher accuracy than compressed sensing based on the Curvelet transform.
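The random-sampling step of the supervised strategy, pairing a complete patch with a randomly decimated copy, can be sketched as below. A minimal numpy sketch: the patch size and keep ratio are illustrative assumptions, not the paper's training configuration.

```python
import numpy as np

def decimate_traces(patch, keep_ratio, rng):
    """Randomly zero whole traces (columns) of a complete seismic patch to
    simulate sparse acquisition; the (masked, complete) pair then serves
    as one supervised training sample for the reconstruction network."""
    n_traces = patch.shape[1]
    n_keep = max(1, int(round(keep_ratio * n_traces)))
    keep = rng.choice(n_traces, size=n_keep, replace=False)
    masked = np.zeros_like(patch)
    masked[:, keep] = patch[:, keep]
    return masked, keep

rng = np.random.default_rng(0)
patch = np.arange(20.0).reshape(4, 5)   # toy 4-sample x 5-trace patch
masked, kept = decimate_traces(patch, 0.6, rng)
```

Resampling the mask before each pass means the network sees a different gap pattern on the same patch every epoch, which is what "increasing the number of samples and the richness of features" amounts to.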
Funding: Supported by the National Key R&D Program of China (No. 2016YFC0303900), the Laoshan Laboratory (Nos. MGQNLM-KF201807, LSKJ202203604), and the National Natural Science Foundation of China (No. 42106072).
Abstract: The Kuiyang-ST2000 deep-towed high-resolution multichannel seismic system was designed by the First Institute of Oceanography, Ministry of Natural Resources (FIO, MNR). The system mainly comprises a plasma spark source (source level: 216 dB; main frequency: 750 Hz; frequency bandwidth: 150-1200 Hz) and a towed hydrophone streamer with 48 channels. Because the source and the towed hydrophone streamer move constantly according to the towing configuration, accurately positioning the hydrophone array and applying moveout correction to deep-towed multichannel seismic data before imaging are challenging. First, according to the characteristics of the system and the shape of the towed streamer in deep water, a travel-time positioning method was used to construct the hydrophone streamer shape, and the results were corrected using polynomial curve fitting. Then, a new data-processing workflow for Kuiyang-ST2000 data was introduced, mainly including float datum setting, residual static correction, and phase-based moveout correction, which allows the imaging algorithms of conventional marine seismic data processing to be extended to deep-towed seismic data. We successfully applied the Kuiyang-ST2000 system and the data-processing methodology to a gas hydrate survey of the Qiongdongnan and Shenhu areas in the South China Sea; the results show that the profile has very high vertical and lateral resolutions (0.5 m and 8 m, respectively), providing full and accurate details of gas hydrate-related and geohazard sedimentary and structural features in the South China Sea.
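The polynomial curve fitting used to smooth the travel-time-positioned streamer shape can be sketched as follows. The offset/depth values are invented for illustration, not Kuiyang-ST2000 measurements, and the quadratic order is an assumption.

```python
import numpy as np

# Hypothetical (offset, depth) estimates for hydrophone groups along the
# towed streamer, e.g. as produced by travel-time positioning.
offsets = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])            # m
depths = np.array([2000.0, 2004.1, 2007.8, 2011.2, 2013.9, 2016.5])  # m

coeffs = np.polyfit(offsets, depths, deg=2)  # low-order polynomial shape model
smooth = np.polyval(coeffs, offsets)         # fitted depth at each group
```

The low-order fit suppresses per-channel positioning jitter while honoring the gentle sag of the streamer, giving a continuous shape usable for moveout correction.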
Abstract: Paleostress plays a significant role in controlling the formation, accumulation, and distribution of reservoirs, and can be an important factor controlling hydrocarbon production from unconventional reservoirs. In this study, we use 3D seismic reflection data to perform a slip-tendency-based stress inversion and determine the stress field in the basement of the northern slope area of the Bongor Basin. The dataset required for this technique is readily available in oil and gas companies. The stress inversion results from the basement of the northern slope area of the Bongor Basin show that the maximum principal stress axis (σ1) is oriented vertically, the intermediate principal stress axis (σ2) is oriented N18°, and the minimum principal stress axis (σ3) is oriented N105°, with σ2/σ1 = 0.60 and σ3/σ1 = 0.29. These findings provide significant information for understanding fault reactivation at the critical stage of hydrocarbon accumulation and the regional tectonic evolution.
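Slip tendency, the quantity such an inversion is built on, is the ratio of resolved shear to normal stress on a fault plane. A minimal numpy sketch using the stress ratios reported above (σ2/σ1 = 0.60, σ3/σ1 = 0.29, with σ1 normalized to 1); the test plane orientation is an arbitrary illustration, not a mapped Bongor fault.

```python
import numpy as np

def slip_tendency(stress, normal):
    """Slip tendency Ts = tau / sigma_n: resolved shear over normal stress
    on a plane with unit normal `normal`, for stress tensor `stress`
    (compression positive)."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    t = np.asarray(stress, dtype=float) @ n       # traction on the plane
    sigma_n = t @ n                               # normal component
    tau = np.sqrt(max(t @ t - sigma_n**2, 0.0))   # shear component
    return tau / sigma_n

# principal stress tensor with the reported ratios: sigma1 vertical (z),
# sigma2/sigma1 = 0.60 (y), sigma3/sigma1 = 0.29 (x), sigma1 = 1
S = np.diag([0.29, 0.60, 1.0])
ts = slip_tendency(S, [1.0, 0.0, 1.0])  # plane dipping 45 deg toward sigma3
```

Faults whose orientation maximizes Ts under the inverted stress field are the ones most prone to reactivation, which is the link to the fault-reactivation discussion above.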
Abstract: The Growth Value Model (GVM) proposes theoretical closed-form formulas, consisting of Return on Equity (ROE) and the Price-to-Book ratio (P/B), for fair stock prices and expected rates of return. Although regression analysis can verify these closed-form formulas, they cannot be explored intuitively by classical quintile or decile sorting approaches because of their multi-factor and dynamic nature. This article uses visualization techniques to explore GVM intuitively. A key finding and contribution of this paper is the concept of the smart frontier, which can be regarded as the reasonable lower limit of P/B at a specific ROE, obtained by exploring fair P/B with an ROE-P/B 2D dynamic process visualization. The coefficients in the formula can be determined by quantile regression analysis on market data. The moving paths of ROE and P/B in the current and subsequent quarters show that portfolios to the lower right of the curve approach the curve and stagnate there after the portfolios are formed. Furthermore, exploring expected rates of return with an ROE-P/B-Return 3D dynamic process visualization shows that data outside the lower-right edge of the smart frontier have positive quarterly returns not only in quarter t+1 but also in quarter t+2. The farther the data in quarter t are from the smart frontier, the larger the returns in quarters t+1 and t+2.
Abstract: The Pennsylvanian unconformity, a detrital surface, separates the Permian-aged strata from the Lower Paleozoic in the Central Basin Platform. Seismic data interpretation indicates that it is an angular unconformity, overlying multiple normal faults and accompanied by a thrust fault that maximizes the region's structural complexity. Additionally, the Pennsylvanian angular unconformity creates pinch-outs between the beds above and below. We computed and analyzed the spectral decomposition and reflector convergence attributes to characterize the angular unconformity and faults. The spectral decomposition attribute divides the broadband seismic data into different spectral bands to resolve thin beds and show thickness variations, whereas the reflector convergence attribute highlights the location and direction of the pinch-outs as they dip south at angles between 2 and 6 degrees. After reviewing findings from RGB blending of the spectrally decomposed frequencies along the Pennsylvanian unconformity, we observed channel-like features and multiple linear bands in addition to the faults and pinch-outs. The identified linear bands could result from different lithologies associated with the tilting of the beds, and the faults may influence hydrocarbon migration or act as flow barriers entrapping hydrocarbon accumulations. Identification of this angular unconformity and its associated features in the study area is vital for the following reasons: 1) the unconformity surface represents a natural stratigraphic boundary; 2) the stratigraphic pinch-outs act as fluid-flow connectivity boundaries; 3) the areal extents of compartmentalized reservoir boundaries created by the angular unconformity are better defined; and 4) fault displacements are better understood when planning well locations, as faults can be flow barriers or permeability conduits depending on facies heterogeneity and/or the seal effectiveness of a fault, which can affect hydrocarbon production. The methodology utilized in this study is a further step in reservoir characterization and can be used to expand our knowledge of the Goldsmith Field.
Abstract: Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming due to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not seamlessly integrate with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependence on Excel's graphical user interface (GUI) for automation can limit scalability and reproducibility compared to Python's scripting capabilities. This paper presents an integration solution that empowers non-programmers to leverage Python's capabilities within the familiar Excel environment, enabling users to perform advanced data analysis and automation tasks without extensive programming knowledge. Based on feedback solicited from non-programmers who tested the integration solution, the case study evaluates its ease of implementation, performance, and compatibility with different Excel versions.
文摘Gestational Diabetes Mellitus (GDM) is a significant health concern affecting pregnant women worldwide. It is characterized by elevated blood sugar levels during pregnancy and poses risks to both maternal and fetal health. Maternal complications of GDM include an increased risk of developing type 2 diabetes later in life, as well as hypertension and preeclampsia during pregnancy. Fetal complications may include macrosomia (large birth weight), birth injuries, and an increased risk of developing metabolic disorders later in life. Understanding the demographics, risk factors, and biomarkers associated with GDM is crucial for effective management and prevention strategies. This research aims to address these aspects comprehensively through the analysis of a dataset comprising 600 pregnant women. By exploring the demographics of the dataset and employing data modeling techniques, the study seeks to identify key risk factors associated with GDM. Moreover, by analyzing various biomarkers, the research aims to gain insights into the physiological mechanisms underlying GDM and its implications for maternal and fetal health. The significance of this research lies in its potential to inform clinical practice and public health policies related to GDM. By identifying demographic patterns and risk factors, healthcare providers can better tailor screening and intervention strategies for pregnant women at risk of GDM. Additionally, insights into biomarkers associated with GDM may contribute to the development of novel diagnostic tools and therapeutic approaches. Ultimately, by enhancing our understanding of GDM, this research aims to improve maternal and fetal outcomes and reduce the burden of this condition on healthcare systems and society. However, it’s important to acknowledge the limitations of the dataset used in this study. 
Further research utilizing larger and more diverse datasets, perhaps employing advanced data analysis techniques such as Power BI, is warranted to corroborate and expand upon the findings of this research. This underscores the ongoing need for continued investigation into GDM to refine our understanding and improve clinical management strategies.
文摘Data breaches have massive consequences for companies, affecting them financially and undermining their reputation, which poses significant challenges to online security and the long-term viability of businesses. This study analyzes trends in data breaches in the United States, examining the frequency, causes, and magnitude of breaches across various industries. We document that data breaches are increasing, with hacking emerging as the leading cause. Our descriptive analyses explore factors influencing breaches, including security vulnerabilities, human error, and malicious attacks. The findings provide policymakers and businesses with actionable insights to bolster data security through proactive audits, patching, encryption, and response planning. By better understanding breach patterns and risk factors, organizations can take targeted steps to enhance protections and mitigate the potential damage of future incidents.
Abstract: This article discusses the current status and development strategies of computer science and technology in the context of big data. First, it explains the relationship between big data and computer science and technology, focusing on the current applications of computer science and technology in big data, including data storage, data processing, and data analysis. It then proposes development strategies for big data processing. Computer science and technology play a vital role in big data processing by providing strong technical support.
Funding: Supported by the National Natural Science Foundation of China (Nos. 41506046, 41376060, 41706054); the Opening Foundation of the Key Laboratory of Ocean and Marginal Sea Geology, CAS (No. MSGL15-05); WPOS (No. XDA11030102-02); and the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA13010101).
Abstract: The Pearl River Estuary (PRE) is located at the onshore-offshore transition zone between South China and the South China Sea Basin, and it is of great value in discussing the tectonic relationships between the South China block and the South China Sea block and the seismic activity along the offshore active faults in the PRE. However, research on the geometric characteristics of offshore faults in this area is extremely scarce. To investigate the offshore fault distribution and geometric features in the PRE in greater detail, we acquired thirteen seismic reflection profiles in 2015. Combining analysis of the seismic reflection and free-air gravity anomaly data, this paper reveals the location, continuity, and geometry of the littoral fault zone and other offshore faults in the PRE. The littoral fault zone is composed of the major Dangan Islands fault and several parallel, high-angle normal faults, which mainly trend northeast to northeast-to-east and dip to the southeast with large displacements. The fault zone is divided into three segments by the northwest-trending faults. Moreover, the basement depth around the Dangan Islands is very shallow, but it increases suddenly westward and southward of the islands. As a result, the islands and neighboring areas have become places where stress accumulates easily. The seismogenic pattern of this area is closely related to the combined effect of the intersecting faults together with the low-velocity layer.
Funding: Funded by the Project of the China Geological Survey (No. 1212010916040) and the Sichuan Science and Technology Program (Nos. 2017JY0051 and 2018GZ0200).
Abstract: Field seismic data are disturbed by interference, giving a low signal-to-noise ratio (SNR), which is a disadvantage for seismic data interpretation. Removing noise from seismic data is therefore important. Independent component analysis (ICA) can remove most of the noise interference. However, ICA has some drawbacks in noise reduction because denoising requires the components of the seismic data to be mutually independent. To address these drawbacks, this paper proposes an improved ICA algorithm for noise reduction. Simulation experiments show that the best decomposition level for the new algorithm is 3. Finally, the proposed improved ICA is applied to actual seismic data. The results show that it can effectively eliminate most seismic noise, such as random noise, linear interference, and surface waves. The improved ICA is not only easy to apply for denoising but also has excellent mathematical properties.
Abstract: A new method is introduced to suppress noise in seismic data processing. Based on the subtle difference in shape between the noise and the actual signal, we introduce morphological filtering into seismic data processing. In terms of both waveform shape and S/N, the effect of morphological filtering is superior to that of other methods such as mid-value filtering and neighbor-average filtering. The SNR of the signal after morphological filtering is comparatively high, and the precision of the filtered seismic data is also high: the characteristics of the actual signal, such as frequency and amplitude, are preserved. We give an example of real seismic data processing using morphological filtering, in which the actual signal is retained while the random high-intensity noise is removed.
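The shape-based idea, features narrower than the structuring element are removed while broader signal shapes survive, can be sketched with a 1-D opening-closing filter. This is a minimal numpy sketch with toy traces, not the paper's actual implementation.

```python
import numpy as np

def erode(x, size):
    """Grayscale erosion: running minimum over a flat structuring element."""
    xp = np.pad(x, size // 2, mode='edge')
    return np.array([xp[i:i + size].min() for i in range(len(x))])

def dilate(x, size):
    """Grayscale dilation: running maximum over the same element."""
    xp = np.pad(x, size // 2, mode='edge')
    return np.array([xp[i:i + size].max() for i in range(len(x))])

def open_close(x, size=3):
    """Opening (erode then dilate) clips narrow positive spikes; the
    subsequent closing (dilate then erode) fills narrow negative ones.
    Features wider than the element pass through largely unchanged."""
    opened = dilate(erode(x, size), size)
    return erode(dilate(opened, size), size)

spike = np.array([0.0, 0.0, 0.0, 9.0, 0.0, 0.0, 0.0])   # narrow "noise"
broad = np.array([0.0, 0.0, 5.0, 5.0, 5.0, 0.0, 0.0])   # wide "signal"
clean_spike = open_close(spike, size=3)
clean_broad = open_close(broad, size=3)
```

The one-sample spike is removed entirely while the three-sample plateau is returned unchanged, illustrating why the method preserves signal amplitude and frequency content better than simple averaging.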
Funding: Supported by the National Natural Science Foundation of China (No. 11375186).
Abstract: The control system of Hefei Light Source II (HLS-II) is a distributed system based on the Experimental Physics and Industrial Control System (EPICS). The existing archiving system requires central configuration files to be maintained: when process variables in the control system are added, removed, or updated, the configuration files must be modified manually to stay consistent with the control system. This paper presents a new method for data archiving that configures the archiving parameters automatically. The system uses a microservice architecture to integrate the EPICS Archiver Appliance and RecSync, so it can collect all the archiving meta-configuration from the distributed input/output controllers and enter it into the EPICS Archiver Appliance automatically. Furthermore, we developed a web-based GUI for automatic visualization of real-time and historical data. The system is currently being commissioned at HLS-II. The results indicate that the new archiving system is reliable and convenient to operate, and its maintenance-free operation is valuable for large-scale scientific facilities.