Earthquakes are classified as one of the most devastating natural disasters, with catastrophic effects on the environment, lives, and property. There has been increasing interest in predicting earthquakes and in gaining a comprehensive understanding of the mechanisms that underlie their generation, yet earthquakes remain the least predictable natural disaster. Satellite data, the Global Positioning System, interferometric synthetic aperture radar (InSAR), and seismometers such as microelectromechanical system (MEMS) seismometers, ocean-bottom seismometers, and distributed acoustic sensing systems have all been used to predict earthquakes with a high degree of success. Despite advances in seismic wave recording, storage, and analysis, predicting earthquake time, location, and magnitude remains difficult. On the other hand, new developments in artificial intelligence (AI) and the Internet of Things (IoT) have shown promising potential to deliver more insights and predictions. Thus, this article reviews the use of AI-driven models and IoT-based technologies for earthquake prediction, the limitations of current approaches, and open research issues. The review discusses earthquake prediction setbacks due to insufficient data, inconsistencies, the diversity of earthquake precursor signals, and the Earth's geophysical composition. Finally, this study examines potential approaches or solutions that scientists can employ to address the challenges they face in earthquake prediction. The analysis is based on the successful application of AI and IoT in other fields.
Machine learning methods dealing with the spatial auto-correlation of the response variable have garnered significant attention in the context of spatial prediction. Nonetheless, under these methods, the relationship between the response variable and the explanatory variables is assumed to be homogeneous throughout the entire study area. This assumption, known as spatial stationarity, is very questionable in real-world situations due to the influence of contextual factors. Therefore, allowing the relationship between the target variable and the predictor variables to vary spatially within the study region is more reasonable. However, existing machine learning techniques that account for a spatially varying relationship between the dependent variable and the predictor variables do not capture the spatial auto-correlation of the dependent variable itself. Moreover, under these techniques, local machine learning models are effectively built using relatively few observations, which can lead to well-known issues such as over-fitting and the curse of dimensionality. This paper introduces a novel geostatistical machine learning approach in which both the spatial auto-correlation of the response variable and the spatial non-stationarity of the regression relationship between the response and predictor variables are explicitly considered. The basic idea is to rely on the local stationarity assumption to build a collection of local machine learning models while leveraging the local spatial auto-correlation of the response variable to locally augment the training dataset. The proposed method's effectiveness is showcased via experiments conducted on synthetic spatial data with known characteristics as well as real-world spatial data. In the synthetic (resp. real) case study, the proposed method's predictive accuracy, as indicated by the Root Mean Square Error (RMSE) on the test set, is 17% (resp. 7%) better than that of popular machine learning methods dealing with the response variable's spatial auto-correlation. Additionally, this method is not only valuable for spatial prediction but also offers a deeper understanding of how the relationship between the target and predictor variables varies across space, and it can even be used to investigate the local significance of predictor variables.
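The core idea of fitting a separate model in each local neighbourhood under the local stationarity assumption can be sketched as follows. This is an illustrative simplification only: it uses plain local linear regression on the k nearest training points and omits the paper's auto-correlation-based training augmentation; all names are hypothetical.

```python
import numpy as np

def local_regression_predict(coords_train, X_train, y_train, coords_new, X_new, k=30):
    """For each prediction site, fit a linear model on its k nearest training
    samples (local stationarity assumption) and predict from that local fit."""
    preds = []
    for c, x in zip(coords_new, X_new):
        d = np.linalg.norm(coords_train - c, axis=1)
        idx = np.argsort(d)[:k]                      # local training neighbourhood
        A = np.c_[np.ones(len(idx)), X_train[idx]]   # design matrix with intercept
        beta, *_ = np.linalg.lstsq(A, y_train[idx], rcond=None)
        preds.append(beta[0] + x @ beta[1:])
    return np.array(preds)
```

In the paper's method each local training set would additionally be augmented with spatially auto-correlated pseudo-observations before fitting, mitigating the small-sample issue the abstract highlights.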
Ensuring the reliability of pipe pile designs under earthquake loading necessitates an accurate determination of lateral displacement and bending moment, typically achieved through complex numerical modeling to address the intricacies of soil-pile interaction. Despite recent advancements in machine learning techniques, there is a persistent need for data-driven models that can predict these parameters without numerical simulations, given the difficulties of conducting correct numerical simulations and the need for constitutive modelling parameters that are not readily available. This research presents novel lateral displacement and bending moment predictive models for closed- and open-ended pipe piles, employing a Genetic Programming (GP) approach. Utilizing a soil dataset extracted from the existing literature, comprising 392 data points for both pile types embedded in cohesionless soil and subjected to earthquake loading, the study intentionally limited the input parameters to three features to enhance model simplicity: Standard Penetration Test (SPT) corrected blow count (N60), Peak Ground Acceleration (PGA), and pile slenderness ratio (L/D). Model performance was assessed via the coefficient of determination (R^(2)), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE), with R^(2) values ranging from 0.95 to 0.99 for the training set and from 0.92 to 0.98 for the testing set, indicating high predictive accuracy. Finally, the study concludes with a sensitivity analysis evaluating the influence of each input parameter across the different pile types.
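The three goodness-of-fit measures reported here (and in several of the abstracts below) are standard; a minimal sketch of how R², RMSE, and MAE are typically computed:

```python
import numpy as np

def r2(y, yhat):
    """Coefficient of determination: 1 is a perfect fit."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y, yhat):
    """Root Mean Squared Error."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    """Mean Absolute Error."""
    return float(np.mean(np.abs(y - yhat)))
```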
Machine learning (ML) algorithms are frequently used in landslide susceptibility modeling. Different data handling strategies may generate variations in landslide susceptibility modeling, even when using the same ML algorithm. This research aims to compare combinations of inventory data handling, cross validation (CV), and hyperparameter tuning strategies for generating landslide susceptibility maps. The results are expected to provide a general strategy for landslide susceptibility modeling using ML techniques. The authors employed eight landslide inventory data handling scenarios to convert a landslide polygon into a landslide point, i.e., the landslide point is located on the toe (minimum height), on the scarp (maximum height), at the center of the landslide, randomly inside the polygon (1 point), randomly inside the polygon (3 points), randomly inside the polygon (5 points), randomly inside the polygon (10 points), or on a 15 m sampling grid. Random forest models using CV with non-spatial hyperparameter tuning, spatial CV with spatial hyperparameter tuning, and spatial CV with forward feature selection and no hyperparameter tuning were applied for each data handling strategy. These combinations generated 24 random forest ML workflows, which were applied to a complete inventory of 743 landslides triggered by Tropical Cyclone Cempaka (2017) in Pacitan Regency, Indonesia, together with 11 landslide controlling factors. The results show that grid sampling with spatial CV and spatial hyperparameter tuning is favorable because this strategy can minimize overfitting, generate a relatively high-performance predictive model, and reduce the appearance of susceptibility artifacts in the landslide area. Careful inventory data handling, CV, and hyperparameter tuning strategies should be considered in landslide susceptibility modeling to increase the applicability of landslide susceptibility maps in practice.
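The distinction between random and spatial CV is that spatial CV holds out contiguous regions rather than random samples, which reduces the optimistic bias caused by spatial auto-correlation. A minimal sketch of one way spatial folds can be formed (by quantile blocks along one coordinate); the study's actual partitioning scheme may differ:

```python
import numpy as np

def spatial_block_folds(coords, n_blocks=4):
    """Assign samples to contiguous spatial blocks along x so that each CV fold
    holds out one whole region instead of randomly scattered points."""
    edges = np.quantile(coords[:, 0], np.linspace(0, 1, n_blocks + 1))
    block = np.clip(np.searchsorted(edges, coords[:, 0], side="right") - 1,
                    0, n_blocks - 1)
    for b in range(n_blocks):
        test = np.where(block == b)[0]
        train = np.where(block != b)[0]
        yield train, test
```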
Alteration minerals and silicification are typically associated with a variety of ore mineralizations and can be detected with multispectral remote sensing sensors as indicators for mineral exploration. In this investigation, layers derived from the Visible Near-Infrared (VNIR), Short-Wave Infrared (SWIR), and Thermal Infrared (TIR) bands of the ASTER satellite sensor were fused to detect alteration minerals and silicification in the eastern Kerdous inlier for cupriferous mineralization exploration. Several image processing techniques were applied, namely Band Ratio (BR), Selective Principal Component Analysis (SPCA), and Constrained Energy Minimization (CEM). Initially, the BR and SPCA processing results revealed several alteration zones, including argillic, phyllic, dolomitization, and silicification zones as well as iron oxides and hydroxides. These zones were then mapped at the sub-pixel level using the CEM technique. Pyrophyllite, kaolinite, dolomite, illite, muscovite, montmorillonite, topaz, and hematite were revealed, displaying a significant distribution in relation to the lithological units of the eastern Amlen region and mineral potential zones previously detected using HyMap imaging spectroscopy. A close spatial association between iron oxide and hydroxide minerals and argillic and phyllic alteration was detected, and strong silicification was detected around the doleritic dyke unit in the Jbel Lkest area. A weighted overlay approach was used to integrate the hydrothermal alteration minerals and silicification, which allowed the elaboration of a new mineral alteration map of the study area with five alteration intensities. ASTER data and the various processing techniques employed allowed practical and cost-effective mapping of alteration features, which corroborates well with field surveys and X-ray diffraction analysis. Therefore, ASTER data and the employed processing techniques offer a practical approach for mineral prospecting in comparable settings.
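Of the three techniques, the band ratio is conceptually the simplest: an element-wise division of two co-registered bands, often contrast-stretched for display, so that pixels where the numerator band reflects strongly relative to the denominator band stand out. A minimal numpy sketch (the stretching and any band choices are illustrative, not the paper's exact recipe):

```python
import numpy as np

def band_ratio(num, den, eps=1e-6):
    """Element-wise ratio of two co-registered bands, stretched to [0, 1].
    eps guards against division by zero in shadowed/no-data pixels."""
    r = num / (den + eps)
    return (r - r.min()) / (r.max() - r.min() + eps)
```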
This research aims to evaluate hydro-meteorological data from the Yamuna River Basin, Uttarakhand, India, utilizing Extreme Value Distribution Frequency Analysis and the Markov Chain Approach. This method assesses persistence and allows for combinatorial probability estimations such as initial and transitional probabilities. The hydrologic data was generated in situ and received from Uttarakhand Jal Vidut Nigam Limited (UJVNL), and meteorological data was acquired from NASA's MERRA-2 archive. A total of sixteen years (2005-2020) of data was used to forecast daily precipitation from 2020 to 2022. MERRA-2 products are utilized as observed and forecast values for daily precipitation throughout the monsoon season, which runs from July to September. Markov Chain and Long Short-Term Memory (LSTM) models were used to obtain observed and anticipated values for daily rainfall during the monsoon seasons (July to September) of 2020, 2021, and 2022. According to the test findings, the artificial intelligence technique cannot anticipate future regional meteorological formations; the correlation coefficient R^(2) is around 0.12. According to the randomly verified precipitation data, the Markov Chain model has a success rate of 79.17 percent. The results suggest that extended return periods should be a warning sign for drought and flood risk in the Himalayan region. This study provides a better understanding of the water budget, climate change variability, and the impact of global warming, ultimately leading to improved water resource management, better emergency planning, and the establishment of Early Warning Systems (EWS) for extreme occurrences such as cloudbursts, flash floods, and landslide hazards in the complex Himalayan region.
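The initial and transitional probabilities of a first-order Markov chain over daily wet(1)/dry(0) states can be estimated by simple counting; a minimal sketch (the study's exact state definitions and thresholds may differ):

```python
import numpy as np

def initial_probability(wet_sequence):
    """Unconditional probability that a day is wet."""
    return float(np.mean(wet_sequence))

def transition_matrix(wet_sequence):
    """First-order transition probabilities for a binary wet/dry daily series:
    P[i, j] = probability of moving from state i today to state j tomorrow."""
    counts = np.zeros((2, 2))
    for a, b in zip(wet_sequence[:-1], wet_sequence[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)
```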
In this study, we present an artificial neural network (ANN)-based approach for travel-time tomography of a volcanic edifice under sparse ray coverage. We employ ray tracing to simulate the propagation of seismic waves through the heterogeneous medium of a volcanic edifice, and an inverse modeling algorithm that uses an ANN to estimate the velocity structure from the "observed" travel-time data. The performance of the approach is evaluated through a 2-dimensional numerical study that simulates i) an active-source seismic experiment with a few (explosive) sources placed on one side of the edifice and a dense line of receivers placed on the other side, and ii) earthquakes located inside the edifice with receivers placed on both sides of the edifice. The results are compared with those obtained from conventional damped linear inversion. The average Root Mean Square Error (RMSE) between the input and output models is approximately 0.03 km/s for the ANN inversions, whereas it is about 0.4 km/s for the linear inversions, demonstrating that the ANN-based approach outperforms the classical approach, particularly in situations with sparse ray coverage. Our study emphasizes the advantages of employing a relatively simple ANN architecture in conjunction with second-order optimizers to minimize the loss function. Compared to using first-order optimizers, our ANN architecture shows a ~25% reduction in RMSE. The ANN-based approach is also computationally efficient. We observed that even though the ANN is trained on completely random velocity models, it is still capable of resolving previously unseen anomalous structures within the edifice with about 5% anomalous discrepancies, making it a potentially valuable tool for the detection of low-velocity anomalies related to magmatic intrusions or mush.
Emeralds, the green variety of beryl, occur as gem-quality specimens in over fifty deposits globally. While digital traceability methods for emerald have limitations, sample-based approaches offer robust alternatives, particularly for determining the geographic origin of emerald. Three factors make emerald suitable for provenance studies and hence for developing models for origin determination. First, the diverse elemental chemistry of emerald at minor (<1 wt%) and trace levels (<1 to hundreds of ppmw) exhibits unique inter-element fractionations between global deposits. Second, minimally destructive techniques, including laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), enable measurement of these diagnostic elemental signatures. Third, when applied to extensive datasets, machine learning (ML) techniques enable the creation of predictive models and statistical discrimination with adequate characterization of the deposits. This study employs a carefully selected dataset comprising more than 1000 LA-ICP-MS analyses of gem-quality emeralds, enriched with new analyses; it represents the largest dataset available for global emerald deposits. We conducted unsupervised exploratory analysis using Principal Component Analysis (PCA). For machine learning-based classification, we employed Support Vector Machine Classification (SVM-C), achieving an initial accuracy rate of 79%. This was enhanced to 96.8% through the use of hierarchical SVM-C with PCA filters as our modeling approach. The ML models were trained using the concentrations of eight statistically significant elements (Li, V, Cr, Fe, Sc, Ga, Rb, Cs). By leveraging high-quality LA-ICP-MS data and ML techniques, accurate identification of the geographical origin of emerald becomes possible. These models are important for accurate provenance of emerald and, from a geochemical perspective, for understanding the formation environments of beryl-bearing pegmatites and shales.
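The PCA-plus-SVM classification stage can be sketched with scikit-learn (assumed available). The data below are synthetic stand-ins for the eight-element LA-ICP-MS concentration table, and this single flat classifier omits the paper's hierarchical SVM-C arrangement:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Synthetic stand-in for an 8-column trace-element table (Li, V, Cr, Fe, Sc, Ga, Rb, Cs),
# with two well-separated artificial "deposits" as class labels.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (40, 8)), rng.normal(3, 1, (40, 8))])
y = np.array([0] * 40 + [1] * 40)

# Standardize, reduce with PCA, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
clf.fit(X, y)
acc = clf.score(X, y)
```

In the hierarchical variant described in the abstract, classifiers like this would be nested, with PCA filters separating broad deposit groups before finer within-group discrimination.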
In recent years, there has been growing interest in using artificial intelligence (AI) for rainfall-runoff modelling, as it has shown promising adaptability in this context. The current study used six distinct AI models to simulate monthly rainfall-runoff in the Bardha watershed, India: the artificial neural network (ANN), k-nearest neighbour regression (KNN), extreme gradient boosting (XGBoost) regression, random forest regression (RF), convolutional neural network (CNN), and convolutional recurrent neural network (CNN-RNN). Within the 2003-2009 study period, the years 2003-2007 served as the calibration or training period, while 2008-2009 served as the validation or testing period. The available rainfall, maximum and minimum temperature, and discharge data were collected and used in the models. To compare model performance, five criteria were employed: R^(2), NSE, MAE, RMSE, and PBIAS. The CNN-RNN model simulates rainfall-runoff in the Bardha watershed best in both the training and testing periods (training: R^(2) is 0.99, NSE is 0.99, MAE is 1.76, RMSE is 3.11, and PBIAS is 1.45; testing: R^(2) is 0.97, NSE is 0.97, MAE is 2.05, RMSE is 3.60, and PBIAS is 3.94). These results demonstrate the superior performance of the CNN-RNN model in simulating monthly rainfall-runoff compared to the other models used in the study. The findings suggest that the CNN-RNN model could be a valuable tool for applications related to sustainable water resource management, flood control, and environmental planning.
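NSE and PBIAS, the two hydrology-specific criteria among the five, can be computed as follows. This uses one common sign convention for PBIAS (positive values indicate underestimation); the study's convention may differ:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
    no better than predicting the mean of the observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def pbias(obs, sim):
    """Percent bias; positive = model underestimates on average (one convention)."""
    return 100.0 * np.sum(obs - sim) / np.sum(obs)
```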
The connectivity of sandbodies is a key constraint on the exploration effectiveness of the Bohai A Oilfield. Conventional connectivity studies often use methods such as seismic attribute fusion, but the development of contiguous composite sandbodies in this area makes it challenging to characterize connectivity changes with conventional seismic attributes. Aiming at this problem in the Bohai A Oilfield, this study proposes a big data analysis method based on the Deep Forest algorithm to predict sandbody connectivity. First, by compiling the abundant exploration and development sandbody data in the study area, typical sandbodies with reliable connectivity were selected. Then, sensitive seismic attributes were extracted to obtain training samples. Finally, based on the Deep Forest algorithm, a mapping model between attribute combinations and sandbody connectivity was established through machine learning. This method achieves the first quantitative determination of connectivity for contiguous composite sandbodies in the Bohai Oilfield. Compared with conventional connectivity discrimination methods such as high-resolution processing and seismic attribute analysis, this method can incorporate the sandbody characteristics of the study area into the machine learning process and judge connectivity jointly from multiple seismic attributes. The study results show that this method has high accuracy and timeliness in predicting connectivity for contiguous composite sandbodies. Applied to the Bohai A Oilfield, it successfully identified multiple sandbody connectivity relationships and provided strong support for subsequent exploration potential assessment and well placement optimization. This method also provides a new approach for studying sandbody connectivity under similar complex geological conditions.
Pore size analysis plays a pivotal role in unraveling reservoir behavior and its intricate relationship with confined fluids. Traditional methods for predicting pore size distribution (PSD), relying on drilling cores or thin sections, face limitations associated with depth specificity. In this study, we introduce a framework that leverages nuclear magnetic resonance (NMR) log data, encompassing clay-bound water (CBW), bound volume irreducible (BVI), and free fluid volume (FFV), to determine three PSDs (micropores, mesopores, and macropores). Moreover, we establish a robust pore size classification (PSC) system utilizing ternary plots derived from the PSDs. Within the three studied wells, NMR log data is available only for one well (well-A), while conventional well logs are available for all three wells (well-A, well-B, and well-C). This enables PSD predictions for the remaining two wells (B and C). To predict the NMR outputs (CBW, BVI, FFV) for these wells, a two-step deep learning (DL) algorithm is implemented. Initially, three statistical feature selection algorithms (f-classif, f-regression, and mutual-info-regression) identify the conventional well logs most correlated with the NMR outputs in well-A; these algorithms systematically identify and optimize pertinent input features, improving model interpretability and predictive efficacy. All three feature selection algorithms identified four logs as the optimal number of inputs to the DL algorithm, with different combinations of logs for each of the three desired outputs. Subsequently, the CUDA Deep Neural Network Long Short-Term Memory algorithm (CUDNNLSTM), a DL algorithm harnessing the computational power of GPUs, is employed to predict the CBW, BVI, and FFV logs using the optimal logs identified in the preceding step. Estimation of the NMR outputs was performed first in well-A (80% of the data for training and 20% for testing). The correlation coefficients (CC) between the actual and estimated data for the three outputs CBW, BVI, and FFV are 95%, 94%, and 97%, respectively, and the root mean square errors (RMSE) are 0.0081, 0.098, and 0.0089, respectively. To assess the effectiveness of the proposed algorithm, we compared it with two traditional log estimation methods: multiple regression and multi-resolution graph-based clustering. The results demonstrate the superior accuracy of our algorithm over these conventional approaches. This DL-driven approach facilitates PSD prediction grounded in fluid saturation for wells B and C. Ternary plots are then employed for PSCs. Seven distinct PSCs within well-A, obtained from the actual NMR logs (CBW, BVI, FFV), together with an equivalent count within wells B and C obtained from the three predicted logs, are consistently categorized, leading to the identification of seven distinct pore size classification facies (PSCF). This research introduces an advanced approach to pore size classification and prediction, fusing NMR logs with deep learning techniques and extending their application to nearby wells without NMR logs. The resulting PSCFs offer valuable insights for generating precise and detailed 3D reservoir models.
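The first step, scoring conventional logs against an NMR output with a statistical feature selection algorithm, maps directly to scikit-learn's SelectKBest with f_regression (the same family as the f-classif and mutual-info-regression scorers named above). The log matrix and target below are synthetic stand-ins:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

# Synthetic stand-in for a conventional-log matrix (6 hypothetical log columns)
# and one NMR output; the target depends strongly on columns 1 and 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + rng.normal(scale=0.1, size=300)

# Keep the 4 logs with the highest F-statistic against the target.
selector = SelectKBest(score_func=f_regression, k=4).fit(X, y)
chosen = np.sort(selector.get_support(indices=True))
```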
Porosity, tortuosity, specific surface area (SSA), and permeability are four key parameters of reactive transport modeling in sandstone, which are important for understanding solute transport and geochemical reaction processes in sandstone aquifers. These four parameters reflect the characteristics of sandstone pore structure from different perspectives, and traditional empirical formulas cannot predict them accurately due to their complexity and heterogeneity. In this paper, CT images of eleven types of sandstone were first segmented into numerous subsample images; the porosity, tortuosity, SSA, and permeability of the subsamples were calculated, and a dataset was established. 3D convolutional neural network (CNN) models were subsequently established and trained to predict the key reactive transport parameters from the subsample CT images. The results demonstrate that the 3D CNN model with multiple outputs exhibits excellent prediction ability for all four parameters compared to the traditional empirical formulas. In particular, for the prediction of tortuosity and permeability, the multiple-output 3D CNN model even showed slightly better prediction ability than its single-output variant. Additionally, it demonstrated good generalization performance on sandstone CT images not included in the training dataset. The study shows that the multiple-output 3D CNN model has the advantages of simplifying operation and saving computational resources, giving it good prospects for wider application.
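Two of the four target parameters can be computed directly from a segmented binary CT volume by voxel counting, which is how such training labels are often derived; a minimal sketch (the paper's exact estimators, especially for SSA and tortuosity, may be more sophisticated):

```python
import numpy as np

def porosity(volume):
    """Fraction of pore voxels in a binary volume (1 = pore, 0 = grain)."""
    return float(volume.mean())

def specific_surface_area(volume, voxel_size=1.0):
    """Approximate SSA as pore-grain interface area per unit bulk volume,
    counted from 0<->1 transitions between adjacent voxels along each axis
    (boundary faces at the volume edges are ignored)."""
    faces = 0
    for ax in range(3):
        faces += np.count_nonzero(np.diff(volume, axis=ax))
    area = faces * voxel_size ** 2
    bulk = volume.size * voxel_size ** 3
    return area / bulk
```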
Geophysicists interpreting seismic reflection data aim for the highest resolution possible, as this facilitates the interpretation and discrimination of subtle geological features. Various deterministic methods based on Wiener filtering exist to increase the temporal frequency bandwidth and compress the seismic wavelet in a process called spectral shaping. Auto-encoder neural networks with convolutional layers have been applied to this problem with encouraging results, but the problem of generalization to unseen data remains. Most published works have used supervised learning with training data constructed from field seismic data or from synthetic seismic data generated from measured well logs or seismic wavefield modelling. This leads to satisfactory results on datasets similar to the training data but requires re-training of the networks for unseen data with different characteristics. In this work we seek to improve generalization, not by experimenting with network architecture (we use a conventional U-net with some small modifications), but by adopting a different approach to creating the training data for the supervised learning process. Although the network is important, at this stage of development we see more improvement in prediction results from altering the design of the training data than from architectural changes. Our approach is to create synthetic training data consisting of simple geometric shapes convolved with a seismic wavelet. We created a very diverse training dataset of 9000 seismic images, each with between 5 and 300 seismic events resembling seismic reflections, with geophysically motivated perturbations in shape and character. The 2D U-net we trained can robustly and recursively boost the dominant frequency by 50%. We demonstrate this on unseen field data with different bandwidths and signal-to-noise ratios. Additionally, this 2D U-net can handle non-stationary wavelets and overlapping events of different bandwidths without creating excessive ringing, and it is robust in the presence of noise. The significance of this result is that it simplifies the effort of bandwidth extension and demonstrates the usefulness of auto-encoder neural networks for geophysical data processing.
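The training-data idea, reflection-like events convolved with a seismic wavelet, can be illustrated for a single 1D trace. The paper builds 2D images from geometric shapes; this numpy sketch of a sparse random reflectivity series convolved with a Ricker wavelet is a simplified 1D analogue:

```python
import numpy as np

def ricker(f, dt=0.002, length=0.128):
    """Ricker wavelet of peak frequency f (Hz), sampled at interval dt (s)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(n=256, n_events=12, f=25.0, seed=0):
    """Sparse random reflectivity spikes convolved with a Ricker wavelet,
    mimicking one column of a synthetic training image."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    r[rng.choice(n, size=n_events, replace=False)] = rng.uniform(-1, 1, n_events)
    return np.convolve(r, ricker(f), mode="same")
```

A paired training example for bandwidth extension would convolve the same reflectivity with a broader-band wavelet to serve as the target.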
Since its arrival in late November 2022, ChatGPT-3.5 has rapidly gained popularity and significantly impacted how research is planned, conducted, and published using a generative artificial intelligence approach. ChatGPT-4 was released four months later and became more popular in November 2023. However, there has been little study of scientists' perceptions of these chatbots, especially in soil science. This article presents new findings from brief research investigating soil scientists' responses and perceptions towards chatbots in Indonesia. This artificial intelligence application facilitates conversation-based interactions in text format. The study evaluated ten ChatGPT answers to fundamental questions in soil science, a field that has developed into a normal science with a mutually agreed-upon paradigm. The evaluation was carried out by seven soil scientists recognized for their expertise in Indonesia, using a scale of 1-100. In addition, a questionnaire was distributed to soil scientists at the National Research and Innovation Agency of the Republic of Indonesia (BRIN), universities, and Indonesian Soil Science Society (HITI) members to gauge their perception of ChatGPT's presence in the research field. The results indicate that the scores of ChatGPT answers range from 82.99 to 92.24. ChatGPT-4 performs better than both the paid and free versions of ChatGPT-3.5, and there is no significant difference between the English and Indonesian versions of ChatGPT-4.0. However, general soil scientists' level of trust is only 55%. Furthermore, 80% of soil scientists believe that chatbots can only be used as digital tools to assist soil science research and cannot be used without the involvement of soil scientists.
Fluctuations in oil prices adversely affect decision-making situations in which performance forecasting must be combined with realistic price forecasts. In periods of significant price drops, companies may consider extended durations of well shut-ins (i.e., temporarily stopping oil production) for economic reasons. For example, prices during the early days of the Covid-19 pandemic forced operators to consider shutting in all or some of their active wells. In the case of a partial shut-in, selecting candidate wells can become a challenging decision problem given the uncertainties involved. In this study, a mature oil field with a long (50+ years) production history and 170+ wells is considered. Reservoirs with similar conditions face many challenges related to economic sustainability, such as frequent maintenance requirements and low production rates. We aimed to solve this decision-making problem through unsupervised machine learning. Average reservoir characteristics at well locations, well production performance statistics, and well locations are used as potential features that could characterize similarities and differences among wells. While reservoir characteristics are measured at well locations to describe the subsurface reservoir, well performance consists of volumetric rates and pressures, which are frequently measured during oil production. After a multivariate data analysis that explored correlations among parameters, clustering algorithms were used to identify groups of wells that are similar with respect to the aforementioned features. Using the field's reservoir simulation model, scenarios of shutting in different groups of wells were simulated. Forecasted reservoir performance for three years was used for an economic evaluation that assumed an oil price drop to $30/bbl for 6, 12, or 18 months. The results of the economic analysis were examined to identify which group(s) of wells should have been shut in, also considering the sensitivity to different price levels. It was observed that, in the 3-cluster case, wells can be characterized as low-, medium-, and high-performance wells. Analyzing the forecasting scenarios showed that shutting in all wells, or the high- and medium-performance wells together, results in better economic outcomes. The results were most sensitive to the number of active wells and the oil price during the high-price period. This study demonstrated the effectiveness of unsupervised machine learning in well classification for operational decision-making purposes. Operating companies may use this approach to select wells for extended shut-in during low oil-price periods. This approach would lead to cost savings, especially in mature fields with low profit margins.
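The clustering step described in this abstract can be sketched with a minimal k-means implementation. The feature columns (porosity, a permeability proxy, oil rate, pressure) and the three performance tiers below are illustrative assumptions, not data or parameters from the study:

```python
import numpy as np

def init_centers(X, k, rng):
    """Farthest-point initialization to spread the starting centers."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    return np.array(centers)

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: returns a cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centers = init_centers(X, k, rng)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

rng = np.random.default_rng(0)
# Hypothetical well features: porosity, permeability proxy, oil rate, pressure
low  = rng.normal([0.10,  10.0,  20.0, 1500.0], [0.01,  2.0,  5.0, 100.0], (40, 4))
mid  = rng.normal([0.15,  50.0,  80.0, 2000.0], [0.01,  5.0, 10.0, 100.0], (40, 4))
high = rng.normal([0.22, 200.0, 300.0, 2500.0], [0.02, 20.0, 30.0, 100.0], (40, 4))
X = np.vstack([low, mid, high])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize so no feature dominates

labels = kmeans(Xs, 3)   # 3-cluster case: low / medium / high performance
```

Standardizing first matters here: pressure is three orders of magnitude larger than porosity and would otherwise dominate the distance computation.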
Seismic inversion can be divided into time-domain inversion and frequency-domain inversion based on the transform domain used. Time-domain inversion has stronger stability and noise resistance than frequency-domain inversion, while frequency-domain inversion has a stronger ability to identify small-scale bodies and higher inversion resolution. Therefore, research on joint inversion methods in the time-frequency domain is of great significance for improving inversion resolution, stability, and noise resistance. The introduction of prior information constraints can effectively reduce ambiguity in the inversion process. However, existing model-driven time-frequency joint inversion assumes a specific prior distribution of the reservoir. These methods do not consider the original features of the data and struggle to describe the relationship between time-domain and frequency-domain features. Therefore, this paper proposes a high-resolution seismic inversion method based on data-driven joint inversion in the time-frequency domain. The method builds on impedance and reflectivity samples from logging, using joint dictionary learning to obtain adaptive feature information of the reservoir and using sparse coefficients to capture the intrinsic relationship between impedance and reflectivity. The inversion result is optimized through the regularization term of the joint dictionary sparse representation. We thus achieve an inversion method that combines constraints on time-domain and frequency-domain features. Tests on model data and field data show that the method yields higher resolution in the inversion results and good noise resistance.
…relationships between logging data and reservoir parameters. We compare our method's performance on two datasets and evaluate the influences of multi-task learning, model structure, transfer learning, and petrophysics-informed machine learning (PIML). Our experiments demonstrate that PIML significantly enhances the performance of formation evaluation and that the residual neural network structure is optimal for incorporating petrophysical constraints. Moreover, PIML is less sensitive to noise. These findings indicate that it is crucial to integrate data-driven machine learning with petrophysical mechanisms for the application of artificial intelligence in oil and gas exploration.
Water prediction plays a crucial role in modern-day water resource management, encompassing both hydrological patterns and demand forecasts. To gain insights into its current focus, status, and emerging themes, this study analyzed 876 articles published between 2015 and 2022, retrieved from the Web of Science database. Leveraging CiteSpace visualization software, bibliometric techniques, and literature review methodologies, the investigation identified essential literature related to water prediction using machine learning and deep learning approaches. Through a comprehensive analysis, the study identified the significant countries, institutions, authors, journals, and keywords in this field. By exploring these data, the research mapped out prevailing trends and cutting-edge areas, providing valuable insights for researchers and practitioners involved in water prediction through machine learning and deep learning. The study aims to guide future inquiries by highlighting key research domains and emerging areas of interest.
Magnitude estimation is a critical task in seismology, and conventional methods usually require dense seismic station arrays to provide data with sufficient spatiotemporal distribution. In this context, we propose the Earthquake Graph Network (EQGraphNet) to enhance the performance of single-station magnitude estimation. The backbone of the proposed model consists of eleven convolutional neural network layers and ten RCGL modules, where an RCGL combines a Residual Connection and a Graph convolutional Layer capable of mitigating the over-smoothing problem while extracting temporal features of seismic signals. Our work uses the STanford EArthquake Dataset for model training and performance testing. Compared with three existing deep learning models, EQGraphNet demonstrates improved accuracy for both the local magnitude and duration magnitude scales. To evaluate robustness, we add natural background noise to the model input and find that EQGraphNet achieves the best results, particularly for signals with lower signal-to-noise ratios. Additionally, by replacing various network components and comparing the resulting estimation performance, we illustrate the contribution of each part of EQGraphNet, validating the rationality of our approach. We also demonstrate the generalization capability of our model across different earthquake-occurrence environments, achieving mean errors of ±0.1 magnitude units. Furthermore, by demonstrating the effectiveness of deeper architectures, this work encourages further exploration of deeper GNN models for both multi-station and single-station magnitude estimation.
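The RCGL idea, a graph convolution wrapped with an identity shortcut so repeated layers do not average node features away, can be illustrated with a toy numpy sketch. The chain graph, feature sizes, and tanh nonlinearity are illustrative assumptions, not the paper's exact layer:

```python
import numpy as np

def normalized_adj(A):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def rcgl(H, A_norm, W):
    """Residual graph-conv layer: neighbourhood smoothing plus an
    identity shortcut, which mitigates over-smoothing in deep stacks."""
    return H + np.tanh(A_norm @ H @ W)

rng = np.random.default_rng(1)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # 3-node chain graph
An = normalized_adj(A)
H = rng.normal(size=(3, 4))        # node features (e.g. per-channel statistics)
W = 0.1 * rng.normal(size=(4, 4))  # learnable weights (random stand-in)
out = rcgl(H, An, W)
```

The shortcut means that with zero weights the layer is exactly the identity, which is why stacking ten such modules does not collapse the node features.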
Accurately and efficiently predicting the permeability of porous media is essential for addressing a wide range of hydrogeological issues. However, the complexity of porous media often limits the effectiveness of individual prediction methods. This study introduces a novel Particle Swarm Optimization-based Permeability Integrated Prediction model (PSO-PIP), which incorporates a particle swarm optimization algorithm enhanced with dynamic clustering and adaptive parameter tuning (KGPSO). The model integrates multi-source data from the Lattice Boltzmann Method (LBM), Pore Network Modeling (PNM), and Finite Difference Method (FDM). By assigning optimal weight coefficients to the outputs of these methods, the model minimizes deviations from actual values and enhances permeability prediction performance. Initially, the computational performances of the LBM, PNM, and FDM are comparatively analyzed on datasets consisting of sphere packings and real rock samples; these methods are observed to exhibit computational biases in certain permeability ranges. The PSO-PIP model is proposed to combine the strengths of each computational approach and mitigate their limitations. The model consistently produces predictions that are highly congruent with actual permeability values across all prediction intervals, significantly enhancing prediction accuracy. The outcomes of this study provide a new tool and perspective for the comprehensive, rapid, and accurate prediction of permeability in porous media.
Abstract: Earthquakes are classified as one of the most devastating natural disasters and can have catastrophic effects on the environment, lives, and properties. There has been increasing interest in the prediction of earthquakes and in gaining a comprehensive understanding of the mechanisms that underlie their generation, yet earthquakes remain the least predictable natural disaster. Satellite data, the global positioning system, interferometric synthetic aperture radar (InSAR), and seismometers such as microelectromechanical system seismometers, ocean bottom seismometers, and distributed acoustic sensing systems have all been used to predict earthquakes with a high degree of success. Despite advances in seismic wave recording, storage, and analysis, predicting earthquake time, location, and magnitude remains difficult. On the other hand, new developments in artificial intelligence (AI) and the Internet of Things (IoT) have shown promising potential to deliver more insights and predictions. Thus, this article reviews the use of AI-driven models and IoT-based technologies for earthquake prediction, the limitations of current approaches, and open research issues. The review discusses earthquake prediction setbacks due to insufficient data, inconsistencies, the diversity of earthquake precursor signals, and the Earth's geophysical composition. Finally, this study examines potential approaches or solutions that scientists can employ to address the challenges they face in earthquake prediction. The analysis is based on the successful application of AI and IoT in other fields.
Abstract: Machine learning methods dealing with the spatial auto-correlation of the response variable have garnered significant attention in the context of spatial prediction. Nonetheless, under these methods, the relationship between the response variable and the explanatory variables is assumed to be homogeneous throughout the entire study area. This assumption, known as spatial stationarity, is very questionable in real-world situations due to the influence of contextual factors. Therefore, allowing the relationship between the target variable and the predictor variables to vary spatially within the study region is more reasonable. However, existing machine learning techniques that account for the spatially varying relationship between the dependent variable and the predictor variables do not capture the spatial auto-correlation of the dependent variable itself. Moreover, under these techniques, local machine learning models are effectively built using only a few observations, which can lead to well-known issues such as over-fitting and the curse of dimensionality. This paper introduces a novel geostatistical machine learning approach in which both the spatial auto-correlation of the response variable and the spatial non-stationarity of the regression relationship between the response and predictor variables are explicitly considered. The basic idea consists of relying on the local stationarity assumption to build a collection of local machine learning models while leveraging the local spatial auto-correlation of the response variable to locally augment the training dataset. The proposed method's effectiveness is showcased via experiments conducted on synthetic spatial data with known characteristics as well as real-world spatial data. In the synthetic (resp. real) case study, the proposed method's predictive accuracy, as indicated by the Root Mean Square Error (RMSE) on the test set, is 17% (resp. 7%) better than that of popular machine learning methods dealing with the response variable's spatial auto-correlation. Additionally, this method is not only valuable for spatial prediction but also offers a deeper understanding of how the relationship between the target and predictor variables varies across space, and it can even be used to investigate the local significance of predictor variables.
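The local-model half of this idea can be sketched in a few lines: fit a separate regression on the nearest neighbours of each query location, so the coefficient is allowed to vary across space. The data are synthetic, the local learner is plain least squares rather than a full machine learning model, and the paper's kriging-style augmentation of the local training sets is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
coords = rng.uniform(0.0, 10.0, (300, 2))   # synthetic sample locations
x = rng.normal(size=300)                    # one predictor
beta = 1.0 + 0.5 * coords[:, 0]             # true coefficient drifts west -> east
y = beta * x + rng.normal(0.0, 0.1, 300)    # spatially non-stationary relationship

def local_fit(q_coord, coords, x, y, k=40):
    """Ordinary least squares on the k nearest neighbours of a query location."""
    d = np.linalg.norm(coords - q_coord, axis=1)
    idx = np.argsort(d)[:k]
    A = np.column_stack([np.ones(k), x[idx]])
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return coef                              # [local intercept, local slope]

slope_west = local_fit(np.array([1.0, 5.0]), coords, x, y)[1]
slope_east = local_fit(np.array([9.0, 5.0]), coords, x, y)[1]
```

The two recovered slopes track the west-to-east drift of the true coefficient, which is exactly the kind of spatially varying relationship the abstract argues stationary models miss.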
Abstract: Ensuring the reliability of pipe pile designs under earthquake loading necessitates an accurate determination of lateral displacement and bending moment, typically achieved through complex numerical modeling to address the intricacies of soil-pile interaction. Despite recent advancements in machine learning techniques, there is a persistent need for data-driven models that can predict these parameters without numerical simulations, given the difficulties of conducting correct numerical simulations and the need for constitutive modelling parameters that are not readily available. This research presents novel lateral displacement and bending moment predictive models for closed- and open-ended pipe piles, employing a Genetic Programming (GP) approach. Utilizing a soil dataset extracted from the existing literature, comprising 392 data points for both pile types embedded in cohesionless soil and subjected to earthquake loading, the study intentionally limited the input parameters to three features to enhance model simplicity: Standard Penetration Test (SPT) corrected blow count (N60), Peak Ground Acceleration (PGA), and pile slenderness ratio (L/D). Model performance was assessed via the coefficient of determination (R^(2)), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE), with R^(2) values ranging from 0.95 to 0.99 for the training set and from 0.92 to 0.98 for the testing set, indicating high prediction accuracy. Finally, the study concludes with a sensitivity analysis evaluating the influence of each input parameter across the different pile types.
Abstract: Machine learning (ML) algorithms are frequently used in landslide susceptibility modeling. Different data handling strategies may generate variations in landslide susceptibility modeling, even when using the same ML algorithm. This research aims to compare combinations of inventory data handling, cross validation (CV), and hyperparameter tuning strategies for generating landslide susceptibility maps. The results are expected to provide a general strategy for landslide susceptibility modeling using ML techniques. The authors employed eight landslide inventory data handling scenarios to convert a landslide polygon into a landslide point, i.e., the landslide point is located on the toe (minimum height), on the scarp (maximum height), at the center of the landslide, randomly inside the polygon (1 point), randomly inside the polygon (3 points), randomly inside the polygon (5 points), randomly inside the polygon (10 points), or by 15 m grid sampling. Random forest models using CV with non-spatial hyperparameter tuning, spatial CV with spatial hyperparameter tuning, and spatial CV with forward feature selection and no hyperparameter tuning were applied for each data handling strategy. The combinations generated 24 random forest ML workflows, which were applied using a complete inventory of 743 landslides triggered by Tropical Cyclone Cempaka (2017) in Pacitan Regency, Indonesia, and 11 landslide controlling factors. The results show that grid sampling with spatial CV and spatial hyperparameter tuning is favorable because this strategy can minimize overfitting, generate a relatively high-performance predictive model, and reduce the appearance of susceptibility artifacts in the landslide area. Careful data inventory handling, CV, and hyperparameter tuning strategies should be considered in landslide susceptibility modeling to increase the applicability of landslide susceptibility maps in practical applications.
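Spatial cross-validation, the key ingredient of the favored workflow above, can be sketched by using spatial blocks rather than random rows as folds, so test samples are spatially separated from training samples. The quadrant split and synthetic coordinates below are an illustrative simplification of real block-CV schemes:

```python
import numpy as np

rng = np.random.default_rng(4)
coords = rng.uniform(0.0, 100.0, (200, 2))   # synthetic landslide-point locations

def spatial_block_folds(coords):
    """Assign each sample to one of four spatial quadrants; using the
    quadrants as CV folds keeps each test set spatially apart from training."""
    mx, my = np.median(coords[:, 0]), np.median(coords[:, 1])
    return (coords[:, 0] > mx).astype(int) * 2 + (coords[:, 1] > my).astype(int)

folds = spatial_block_folds(coords)
# train/test index pairs, one per quadrant, ready to feed any model
splits = [(np.where(folds != f)[0], np.where(folds == f)[0]) for f in range(4)]
```

Random CV lets spatially adjacent (and therefore correlated) points land in both train and test sets, inflating performance estimates; block folds like these are the standard remedy.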
Abstract: Alteration minerals and silicification are typically associated with a variety of ore mineralizations and can be detected using multispectral remote sensing sensors as indicators for mineral exploration. In this investigation, layers derived from the Visible Near-Infrared (VNIR), Short-Wave Infrared (SWIR), and Thermal Infrared (TIR) bands of the ASTER satellite sensor were fused to detect alteration minerals and silicification in the eastern Kerdous inlier for cupriferous mineralization exploration. Several image processing techniques were executed, namely Band Ratio (BR), Selective Principal Component Analysis (SPCA), and Constrained Energy Minimization (CEM). Initially, the BR and SPCA processing results revealed several alteration zones, including argillic, phyllic, dolomitization, and silicification zones as well as iron oxides and hydroxides. These zones were then mapped at the sub-pixel level using the CEM technique. Pyrophyllite, kaolinite, dolomite, illite, muscovite, montmorillonite, topaz, and hematite were revealed, displaying a significant distribution in relation to the lithological units of the eastern Amlen region and mineral potential zones previously detected using HyMap imaging spectroscopy. A close spatial association between iron oxide and hydroxide minerals and argillic and phyllic alteration was detected, and strong silicification was detected around the doleritic dykes unit in the Jbel Lkest area. A weighted overlay approach was used to integrate the hydrothermal alteration minerals and silicification, which allowed the elaboration of a new mineral alteration map of the study area with five alteration intensities. ASTER data and the various processing techniques employed allowed practical and cost-effective mapping of alteration features, which corroborates well with field surveys and X-ray diffraction analysis. Therefore, ASTER data and the employed processing techniques offer a practical approach for mineral prospection in comparable settings.
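A band-ratio layer of the kind referred to as BR above is simply an element-wise division of two calibrated bands, chosen so that the target mineral's absorption feature depresses the denominator. The band values and the 5x5 scene below are synthetic stand-ins, not a prescription for the Kerdous data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Stand-ins for two calibrated ASTER reflectance bands over a small scene
band_a = rng.uniform(0.2, 0.4, (5, 5))
band_b = rng.uniform(0.1, 0.3, (5, 5))

def band_ratio(a, b, eps=1e-6):
    """Element-wise band ratio; high values flag pixels whose spectra
    drop in band b (e.g. an absorption feature of an alteration mineral)."""
    return a / (b + eps)

ratio = band_ratio(band_a, band_b)
anomalous = ratio > np.percentile(ratio, 90)   # simple thresholded anomaly mask
```

In practice the threshold is chosen from the scene histogram or validated against field samples rather than a fixed percentile.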
Funding: This research work was carried out during the SERB SIRE fellowship (File No. SIR/2022/000972) tenure at Keio University, Japan.
Abstract: This research aims to evaluate hydro-meteorological data from the Yamuna River Basin, Uttarakhand, India, utilizing extreme value distribution frequency analysis and the Markov Chain approach. This method assesses persistence and allows for combinatorial probability estimations such as initial and transitional probabilities. The hydrologic data were generated in situ and received from Uttarakhand Jal Vidut Nigam Limited (UJVNL), and meteorological data were acquired from NASA's MERRA-2 product archives. A total of sixteen years (2005-2020) of data was used to forecast daily precipitation from 2020 to 2022. MERRA-2 products were utilized as observed and forecast values for daily precipitation throughout the monsoon season, which runs from July to September. Markov Chain and Long Short-Term Memory (LSTM) results were compared with observed and anticipated values of daily rainfall during the monsoon seasons of 2020, 2021, and 2022. According to the test findings, the artificial intelligence technique cannot anticipate future regional meteorological formations; the correlation coefficient R^(2) is around 0.12. According to the randomly verified precipitation data findings, the Markov Chain model has a success rate of 79.17 percent. The results suggest that extended return periods should be a warning sign for drought and flood risk in the Himalayan region. This study gives a better knowledge of the water budget, climate change variability, and the impact of global warming, ultimately leading to improved water resource management and better emergency planning through the establishment of Early Warning Systems (EWS) for extreme occurrences such as cloudbursts, flash floods, and landslide hazards in the complex Himalayan region.
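The transitional probabilities at the heart of the Markov Chain approach can be estimated directly from a wet/dry day sequence by counting transitions. The short series below is made up for illustration and is far shorter than the sixteen years of data used in the study:

```python
import numpy as np

# Illustrative daily rain series: 1 = wet day, 0 = dry day
rain = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0,
                 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1])

def transition_matrix(seq):
    """First-order Markov chain: P[i, j] = P(next state j | current state i),
    estimated by counting observed transitions and normalizing each row."""
    P = np.zeros((2, 2))
    for a, b in zip(seq[:-1], seq[1:]):
        P[a, b] += 1
    return P / P.sum(axis=1, keepdims=True)

P = transition_matrix(rain)
# P[1, 1] is the wet-after-wet persistence probability the method exploits
```

With real data these counts would be computed per season, since monsoon persistence differs sharply from dry-season persistence.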
Abstract: In this study, we present an artificial neural network (ANN)-based approach for travel-time tomography of a volcanic edifice under sparse ray coverage. We employ ray tracing to simulate the propagation of seismic waves through the heterogeneous medium of a volcanic edifice, and an inverse modeling algorithm that uses an ANN to estimate the velocity structure from the "observed" travel-time data. The performance of the approach is evaluated through a 2-dimensional numerical study that simulates i) an active-source seismic experiment with a few (explosive) sources placed on one side of the edifice and a dense line of receivers placed on the other side, and ii) earthquakes located inside the edifice with receivers placed on both sides of the edifice. The results are compared with those obtained from conventional damped linear inversion. The average Root Mean Square Error (RMSE) between the input and output models is approximately 0.03 km/s for the ANN inversions, whereas it is about 0.4 km/s for the linear inversions, demonstrating that the ANN-based approach outperforms the classical approach, particularly in situations with sparse ray coverage. Our study emphasizes the advantages of employing a relatively simple ANN architecture in conjunction with second-order optimizers to minimize the loss function. Compared to using first-order optimizers, our ANN architecture shows a ~25% reduction in RMSE. The ANN-based approach is computationally efficient. We observed that even though the ANN is trained on completely random velocity models, it is still capable of resolving previously unseen anomalous structures within the edifice with only about 5% discrepancy in the anomalies, making it a potentially valuable tool for the detection of low-velocity anomalies related to magmatic intrusions or mush.
Abstract: Emeralds, the green variety of beryl, occur as gem-quality specimens in over fifty deposits globally. While digital traceability methods for emerald have limitations, sample-based approaches offer robust alternatives, particularly for determining the geographic origin of emerald. Three factors make emerald suitable for provenance studies and hence for developing models for origin determination. First, the diverse elemental chemistry of emerald at minor (<1 wt%) and trace levels (<1 to 100's ppmw) exhibits unique inter-element fractionations between global deposits. Second, minimally destructive techniques, including laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), enable measurement of these diagnostic elemental signatures. Third, when applied to extensive datasets, machine learning (ML) techniques enable the creation of predictive models and statistical discrimination with adequate characterization of the deposits. This study employs a carefully selected dataset comprising more than 1000 LA-ICP-MS analyses of gem-quality emeralds, enriched with new analyses; this dataset is the largest available for global emerald deposits. We conducted unsupervised exploratory analysis using Principal Component Analysis (PCA). For machine learning-based classification, we employed Support Vector Machine Classification (SVM-C), achieving an initial accuracy rate of 79%, which was enhanced to 96.8% through the use of hierarchical SVM-C with PCA filters as our modeling approach. The ML models were trained using the concentrations of eight statistically significant elements (Li, V, Cr, Fe, Sc, Ga, Rb, Cs). By leveraging high-quality LA-ICP-MS data and ML techniques, accurate identification of the geographical origin of emerald becomes possible. These models are important for accurate provenance of emerald and, from a geochemical perspective, for understanding the formation environments of beryl-bearing pegmatites and shales.
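The unsupervised exploratory step can be sketched with PCA computed via SVD of the centered data matrix. The synthetic matrix below merely stands in for LA-ICP-MS concentrations of the eight elements, and the Li-enrichment of one "deposit" is an artificial construct to show how a compositional difference surfaces on the first component:

```python
import numpy as np

rng = np.random.default_rng(6)
# Stand-in for a (samples x 8 elements) concentration matrix
# ordered as (Li, V, Cr, Fe, Sc, Ga, Rb, Cs); values are synthetic.
X = rng.normal(size=(100, 8))
X[:50, 0] += 3.0   # pretend the first 50 samples come from a Li-enriched deposit

def pca_scores(X, n_components=2):
    """PCA via SVD of the centered data: returns sample scores on the
    leading principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

scores = pca_scores(X)
```

In the real workflow, concentrations are usually log-transformed before PCA because trace-element data span orders of magnitude; that step is skipped here since the data are already Gaussian.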
Abstract: In recent years, there has been growing interest in using artificial intelligence (AI) for rainfall-runoff modelling, as it has shown promising adaptability in this context. The current study used six distinct AI models to simulate monthly rainfall-runoff in the Bardha watershed, India. These models included the artificial neural network (ANN), k-nearest neighbour regression (KNN), extreme gradient boosting (XGBoost) regression, random forest regression (RF), convolutional neural network (CNN), and CNN-RNN (convolutional recurrent neural network). Within the 2003-2009 span, the years 2003-2007 are classified as the calibration or training period, while 2008-2009 are classified as the validation or testing period. The available rainfall, maximum and minimum temperature, and discharge data were collected and utilized in the models. To compare the performance of the models, five criteria were employed: R^(2), NSE, MAE, RMSE, and PBIAS. The CNN-RNN model simulates the rainfall-runoff of the Bardha watershed best in both the training and testing periods (training: R^(2) is 0.99, NSE is 0.99, MAE is 1.76, RMSE is 3.11, and PBIAS is 1.45; testing: R^(2) is 0.97, NSE is 0.97, MAE is 2.05, RMSE is 3.60, and PBIAS is 3.94). These results demonstrate the superior performance of the CNN-RNN model in simulating monthly rainfall-runoff compared to the other models used in the study. The findings suggest that the CNN-RNN model could be a valuable tool for applications related to sustainable water resource management, flood control, and environmental planning.
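Two of the less common criteria quoted above, NSE and PBIAS, are straightforward to compute from observed and simulated series. Note that sign conventions for PBIAS differ between authors, so the convention below (positive means underestimation) is an assumption, and the four-value series is purely illustrative:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the mean of the observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: positive values indicate net underestimation
    (one common convention; some authors flip the sign)."""
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = np.array([10.0, 20.0, 30.0, 40.0])   # illustrative monthly runoff
sim = np.array([11.0, 19.0, 29.0, 42.0])   # illustrative model output
# nse(obs, sim) -> 0.986, pbias(obs, sim) -> -1.0
```

Reporting NSE alongside R^(2) matters because a model can correlate well with observations yet carry a systematic bias that only NSE and PBIAS expose.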
Abstract: The connectivity of sandbodies is a key constraint on exploration effectiveness in the Bohai A Oilfield. Conventional connectivity studies often use methods such as seismic attribute fusion, but the development of contiguous composite sandbodies in this area makes it challenging to characterize connectivity changes with conventional seismic attributes. Aiming at this problem, this study proposes a big data analysis method based on the Deep Forest algorithm to predict sandbody connectivity. First, by compiling the abundant exploration and development sandbody data in the study area, typical sandbodies with reliable connectivity were selected. Then, sensitive seismic attributes were extracted to obtain training samples. Finally, based on the Deep Forest algorithm, a mapping model between attribute combinations and sandbody connectivity was established through machine learning. This method achieves the first quantitative determination of connectivity for continuous composite sandbodies in the Bohai Oilfield. Compared with conventional connectivity discrimination methods such as high-resolution processing and seismic attribute analysis, this method can incorporate the sandbody characteristics of the study area into the machine learning process and jointly judge connectivity by combining multiple seismic attributes. The study results show that this method has high accuracy and timeliness in predicting connectivity for continuous composite sandbodies. Applied to the Bohai A Oilfield, it successfully identified multiple sandbody connectivity relationships and provided strong support for subsequent exploration potential assessment and well placement optimization. This method also provides a new approach for studying sandbody connectivity under similar complex geological conditions.
Abstract: Pore size analysis plays a pivotal role in unraveling reservoir behavior and its intricate relationship with confined fluids. Traditional methods for predicting pore size distribution (PSD), relying on drilling cores or thin sections, face limitations associated with depth specificity. In this study, we introduce a framework that leverages nuclear magnetic resonance (NMR) log data, encompassing clay-bound water (CBW), bound volume irreducible (BVI), and free fluid volume (FFV), to determine three PSDs (micropores, mesopores, and macropores). Moreover, we establish a robust pore size classification (PSC) system utilizing ternary plots derived from the PSDs. Within the three studied wells, NMR log data is exclusive to one well (well-A), while conventional well logs are accessible for all three wells (well-A, well-B, and well-C). This distinction enables PSD predictions for the remaining two wells (B and C). To predict the NMR outputs (CBW, BVI, FFV) for these wells, a two-step deep learning (DL) algorithm is implemented. Initially, three feature selection algorithms (f-classif, f-regression, and mutual-info-regression), which rely on statistical computations, identify the conventional well logs most correlated with the NMR outputs in well-A; they systematically identify and optimize pertinent input features, thereby augmenting model interpretability and predictive efficacy. All three feature selection algorithms identified four logs as the optimal number of inputs to the DL algorithm, with a different combination of logs for each of the three desired outputs. Subsequently, the CUDA Deep Neural Network Long Short-Term Memory algorithm (CUDNNLSTM), belonging to the category of DL algorithms and harnessing the computational power of GPUs, is employed for the prediction of the CBW, BVI, and FFV logs, leveraging the optimal logs identified in the preceding step. Estimation of the NMR outputs was done first in well-A (80% of the data for training and 20% for testing). The correlation coefficients (CC) between the actual and estimated data for the three outputs CBW, BVI, and FFV are 95%, 94%, and 97%, respectively, and the root mean square errors (RMSE) obtained were 0.0081, 0.098, and 0.0089, respectively. To assess the effectiveness of the proposed algorithm, we compared it with two traditional methods for log estimation: multiple regression and multi-resolution graph-based clustering. The results demonstrate the superior accuracy of our algorithm in comparison to these conventional approaches. This DL-driven approach facilitates PSD prediction grounded in fluid saturation for wells B and C. Ternary plots are then employed for PSCs. Seven distinct PSCs within well-A, employing actual NMR logs (CBW, BVI, FFV), together with an equivalent count within wells B and C utilizing the three predicted logs, are consistently categorized, leading to the identification of seven distinct pore size classification facies (PSCF). This research thus introduces an advanced approach to pore size classification and prediction, fusing NMR logs with deep learning techniques and extending their application to nearby wells without NMR logs. The resulting PSCFs offer valuable insights for generating precise and detailed 3D reservoir models.
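The first stage, correlation-driven feature selection of the kind performed by f-regression, can be approximated by ranking candidate logs by their absolute Pearson correlation with the target. The six synthetic "logs" and the target construction below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
logs = rng.normal(size=(n, 6))   # six synthetic conventional well logs
# a CBW-like target that truly depends on logs 2 and 4 only
target = 2.0 * logs[:, 2] - 1.0 * logs[:, 4] + rng.normal(0.0, 0.5, n)

def rank_features(X, y, k=4):
    """Rank candidate logs by |Pearson r| with the target and keep the
    top k (a simple stand-in for f-regression / mutual-information selection)."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    order = np.argsort(-np.abs(r))
    return order[:k], r

top, r = rank_features(logs, target)   # columns 2 and 4 should rank first
```

Mutual-information selection, also used in the study, would additionally catch nonlinear dependencies that a Pearson ranking like this one misses.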
Funding: Supported by the National Natural Science Foundation of China (12105139 and 42277264), the National Key Research and Development Program of China (2021YFC2902104), and the Education Department of Hunan Province (21B0446).
Abstract: Porosity, tortuosity, specific surface area (SSA), and permeability are four key parameters of reactive transport modeling in sandstone, which are important for understanding solute transport and geochemical reaction processes in sandstone aquifers. These four parameters reflect the characteristics of the pore structure of sandstone from different perspectives, and traditional empirical formulas cannot predict them accurately due to their complexity and heterogeneity. In this paper, eleven types of sandstone CT images were first segmented into numerous subsample images; the porosity, tortuosity, SSA, and permeability of the subsamples were calculated; and a dataset was established. 3D convolutional neural network (CNN) models were subsequently established and trained to predict the key reactive transport parameters from the subsample CT images of the sandstones. The results demonstrated that the 3D CNN model with multiple outputs exhibited excellent prediction ability for the four parameters compared to the traditional empirical formulas. In particular, for the prediction of tortuosity and permeability, the multiple-output 3D CNN model even showed slightly better prediction ability than its single-output variant. Additionally, it demonstrated good generalization performance on sandstone CT images not included in the training dataset. The study showed that the 3D CNN model with multiple outputs has the advantages of simplifying operation and saving computational resources, making it promising for wider application.
Abstract: Geophysicists interpreting seismic reflection data aim for the highest resolution possible, as this facilitates the interpretation and discrimination of subtle geological features. Various deterministic methods based on Wiener filtering exist to increase the temporal frequency bandwidth and compress the seismic wavelet in a process called spectral shaping. Auto-encoder neural networks with convolutional layers have been applied to this problem with encouraging results, but the problem of generalization to unseen data remains. Most published works have used supervised learning with training data constructed from field seismic data, or from synthetic seismic data generated from measured well logs or from seismic wavefield modelling. This leads to satisfactory results on datasets similar to the training data but requires re-training of the networks for unseen data with different characteristics. In this work we seek to improve generalization, not by experimenting with network architecture (we use a conventional U-net with some small modifications), but by adopting a different approach to creating the training data for the supervised learning process. Although the network is important, at this stage of development we see more improvement in prediction results from altering the design of the training data than from architectural changes. Our approach is to create synthetic training data consisting of simple geometric shapes convolved with a seismic wavelet. We created a very diverse training dataset of 9000 seismic images, each with between 5 and 300 seismic events resembling seismic reflections, with geophysically motivated perturbations in shape and character. The 2D U-net we trained can robustly and recursively boost the dominant frequency by 50%. We demonstrate this on unseen field data with different bandwidths and signal-to-noise ratios. Additionally, this 2D U-net can handle non-stationary wavelets and overlapping events of different bandwidths without creating excessive ringing. It is also robust in the presence of noise. The significance of this result is that it simplifies the effort of bandwidth extension and demonstrates the usefulness of auto-encoder neural networks for geophysical data processing.
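The training-data recipe described above (simple geometric events convolved with a seismic wavelet) can be sketched roughly as follows. This is a minimal illustration of the idea, not the paper's generator: the Ricker wavelet, the restriction to linear dipping events, and all sizes, counts, and frequencies are placeholder assumptions.

```python
import numpy as np

def ricker(f, dt=0.004, length=0.128):
    """Ricker wavelet with peak frequency f (Hz), sampled at dt (s)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_image(n_traces=64, n_samples=128, n_events=12, f_peak=30.0, seed=0):
    """Reflectivity built from randomly dipping linear events, convolved trace-wise."""
    rng = np.random.default_rng(seed)
    refl = np.zeros((n_samples, n_traces))
    for _ in range(n_events):
        t0 = rng.integers(10, n_samples - 10)   # intercept time (samples)
        dip = rng.uniform(-0.5, 0.5)            # dip in samples per trace
        amp = rng.uniform(-1.0, 1.0)            # reflection coefficient
        for x in range(n_traces):
            t = int(round(t0 + dip * x))
            if 0 <= t < n_samples:
                refl[t, x] += amp
    w = ricker(f_peak)
    # Convolve each trace (column) with the wavelet to get the seismic image
    return np.apply_along_axis(lambda tr: np.convolve(tr, w, mode="same"), 0, refl)

img = synthetic_image()
```

A supervised pair for spectral shaping would then be (image at low `f_peak`, image at boosted `f_peak`) computed from the same reflectivity.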
Abstract: Since its arrival in late November 2022, ChatGPT-3.5 has rapidly gained popularity and significantly impacted how research is planned, conducted, and published using a generative artificial intelligence approach. ChatGPT-4 was released four months later and became more popular in November 2023. However, little is known about scientists' perception of these chatbots, especially in soil science. This article presents new findings from a brief study investigating soil scientists' responses and perceptions towards chatbots in Indonesia. This artificial intelligence application facilitates conversation-based interactions in text format. The study evaluated ten ChatGPT answers to fundamental questions in soil science, a field that has developed into a normal science with a mutually agreed-upon paradigm. The evaluation was carried out by seven soil scientists recognized for their expertise in Indonesia, using a scale of 1-100. In addition, a questionnaire was distributed to soil scientists at the National Research and Innovation Agency of the Republic of Indonesia (BRIN), universities, and Indonesian Soil Science Society (HITI) members to gauge their perception of ChatGPT's presence in the research field. The results indicate that the scores of ChatGPT answers range from 82.99 to 92.24. ChatGPT-4 performs better than both the paid and free versions of ChatGPT-3.5, and there is no significant difference between the English and Indonesian versions of ChatGPT-4.0. However, general soil scientists' level of trust is only 55%. Furthermore, 80% of soil scientists believe that chatbots can only be used as digital tools to assist soil science research and cannot be used without the involvement of soil scientists.
Funding: Support from research grants MGA-2021-42991 and MYL-2022-43726, funded by Istanbul Technical University-Scientific Research Projects, Turkey. This support is gratefully acknowledged.
Abstract: Fluctuations in oil prices adversely affect decision-making situations in which performance forecasting must be combined with realistic price forecasts. In periods of significant price drops, companies may consider extended well shut-ins (i.e. temporarily stopping oil production) for economic reasons. For example, prices during the early days of the Covid-19 pandemic forced operators to consider shutting in all or some of their active wells. In the case of a partial shut-in, selecting candidate wells can become a challenging decision problem given the uncertainties involved. In this study, a mature oil field with a long (50+ years) production history and 170+ wells is considered. Reservoirs in similar conditions face many challenges to economic sustainability, such as frequent maintenance requirements and low production rates. We aimed to solve this decision-making problem through unsupervised machine learning. Average reservoir characteristics at well locations, well production performance statistics, and well locations are used as potential features that could characterize similarities and differences among wells. While reservoir characteristics are measured at well locations to describe the subsurface reservoir, well performance consists of volumetric rates and pressures, which are frequently measured during oil production. After a multivariate data analysis that explored correlations among parameters, clustering algorithms were used to identify groups of wells that are similar with respect to the aforementioned features. Using the field's reservoir simulation model, scenarios of shutting in different groups of wells were simulated. Forecasted reservoir performance over three years was used for an economic evaluation that assumed an oil price drop to $30/bbl for 6, 12, or 18 months. The results of the economic analysis were examined to identify which group(s) of wells should have been shut in, also considering the sensitivity to different price levels. It was observed that, in the 3-cluster case, wells can be characterized as low-, medium-, and high-performance wells. Analyzing the forecasting scenarios showed that shutting in all wells, or the high- and medium-performance wells together, results in better economic outcomes. The results were most sensitive to the number of active wells and the oil price during the high-price period. This study demonstrated the effectiveness of unsupervised machine learning in classifying wells for operational decision-making. Operating companies may use this approach to improve the selection of wells for extended shut-in during low-oil-price periods, leading to cost savings especially in mature fields with low profit margins.
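The abstract does not name the clustering algorithm; k-means on standardized well features is one common choice and serves here as an illustrative stand-in. The feature set (porosity, cumulative rate, water cut) and all numbers are hypothetical.

```python
import numpy as np

def kmeans(X, k=3, n_iter=50, seed=0):
    """Plain Lloyd's algorithm on z-scored features (so units don't dominate)."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each well to its nearest center, then recompute centers
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical well features: [avg. porosity, cum. oil rate, water cut]
rng = np.random.default_rng(1)
wells = np.vstack([
    rng.normal([0.10, 50.0, 0.8], [0.02, 10.0, 0.05], size=(20, 3)),   # low
    rng.normal([0.18, 200.0, 0.5], [0.02, 10.0, 0.05], size=(20, 3)),  # medium
    rng.normal([0.25, 600.0, 0.2], [0.02, 10.0, 0.05], size=(20, 3)),  # high
])
labels = kmeans(wells, k=3)
```

Each resulting cluster would then be shut in within the reservoir simulator and the forecasts compared economically, as the study describes.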
Abstract: Seismic inversion can be divided into time-domain inversion and frequency-domain inversion according to the transform domain. Time-domain inversion has stronger stability and noise resistance than frequency-domain inversion, while frequency-domain inversion has a stronger ability to identify small-scale bodies and higher resolution. Research on joint time-frequency-domain inversion is therefore of great significance for improving inversion resolution, stability, and noise resistance. Introducing prior-information constraints can effectively reduce ambiguity in the inversion process. However, existing model-driven time-frequency joint inversion assumes a specific prior distribution for the reservoir; these methods do not consider the original features of the data and struggle to describe the relationship between time-domain and frequency-domain features. This paper therefore proposes a high-resolution seismic inversion method based on joint data-driven learning in the time-frequency domain. Based on impedance and reflectivity samples from well logging, the method uses joint dictionary learning to obtain adaptive feature information of the reservoir, and uses sparse coefficients to capture the intrinsic relationship between impedance and reflectivity. The inversion is optimized through a regularization term built on the joint dictionary sparse representation, yielding an inversion method that jointly constrains time-domain and frequency-domain features. Tests on model data and field data show that the method produces higher-resolution inversion results and good noise resistance.
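The core of the joint dictionary idea is that one sparse coefficient vector reconstructs a patch in both domains through a pair of coupled dictionaries. A minimal sketch under strong simplifying assumptions: the dictionaries here are random (in the paper they would be learned jointly from log data), and ISTA is used as a generic sparse-coding solver.

```python
import numpy as np

def ista(D, y, lam=0.05, n_iter=200):
    """Iterative soft-thresholding: argmin_a 0.5*||y - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a + D.T @ (y - D @ a) / L      # gradient step on the data term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # shrinkage
    return a

rng = np.random.default_rng(0)
n, k = 32, 64
D_refl = rng.standard_normal((n, k)) / np.sqrt(n)   # reflectivity dictionary
D_imp = rng.standard_normal((n, k)) / np.sqrt(n)    # coupled impedance dictionary

# A sparse code shared by both domains (the "joint" assumption)
a_true = np.zeros(k)
a_true[[3, 17, 40]] = [1.5, -2.0, 1.0]
refl_patch = D_refl @ a_true

a_hat = ista(D_refl, refl_patch)      # code the reflectivity patch
imp_patch = D_imp @ a_hat             # impedance reconstructed via the shared code
```

In the actual method this coupling enters the inversion as a regularization term rather than as a direct reconstruction.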
Funding: Supported by the Strategic Cooperation Technology Projects of CNPC and CUPB (ZLZX2020-03), the National Key Research and Development Program (2019YFA0708301 and 2023YFF0714102), and the Science and Technology Innovation Fund of CNPC (2021DQ02-0403).
Abstract: …relationships between logging data and reservoir parameters. We compare our method's performance on two datasets and evaluate the influences of multi-task learning, model structure, transfer learning, and petrophysics-informed machine learning (PIML). Our experiments demonstrate that PIML significantly enhances the performance of formation evaluation, and that a residual neural network structure is optimal for incorporating petrophysical constraints. Moreover, PIML is less sensitive to noise. These findings indicate that integrating data-driven machine learning with petrophysical mechanisms is crucial for applying artificial intelligence in oil and gas exploration.
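The abstract does not say which petrophysical constraint is imposed; a common PIML pattern is to add the residual of a known relation to the training loss. As an illustrative assumption only, the sketch below uses Archie's equation (Sw^n = a·Rw / (phi^m · Rt)) as the physics term.

```python
import numpy as np

def archie_residual(sw, phi, rt, rw=0.05, a=1.0, m=2.0, n=2.0):
    """Residual of Archie's equation: Sw^n - a*Rw / (phi^m * Rt)."""
    return sw ** n - a * rw / (phi ** m * rt)

def piml_loss(sw_pred, sw_true, phi, rt, lam=0.5):
    """Data misfit plus a physics penalty anchoring predictions to Archie's law."""
    data = np.mean((sw_pred - sw_true) ** 2)
    phys = np.mean(archie_residual(sw_pred, phi, rt) ** 2)
    return data + lam * phys

# Hypothetical log values: porosity (frac) and deep resistivity (ohm·m)
phi = np.array([0.20, 0.25, 0.30])
rt = np.array([10.0, 8.0, 5.0])
sw = (1.0 * 0.05 / (phi ** 2.0 * rt)) ** 0.5   # Archie-consistent saturation
loss_consistent = piml_loss(sw, sw, phi, rt)
loss_off = piml_loss(sw + 0.1, sw, phi, rt)    # physics-violating prediction
```

The physics term penalizes predictions that fit the labels but violate the relation, which is one plausible reason PIML is reported to be less noise-sensitive.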
Funding: Provided by the Ministry of Education of Humanities and Social Science project in China (Project No. 22YJC630083), the 2022 Shanghai Chenguang Scholars Program (Project No. 22CGA82), the Belt and Road Special Foundation of The National Key Laboratory of Water Disaster Prevention (2021491811), and the National Social Science Fund of China (Project No. 23CGL077).
Abstract: Water prediction plays a crucial role in modern water resource management, encompassing both logical hydro-patterns and demand forecasts. To gain insight into its current focus, status, and emerging themes, this study analyzed 876 articles published between 2015 and 2022, retrieved from the Web of Science database. Leveraging CiteSpace visualization software, bibliometric techniques, and literature review methodologies, the investigation identified essential literature related to water prediction using machine learning and deep learning approaches. Through a comprehensive analysis, the study identified significant countries, institutions, authors, journals, and keywords in this field. By exploring these data, the research mapped out prevailing trends and cutting-edge areas, providing valuable insights for researchers and practitioners involved in water prediction through machine learning and deep learning. The study aims to guide future inquiries by highlighting key research domains and emerging areas of interest.
Funding: Supported by the National Natural Science Foundation of China under Grant 41974137.
Abstract: Magnitude estimation is a critical task in seismology, and conventional methods usually require dense seismic station arrays to provide data with sufficient spatiotemporal distribution. In this context, we propose the Earthquake Graph Network (EQGraphNet) to enhance the performance of single-station magnitude estimation. The backbone of the proposed model consists of eleven convolutional neural network layers and ten RCGL modules, where an RCGL combines a Residual Connection and a Graph convolutional Layer, mitigating the over-smoothing problem while extracting temporal features of seismic signals. Our work uses the STanford EArthquake Dataset for model training and performance testing. Compared with three existing deep learning models, EQGraphNet demonstrates improved accuracy for both local magnitude and duration magnitude scales. To evaluate robustness, we add natural background noise to the model input and find that EQGraphNet achieves the best results, particularly for signals with lower signal-to-noise ratios. Additionally, by replacing various network components and comparing their estimation performance, we illustrate the contribution of each part of EQGraphNet, validating the rationality of our approach. We also demonstrate the generalization capability of our model across different earthquake-occurrence environments, achieving mean errors of ±0.1 magnitude units. Furthermore, by demonstrating the effectiveness of deeper architectures, this work encourages further exploration of deeper GNN models for both multi-station and single-station magnitude estimation.
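The abstract describes an RCGL only as a residual connection combined with a graph convolutional layer. A minimal numpy sketch of that combination is shown below; the symmetric adjacency normalization is the standard GCN form, and the tiny ring graph and feature sizes are placeholders, since the paper's graph construction over seismic signals is not specified here.

```python
import numpy as np

def normalized_adj(A):
    """Symmetric GCN normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A_self = A + np.eye(A.shape[0])
    d = A_self.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_self @ D_inv_sqrt

def rcgl(H, A_norm, W):
    """Residual-connected graph convolution: H + ReLU(A_hat @ H @ W).
    The skip term H passes features through unchanged, which is what
    counteracts over-smoothing as layers stack."""
    return H + np.maximum(A_norm @ H @ W, 0.0)

# Placeholder graph: a ring of 4 nodes, 8 features per node
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_norm = normalized_adj(A)
H = np.arange(32, dtype=float).reshape(4, 8)
out = rcgl(H, A_norm, np.zeros((8, 8)))   # with W = 0 the module is the identity
```

Stacking ten such modules between convolutional layers gives depth without losing node-level detail, consistent with the abstract's motivation.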
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2022YFC3005503), the National Natural Science Foundation of China (Grant Nos. 52322907, 52179141, U23B20149, U2340232), the Fundamental Research Funds for the Central Universities (Grant Nos. 2042024kf1031, 2042024kf0031), and the Key Program of Science and Technology of Yunnan Province (Grant Nos. 202202AF080004, 202203AA080009).
Abstract: Accurately and efficiently predicting the permeability of porous media is essential for addressing a wide range of hydrogeological issues. However, the complexity of porous media often limits the effectiveness of individual prediction methods. This study introduces a novel Particle Swarm Optimization-based Permeability Integrated Prediction model (PSO-PIP), which incorporates a particle swarm optimization algorithm enhanced with dynamic clustering and adaptive parameter tuning (KGPSO). The model integrates multi-source data from the Lattice Boltzmann Method (LBM), Pore Network Modeling (PNM), and the Finite Difference Method (FDM). By assigning optimal weight coefficients to the outputs of these methods, the model minimizes deviations from actual values and enhances permeability prediction performance. First, the computational performance of the LBM, PNM, and FDM is comparatively analyzed on datasets consisting of sphere packings and real rock samples; these methods are observed to exhibit computational biases in certain permeability ranges. The PSO-PIP model is proposed to combine the strengths of each computational approach and mitigate their limitations. The model consistently produces predictions that are highly congruent with actual permeability values across all prediction intervals, significantly enhancing prediction accuracy. The outcomes of this study provide a new tool and perspective for the comprehensive, rapid, and accurate prediction of permeability in porous media.
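The weighted-integration idea can be sketched with a plain global-best PSO searching for simplex weights that blend three method outputs. KGPSO's dynamic clustering and adaptive parameter tuning are not reproduced, and the LBM/PNM/FDM predictions are replaced with toy biased estimates, so this is only an illustration of the integration step.

```python
import numpy as np

def pso_weights(preds, y, n_particles=30, n_iter=100, seed=0):
    """Global-best PSO over blend weights minimizing mean squared error.
    Weights are projected onto the simplex (non-negative, sum to 1)."""
    rng = np.random.default_rng(seed)
    k = preds.shape[1]
    pos = rng.random((n_particles, k))
    vel = np.zeros((n_particles, k))

    def cost(w):
        w = np.abs(w)
        w = w / w.sum()
        return np.mean((preds @ w - y) ** 2)

    pbest = pos.copy()
    pbest_cost = np.array([cost(w) for w in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, k))
        # Inertia + cognitive pull (pbest) + social pull (gbest)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        c = np.array([cost(w) for w in pos])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    w = np.abs(gbest)
    return w / w.sum()

# Toy stand-ins for LBM / PNM / FDM permeability estimates of the same samples
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])                # "true" permeabilities
preds = np.column_stack([y * 1.1, y * 0.8, y + 0.3])   # each method is biased
w = pso_weights(preds, y)
```

Because each toy method is biased differently, the optimized blend can cancel the biases and beat every individual method, which is the mechanism the PSO-PIP model exploits.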