Hyperspectral imagery encompasses spectral and spatial dimensions, reflecting the material properties of objects. Its application proves crucial in search and rescue, concealed target identification, and crop growth analysis. Clustering is an important method of hyperspectral analysis. The vast data volume of hyperspectral imagery, coupled with redundant information, poses significant challenges in swiftly and accurately extracting features for subsequent analysis. Current hyperspectral feature clustering methods, which are mostly studied from the spatial or spectral perspective, lack strong interpretability, resulting in poor comprehensibility of the algorithms. This research therefore introduces a feature clustering algorithm for hyperspectral imagery from an interpretability perspective. It commences with a simulated perception process, proposing an interpretable band selection algorithm to reduce data dimensions. Following this, a multi-dimensional clustering algorithm, rooted in fuzzy and kernel clustering, is developed to highlight intra-class similarities and inter-class differences. An optimized P system is then introduced to enhance computational efficiency. This system coordinates all cells within a mapping space to compute optimal cluster centers, facilitating parallel computation. This approach diminishes sensitivity to initial cluster centers and augments global search capabilities, thus preventing entrapment in local minima and enhancing clustering performance. Experiments were conducted on 300 datasets comprising both real and simulated data. The results show that the average accuracy (ACC) of the proposed algorithm is 0.86 and the combination measure (CM) is 0.81.
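The fuzzy clustering component above builds on the classic fuzzy c-means update. The following is a minimal sketch of plain fuzzy c-means only; the paper's algorithm additionally uses kernels and a P system for parallel center computation, neither of which is reproduced here, and all names are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: soft memberships instead of hard labels."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m                            # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distances from every sample to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        inv = d2 ** (-1.0 / (m - 1.0))         # u_ik proportional to d_ik^(-2/(m-1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

The fuzzifier m controls how soft the partition is; m near 1 approaches hard k-means.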
Electrocatalytic nitrogen reduction to ammonia has garnered significant attention with the blooming of single-atom catalysts (SACs), showcasing their potential for sustainable and energy-efficient ammonia production. However, cost-effectively designing and screening efficient electrocatalysts remains a challenge. In this study, we have successfully established interpretable machine learning (ML) models to evaluate the catalytic activity of SACs by directly and accurately predicting reaction Gibbs free energy. Our models were trained using non-density functional theory (DFT) calculated features from a dataset comprising 90 graphene-supported SACs. Our results underscore the superior prediction accuracy of the gradient boosting regression (GBR) model for both ΔG(N_(2)→NNH) and ΔG(NH_(2)→NH_(3)), boasting coefficient of determination (R^(2)) scores of 0.972 and 0.984, along with root mean square errors (RMSE) of 0.051 and 0.085 eV, respectively. Moreover, feature importance analysis elucidates that the high accuracy of the GBR model stems from its adept capture of characteristics pertinent to the active center and coordination environment, unveiling the significance of elementary descriptors, with the covalent radius playing a dominant role. Additionally, Shapley additive explanations (SHAP) analysis provides global and local interpretation of the working mechanism of the GBR model. Our analysis identifies that a pyrrole-type coordination (flag=0), d-orbitals with a moderate occupation (N_(d)=5), and a moderate difference in covalent radius (r_(TM-ave) near 140 pm) are conducive to achieving high activity. Furthermore, we extend the prediction of activity to more catalysts without additional DFT calculations, validating the reliability of our feature engineering, model training, and design strategy. These findings not only highlight new opportunities for accelerating catalyst design using non-DFT calculated features, but also shed light on the working mechanism of the "black box" ML model. Moreover, the model provides valuable guidance for catalytic material design in multiple proton-electron coupling reactions, particularly in driving sustainable CO_(2), O_(2), and N_(2) conversion.
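Gradient boosting regression, the best-performing model above, can be sketched from scratch with decision stumps: each round fits a stump to the current residuals of the squared loss. The descriptors and target function below are synthetic stand-ins (a hypothetical "free energy" shaped by covalent radius and d-electron count), not the paper's dataset or trained model.

```python
import numpy as np

def best_stump(X, r):
    """Exhaustive search for the (feature, threshold) split minimizing SSE on residuals."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, lv, rv)
    return best

def gbr_fit(X, y, n_estimators=100, lr=0.1):
    """Gradient boosting for squared loss: each stump fits the current residuals."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_estimators):
        j, t, lv, rv = best_stump(X, y - pred)
        pred += lr * np.where(X[:, j] <= t, lv, rv)
        stumps.append((j, t, lv, rv))
    return y.mean(), lr, stumps

def gbr_predict(model, X):
    f0, lr, stumps = model
    pred = np.full(X.shape[0], f0)
    for j, t, lv, rv in stumps:
        pred += lr * np.where(X[:, j] <= t, lv, rv)
    return pred
```

Real implementations use depth-limited trees and shrinkage schedules; the structure of the additive model is the same.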
A new approach is proposed in this study for accountable capability improvement based on interpretable capability evaluation using the belief rule base (BRB). Firstly, a capability evaluation model is constructed and optimized. Then, the key sub-capabilities are identified by quantitatively calculating the contributions made by each sub-capability to the overall capability. Finally, the overall capability is improved by optimizing the identified key sub-capabilities. The theoretical contributions of the proposed approach are as follows. (i) An interpretable capability evaluation model is constructed by employing BRB, which can provide complete access to decision-makers. (ii) Key sub-capabilities are identified according to the quantitative contribution analysis results. (iii) Accountable capability improvement is carried out by only optimizing the identified key sub-capabilities. Case study results show that "Surveillance", "Positioning", and "Identification" are identified as key sub-capabilities with a summed contribution of 75.55% in an analytical and deducible fashion based on the interpretable capability evaluation model. As a result, the overall capability is improved by optimizing only the identified key sub-capabilities. The overall capability can be greatly improved from 59.20% to 81.80% with a minimum cost of 397. Furthermore, this paper also investigates how optimizing the BRB with more collected data would affect the evaluation results: optimizing only "Surveillance" and "Positioning" can also improve the overall capability to 81.34% with a cost of 370, which thus validates the efficiency of the proposed approach.
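The core of a BRB evaluation can be sketched as follows: an input activates neighboring rules via matching degrees, and the activated rules' belief distributions are aggregated into an expected utility. This is a deliberately simplified weighted-sum aggregation, not the full evidential reasoning algorithm the approach relies on, and the reference values and utilities are invented for illustration.

```python
import numpy as np

def matching_degrees(x, refs):
    """Triangular matching of a scalar input to ordered reference values."""
    m = np.zeros(len(refs))
    for i in range(len(refs) - 1):
        if refs[i] <= x <= refs[i + 1]:
            m[i] = (refs[i + 1] - x) / (refs[i + 1] - refs[i])
            m[i + 1] = 1.0 - m[i]
    return m

def brb_evaluate(x, refs, rule_beliefs, utilities):
    """rule_beliefs[k]: belief distribution over output grades asserted by rule k."""
    w = matching_degrees(x, refs)       # activation weights of the rules
    belief = w @ rule_beliefs           # simplified weighted belief aggregation
    return float(belief @ utilities)    # expected utility = capability score
```

In a full BRB, rule weights and attribute weights are also optimized, which is what the contribution analysis in the paper then inspects.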
To equip data-driven dynamic chemical process models with strong interpretability, we develop a light attention-convolution-gate recurrent unit (LACG) architecture with three sub-modules (a basic module, a brand-new light attention module, and a residue module) that are specially designed to learn the general dynamic behavior, transient disturbances, and other input factors of chemical processes, respectively. Combined with a hyperparameter optimization framework, Optuna, the effectiveness of the proposed LACG is tested by distributed control system data-driven modeling experiments on the discharge flowrate of an actual deethanization process. The LACG model provides significant advantages in prediction accuracy and model generalization compared with other models, including the feedforward neural network, convolution neural network, long short-term memory (LSTM), and attention-LSTM. Moreover, compared with the simulation results of a deethanization model built using Aspen Plus Dynamics V12.1, the LACG parameters are demonstrated to be interpretable, and more details on the variable interactions can be observed from the model parameters than with the traditional interpretable attention-LSTM model. This contribution enriches interpretable machine learning knowledge and provides a reliable, highly accurate method for actual chemical process modeling, paving a route to intelligent manufacturing.
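The paper's light attention module is not specified here, but attention modules of this kind build on scaled dot-product attention, which a few lines of NumPy can illustrate. This is a generic sketch of the mechanism, not the LACG implementation.

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V: each query output is a weighted mix of values."""
    scores = Q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))  # numerically stable softmax
    w /= w.sum(axis=1, keepdims=True)
    return w @ V, w
```

The attention weights w are what make such modules partially interpretable: they show which time steps or variables each output attends to.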
Traffic flow forecasting constitutes a crucial component of intelligent transportation systems (ITSs). Numerous studies have been conducted on traffic flow forecasting during the past decades. However, most existing studies have concentrated on developing advanced algorithms or models to attain state-of-the-art forecasting accuracy. For real-world ITS applications, the interpretability of the developed models is extremely important but has largely been ignored. This study presents an interpretable traffic flow forecasting framework based on popular tree-ensemble algorithms. The framework comprises multiple key components integrated into a highly flexible and customizable multi-stage pipeline, enabling the seamless incorporation of various algorithms and tools. To evaluate the effectiveness of the framework, the developed tree-ensemble models and three typical categories of baseline models, including statistical time series, shallow learning, and deep learning, were compared on three datasets collected from different types of roads (i.e., arterial, expressway, and freeway). Further, the study delves into an in-depth interpretability analysis of the most competitive tree-ensemble models using six categories of interpretable machine learning methods. Experimental results highlight the potential of the proposed framework. The tree-ensemble models developed within this framework achieve competitive accuracy while maintaining high inference efficiency similar to statistical time series and shallow learning models. Meanwhile, these tree-ensemble models offer interpretability from multiple perspectives via interpretable machine-learning techniques. The proposed framework is anticipated to provide reliable and trustworthy decision support across various ITS applications.
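A forecasting pipeline like the one above typically starts by turning the flow series into lagged supervised-learning features. The sketch below builds a lag matrix and fits a linear (shallow-learning) baseline by least squares; the tree-ensemble stage would consume the same features. The synthetic daily cycle and the 15-minute step are assumptions for illustration.

```python
import numpy as np

def make_lags(series, n_lags):
    """Rows: [y_{t-n_lags}, ..., y_{t-1}]; target: y_t."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

def fit_linear(X, y):
    A = np.column_stack([X, np.ones(len(X))])   # add intercept column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_linear(w, X):
    return np.column_stack([X, np.ones(len(X))]) @ w
```

Even this baseline beats naive persistence on a periodic flow pattern, which is the sanity check any richer model should pass first.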
An algorithm named InterOpt for optimizing operational parameters is proposed based on interpretable machine learning, and is demonstrated via optimization of shale gas development. InterOpt consists of three parts: a neural network is used to construct an emulator of the actual drilling and hydraulic fracturing process in the vector space (i.e., a virtual environment); the Shapley value method in interpretable machine learning is applied to analyze the impact of geological and operational parameters in each well (i.e., single-well feature impact analysis); and ensemble randomized maximum likelihood (EnRML) is conducted to optimize the operational parameters to comprehensively improve the efficiency of shale gas development and reduce the average cost. In the experiment, InterOpt provides different drilling and fracturing plans for each well according to its specific geological conditions, and finally achieves an average cost reduction of 9.7% for a case study with 104 wells.
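The Shapley value step in InterOpt attributes a prediction to individual input features. For a small number of features, the Shapley value can be computed exactly by enumerating coalitions, with absent features replaced by baseline values. This is a generic sketch on a toy model (exponential in feature count), not the paper's neural-network emulator.

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley(f, x, baseline):
    """Exact Shapley values: average marginal contribution of feature i
    over all subsets S of the remaining features (exponential cost)."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                z = baseline.copy()
                z[list(S)] = x[list(S)]     # coalition S present
                without = f(z)
                z[i] = x[i]                 # add feature i
                phi[i] += w * (f(z) - without)
    return phi
```

By the efficiency property, the attributions sum to f(x) - f(baseline), which makes the per-well impact analysis additive and auditable.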
The interpretability of deep learning models has emerged as a compelling area in artificial intelligence research. The safety criteria for medical imaging are highly stringent, and models are required to provide an explanation. However, existing convolutional neural network solutions for left ventricular segmentation are viewed only in terms of inputs and outputs. Thus, the interpretability of CNNs has come into the spotlight. Since medical imaging data are limited, many popular approaches fine-tune medical imaging models that were pre-trained on the massive public ImageNet dataset via transfer learning. Unfortunately, this generates many unreliable parameters and makes it difficult to generate plausible explanations from these models. In this study, we trained from scratch rather than relying on transfer learning, creating a novel interpretable approach for autonomously segmenting the left ventricle in cardiac MRI. Our enhanced GPU training system implemented interpretable global average pooling for graphics using deep learning. The deep learning tasks were simplified, including data management, neural network architecture, and training. Our system monitored and analyzed the gradient changes of different layers with dynamic visualizations in real time and selected the optimal deployment model. Our results demonstrated that the proposed method was feasible and efficient: the Dice coefficient reached 94.48%, and the accuracy reached 99.7%. No current ImageNet transfer learning architectures could perform comparably. This model is also lightweight and more convenient to deploy on mobile devices than transfer learning models.
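The two reported metrics, the Dice coefficient and pixel accuracy, can be computed directly from binary masks. A minimal sketch (the masks below are invented examples, not cardiac data):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """2|A∩B| / (|A| + |B|) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def pixel_accuracy(pred, target):
    """Fraction of pixels labeled identically in both masks."""
    return (pred == target).mean()
```

Dice rewards overlap relative to region size, so it is far more informative than accuracy when the foreground (the ventricle) occupies a small fraction of the image.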
As the banking industry gradually steps into the digital era of Bank 4.0, business competition is becoming increasingly fierce, and banks are also facing the problem of massive customer churn. To better maintain their customer resources, it is crucial for banks to accurately predict customers with a tendency to churn. Aiming at a typical binary classification problem like customer churn, this paper establishes an early-warning model for credit card customer churn. A dual search algorithm named GSAIBAS, incorporating the Golden Sine Algorithm (GSA) and an Improved Beetle Antennae Search (IBAS), is proposed to optimize the parameters of the CatBoost algorithm, which forms the GSAIBAS-CatBoost model. In particular, considering that the BAS algorithm has simple parameters and easily falls into local optima, a Sigmoid nonlinear convergence factor and the Levy flight equation are introduced to adjust the fixed step size of the beetle. This improved BAS algorithm with variable step size is then fused with the GSA to form the GSAIBAS algorithm, which achieves dual optimization. Moreover, an empirical analysis is made on the credit card customer dataset from the Analyttica official platform. The empirical results show that the Area Under Curve (AUC) and recall of the proposed model reach 96.15% and 95.56%, respectively, which are significantly better than 9 other common machine learning models. Compared with several existing optimization algorithms, the GSAIBAS algorithm achieves higher precision in parameter optimization for CatBoost. Validation on two other customer churn datasets from the Kaggle data platform further verifies that the proposed model is valid and feasible.
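AUC, the headline metric above, has a direct pairwise definition: the probability that a randomly chosen churner is scored higher than a randomly chosen non-churner, with ties counting half. A minimal sketch (brute-force over pairs; production code uses a rank-based formula):

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC = P(score of random positive > score of random negative); ties count 1/2."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

Because AUC depends only on the ranking of scores, it is insensitive to the classification threshold, which suits imbalanced churn data.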
The aperture of natural rock fractures significantly affects the deformation and strength properties of rock masses, as well as the hydrodynamic properties of fractured rock masses. Conventional measurement methods are inadequate for collecting data on high-steep rock slopes in complex mountainous regions. This study establishes a high-resolution three-dimensional model of a rock slope using unmanned aerial vehicle (UAV) multi-angle nap-of-the-object photogrammetry to obtain edge feature points of fractures. Fracture opening morphology is characterized using coordinate projection and transformation. The fracture central axis is determined using vertical measuring lines, allowing for aperture interpretation that adapts to fracture shape. The feasibility and reliability of the new method are verified at a railway construction site in southeast Tibet, China. The study shows that the fracture aperture has significant interval and size effects. The optimal sampling length for fractures is approximately 0.5-1 m, and the optimal aperture interpretation results are achieved when the measuring line spacing is 1% of the sampling length. Tensile fractures in the study area generally have larger apertures than shear fractures, and their tendency to increase with slope height is also greater than that of shear fractures. The aperture of tensile fractures is generally positively correlated with their trace length, while the correlation between the aperture of shear fractures and their trace length appears to be weak. Fractures of different orientations exhibit certain differences in their aperture distributions, but generally follow normal, log-normal, and gamma distributions. This study provides essential data support for rock and slope stability evaluation, which is of significant practical importance.
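Given the edge feature points, the measuring-line step above amounts to interpolating both fracture edges at evenly spaced stations and taking the opening between them; the distribution fit then follows from the aperture sample. A minimal sketch under simplifying assumptions (both edges are sampled as functions of a common axis with increasing x, apertures in consistent units; function names are illustrative):

```python
import numpy as np

def apertures_on_lines(x_up, y_up, x_lo, y_lo, spacing):
    """Interpolate both fracture edges at evenly spaced measuring lines
    and return the openings between them (x arrays must be increasing)."""
    x0 = max(x_up.min(), x_lo.min())
    x1 = min(x_up.max(), x_lo.max())
    lines = np.arange(x0, x1 + 1e-9, spacing)
    up = np.interp(lines, x_up, y_up)
    lo = np.interp(lines, x_lo, y_lo)
    return lines, np.abs(up - lo)

def fit_lognormal(ap):
    """MLE of log-normal parameters (mu, sigma) from positive apertures."""
    logs = np.log(ap[ap > 0])
    return logs.mean(), logs.std()
```

With measuring-line spacing set to 1% of the sampling length, as the study recommends, the resulting aperture sample can be tested against the normal, log-normal, and gamma forms reported above.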
Defining the structure characteristics of amorphous materials is one of the fundamental problems that urgently need to be solved in complex materials, because of their complex structure and long-range disorder. In this study, we develop an interpretable deep learning model capable of accurately classifying amorphous configurations and characterizing their structural properties. The results demonstrate that the multi-dimensional hybrid convolutional neural network can classify the two-dimensional (2D) liquids and amorphous solids of molecular dynamics simulations. The classification process makes no a priori assumptions about the amorphous particle environment, and the accuracy is 92.75%, which is better than other convolutional neural networks. Moreover, our model utilizes a gradient-weighted activation-like mapping method, which generates activation-like heat maps that can precisely identify important structures in the amorphous configuration maps. We obtain an order parameter from the heat map and conduct a finite-size scaling analysis of this parameter. Our findings demonstrate that the order parameter effectively captures the amorphous phase transition process across various systems. These results hold significant scientific implications for the study of amorphous structural characteristics via deep learning.
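The activation-like heat map above follows the class-activation-map recipe: combine channel feature maps with per-channel weights, keep the positive part, and normalize. In the gradient-weighted variant the weights come from backpropagated gradients; the sketch below takes the weights as given and is a generic illustration, not the paper's model.

```python
import numpy as np

def activation_map(feature_maps, weights):
    """Weighted combination of channel feature maps -> ReLU -> [0, 1] heat map.
    feature_maps: (C, H, W); weights: (C,)."""
    cam = np.tensordot(weights, feature_maps, axes=1)   # (H, W)
    cam = np.maximum(cam, 0.0)                          # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

The bright regions of such a map are what the study reads off as structurally important particles, from which the order parameter is then extracted.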
Thermoelectric and thermal materials are essential in achieving carbon neutrality. However, the high cost of lattice thermal conductivity calculations and the limited applicability of classical physical models have led to the inefficient development of thermoelectric materials. In this study, we propose a two-stage machine learning framework with physical interpretability, incorporating domain knowledge to rapidly identify high/low thermal conductivity. Specifically, a crystal graph convolutional neural network (CGCNN) is constructed to predict the fundamental physical parameters related to lattice thermal conductivity. Based on these physical parameters, an interpretable machine learning model, the sure independence screening and sparsifying operator (SISSO), is trained to predict the lattice thermal conductivity. We have predicted the lattice thermal conductivity of all available materials in the Open Quantum Materials Database (OQMD) (https://www.oqmd.org/). The proposed approach guides the next step of searching for materials with ultra-high or ultra-low lattice thermal conductivity and promotes the development of new thermal insulation and thermoelectric materials.
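The first stage of SISSO is sure independence screening: rank a large pool of candidate descriptors (built from physical parameters by algebraic operations) by absolute correlation with the target, then fit a sparse model on the survivors. A minimal sketch of the screening step with an invented candidate pool and a planted descriptor; the real SISSO pool and sparsification are far richer.

```python
import numpy as np

def sis_rank(candidates, y):
    """Rank candidate descriptors by |Pearson correlation| with the target."""
    yz = (y - y.mean()) / y.std()
    scores = {}
    for name, f in candidates.items():
        fz = (f - f.mean()) / f.std()
        scores[name] = abs(np.mean(fz * yz))
    return sorted(scores, key=scores.get, reverse=True)
```

The descriptor that survives screening is an explicit formula in physical quantities, which is what gives the second stage its interpretability.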
A liquid launch vehicle is an important carrier in aviation, and its regular operation is essential to maintain space security. In the safety assessment of the liquid launch vehicle body structure, it is necessary to ensure that the assessment model can learn self-response rules from various uncertain data while providing a traceable and interpretable assessment process. Therefore, a belief rule base with interpretability (BRB-i) assessment method for the structural safety status of liquid launch vehicles is proposed, combining data and knowledge. Moreover, an innovative whale optimization algorithm with interpretability constraints is proposed. The experiments are carried out on the liquid launch vehicle safety experiment platform, and information on the safety status of the liquid launch vehicle is obtained by monitoring the detection indicators under the simulation platform. The MSEs of the proposed model are 3.8000e-03, 1.3000e-03, 2.1000e-03, and 1.8936e-04 for 25%, 45%, 65%, and 84% of the training samples, respectively, showing that the proposed model handles small-sample data well. Meanwhile, the belief distribution of the BRB-i model output fits closely with the belief distribution of the expert knowledge settings, which indicates the interpretability of the BRB-i model. Experimental results show that, compared with other methods, the BRB-i model guarantees interpretability while achieving high precision.
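The optimizer used to train the BRB-i parameters is a whale optimization variant. The sketch below is the plain WOA core only (shrinking encirclement of the best-so-far solution plus a logarithmic spiral move), without the paper's interpretability constraints or its improvements, applied to a toy objective.

```python
import numpy as np

def woa_minimize(f, dim, n_whales=20, iters=100, lb=-10.0, ub=10.0, seed=0):
    """Simplified whale optimization: encircle the best-so-far or spiral toward it."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    fit = np.array([f(x) for x in X])
    best, best_f = X[fit.argmin()].copy(), fit.min()
    for it in range(iters):
        a = 2.0 * (1 - it / iters)                  # control factor shrinks 2 -> 0
        for i in range(n_whales):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:                  # shrinking encircling move
                X[i] = best - A * np.abs(C * best - X[i])
            else:                                   # logarithmic spiral move
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            fi = f(X[i])
            if fi < best_f:                         # greedy best-so-far tracking
                best, best_f = X[i].copy(), fi
    return best, best_f
```

In the paper's setting, f would be the BRB-i training error and the interpretability constraints would restrict the feasible parameter region.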
The prediction of processor performance has important reference significance for future processors. Both the accuracy and rationality of the prediction results are required. The hierarchical belief rule base (HBRB) can initially provide a solution to low prediction accuracy. However, the interpretability of the model and the traceability of the results still warrant further investigation. Therefore, a processor performance prediction method based on an interpretable hierarchical belief rule base (HBRB-I) and global sensitivity analysis (GSA) is proposed. The method can yield more reliable prediction results. Evidential reasoning (ER) is first used to evaluate the historical data of the processor, followed by a performance prediction model with interpretability constraints that is constructed based on HBRB-I. Then, the whale optimization algorithm (WOA) is used to optimize the parameters. Furthermore, to test the interpretability of the performance prediction process, GSA is used to analyze the relationship between the input indicators and the predicted output. Finally, based on the UCI processor dataset, the effectiveness and superiority of the method are verified. According to our experiments, the proposed prediction method generates more reliable and accurate estimations than traditional models.
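The GSA step asks how much of the output variance each input explains. A simple binning estimator of the first-order sensitivity index S_i = Var(E[Y|X_i]) / Var(Y) is sketched below; it is a crude alternative to proper Sobol sampling schemes, adequate when plenty of input-output samples are available.

```python
import numpy as np

def first_order_index(x, y, bins=20):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by conditioning on quantile bins of x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    p = np.bincount(idx, minlength=bins) / len(x)
    means = np.array([y[idx == b].mean() if p[b] > 0 else y.mean()
                      for b in range(bins)])
    return float(np.sum(p * (means - y.mean()) ** 2) / y.var())
```

An input with an index near 1 dominates the prediction; an index near 0 flags an indicator the model effectively ignores, which is exactly the traceability evidence the method is after.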
Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches proposed to address multi-modal motion prediction are based on complex machine learning systems that have limited interpretability. Moreover, the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. The authors aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs. First, the evaluation metrics are comprehensively analysed, the main gaps of current benchmarks are identified, and a new holistic evaluation framework is proposed. Then, a method for the assessment of spatial and temporal robustness is introduced by simulating noise in the perception system. To enhance the interpretability of the outputs and generate more balanced results in the proposed evaluation framework, an intent prediction layer that can be attached to multi-modal motion prediction models is proposed. The effectiveness of this approach is assessed through a survey that explores different elements in the visualisation of the multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability.
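The metric analysis above starts from the standard multi-modal benchmark metrics, minADE and minFDE: the best average and best final-point displacement over K predicted modes. A minimal sketch of both:

```python
import numpy as np

def min_ade_fde(pred_modes, gt):
    """pred_modes: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth.
    minADE: best mean point-wise error over modes; minFDE: best final-point error."""
    errs = np.linalg.norm(pred_modes - gt[None], axis=2)   # (K, T)
    return float(errs.mean(axis=1).min()), float(errs[:, -1].min())
```

Because both metrics score only the closest mode, they say nothing about the diversity or admissibility of the other modes, which is one of the benchmark gaps the study identifies.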
Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates excessive irrelevant or ambiguous rules. Therefore, post-processing is crucial not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both techniques, our aim is to enhance the quality of the generated rules. This includes removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits retailers' specific goals, enabling them to gain a better understanding of customer behavior based on factual transactions in the target market. We applied THAPE to a real dataset as a case study to demonstrate its effectiveness. Through this application, we successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 rules that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
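Rule post-processing of this kind filters on the basic rule-quality measures: support, confidence, and lift. A minimal sketch computing all three for a rule over basket transactions (the example baskets are invented):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence and lift of the rule antecedent -> consequent.
    transactions: list of item sets; antecedent/consequent: sets of items."""
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t)
    c = sum(1 for t in transactions if consequent <= t)
    both = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = both / n
    confidence = both / a if a else 0.0
    lift = confidence / (c / n) if c else 0.0          # >1 means positive association
    return support, confidence, lift
```

Lift below 1 marks a rule whose antecedent actually makes the consequent less likely, one of the ambiguous cases a post-processing stage should prune.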
Gas chromatography-mass spectrometry (GC-MS) is an extremely important analytical technique that is widely used in organic geochemistry. It is the only approach to capture biomarker features of organic matter and provides the key evidence for oil-source correlation and thermal maturity determination. However, the conventional way of processing and interpreting the mass chromatogram is both time-consuming and labor-intensive, which increases the research cost and restrains extensive applications of this method. To overcome this limitation, a correlation model is developed based on a convolutional neural network (CNN) to link the mass chromatogram and biomarker features of samples from the Triassic Yanchang Formation, Ordos Basin, China. In this way, the mass chromatogram can be automatically interpreted. This research first performs dimensionality reduction for 15 biomarker parameters via factor analysis and then quantifies the biomarker features using two indexes (i.e., MI and PMI) that represent the organic matter thermal maturity and parent material type, respectively. Subsequently, training, interpretation, and validation are performed multiple times using different CNN models to optimize the model structure and hyper-parameter settings, with the mass chromatogram used as the input and the obtained MI and PMI values used for supervision (labels). The optimized model presents high accuracy in automatically interpreting the mass chromatogram, with R^(2) values typically above 0.85 and 0.80 for the thermal maturity and parent material interpretation results, respectively. The significance of this research is twofold: (i) developing an efficient technique for geochemical research; (ii) more importantly, demonstrating the potential of artificial intelligence in organic geochemistry and providing vital references for future related studies.
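The dimensionality-reduction step above uses factor analysis to compress 15 biomarker parameters into a few indexes. The closely related principal component analysis can be sketched in a few lines via the SVD; this is an illustrative stand-in, not the paper's factor model, and the data below are synthetic.

```python
import numpy as np

def pca(X, k):
    """Project centered data onto the top-k principal directions (via SVD)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()   # variance ratio per component
    return Xc @ Vt[:k].T, explained[:k]
```

When a handful of components explains most of the variance, the component scores can serve as compact supervision targets, analogous to the MI and PMI indexes.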
With the successful application and breakthrough of deep learning technology in image segmentation, there has been continuous development in the field of seismic facies interpretation using convolutional neural networks. These intelligent and automated methods significantly reduce manual labor, particularly the laborious task of manually labeling seismic facies. However, the extensive demand for training data imposes limitations on their wider application. To overcome this challenge, we adopt the UNet architecture as the foundational network structure for seismic facies classification, which has demonstrated effective segmentation results even with small-sample training data. Additionally, we integrate spatial pyramid pooling and dilated convolution modules into the network architecture to enhance the perception of spatial information across a broader range. The seismic facies classification test on the public data from the F3 block verifies the superior performance of our proposed improved network structure in delineating seismic facies boundaries. Comparative analysis against the traditional UNet model reveals that our method achieves more accurate predictive classification results, as evidenced by various image segmentation evaluation metrics; the classification accuracy reaches 96%. Furthermore, the results of seismic facies classification in the seismic slice dimension provide further confirmation of the superior performance of our proposed method, which accurately defines the extent of different seismic facies. This approach holds significant potential for analyzing geological patterns and extracting valuable depositional information.
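The dilated convolution module mentioned above enlarges the receptive field without adding weights: the kernel taps are spread `dilation` pixels apart. A minimal NumPy sketch of a valid 2D dilated cross-correlation (a generic illustration, not the network's implementation):

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation=1):
    """Valid cross-correlation with a dilated kernel: taps are `dilation` pixels apart."""
    kh, kw = kernel.shape
    eh, ew = (kh - 1) * dilation + 1, (kw - 1) * dilation + 1   # effective kernel size
    H, W = img.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + eh:dilation, j:j + ew:dilation] * kernel).sum()
    return out
```

A 3x3 kernel with dilation 2 covers a 5x5 neighborhood with only 9 weights, which is why stacking such layers widens the spatial context cheaply.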
The rapid evolution of scientific and technological advancements and industrial changes has profoundly interconnected countries and regions in the digital information era, creating a globalized environment where effective communication is paramount. Consequently, the demand for proficient interpreting skills within the science and technology sectors has surged, making effective language communication increasingly crucial. This paper explores the potential impact of translation universals on enhancing sci-tech simultaneous interpreter education. By examining the selection of teaching materials, methods, and activities through the lens of translation universals, this study aims to improve the quality of teaching content, innovate instructional approaches, and ultimately enhance the effectiveness of interpreter education. The findings of this research are expected to provide valuable insights for curriculum development and pedagogical strategies in interpreter education.
The Pennsylvanian unconformity, which is a detrital surface, separates the beds of the Permian-aged strata from the Lower Paleozoic in the Central Basin Platform. Seismic data interpretation indicates that the unconformity is an angular unconformity, overlying multiple normal faults, and accompanied by a thrust fault which maximizes the region's structural complexity. Additionally, the Pennsylvanian angular unconformity creates pinch-outs between the beds above and below. We computed the spectral decomposition and reflector convergence attributes and analyzed them to characterize the angular unconformity and faults. The spectral decomposition attribute divides the broadband seismic data into different spectral bands to resolve thin beds and show thickness variations. In contrast, the reflector convergence attribute highlights the location and direction of the pinch-outs as they dip south at angles between 2° and 6°. After reviewing findings from RGB blending of the spectrally decomposed frequencies along the Pennsylvanian unconformity, we observed channel-like features and multiple linear bands in addition to the faults and pinch-outs. It can be inferred that the identified linear bands could be the result of different lithologies associated with the tilting of the beds, and the faults may influence hydrocarbon migration or act as flow barriers entrapping hydrocarbon accumulations. The identification of this angular unconformity and the associated features in the study area is vital for the following reasons: 1) the unconformity surface represents a natural stratigraphic boundary; 2) the stratigraphic pinch-outs act as fluid flow connectivity boundaries; 3) the areal extent of compartmentalized reservoirs' boundaries created by the angular unconformity is better defined; and 4) fault displacements are better understood when planning well locations, as faults can be flow barriers or permeability conduits depending on facies heterogeneity and/or the seal effectiveness of a fault, which can affect hydrocarbon production. The methodology utilized in this study is a further step in the characterization of reservoirs and can be used to expand our knowledge and obtain more information about the Goldsmith Field.
Model checking is an automated formal verification method to verify whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, there is still a lack of quantitative and uncertain property verification for these systems. In uncertain environments, agents must make judicious decisions based on subjective epistemic states. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities and proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas by using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms. Additionally, we further illustrate the practical application of our approach through an example of a train control system.
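To make the fuzzy Kripke semantics concrete, here is a minimal sketch of evaluating the temporal modality EX ("exists next") over a fuzzy Kripke structure, assuming Gödel (min/max) semantics, which is common in fuzzy CTL; the states, degrees, and atomic proposition are illustrative and not taken from the paper:

```python
# Minimal fuzzy Kripke structure: fuzzy transition degrees between states
# and fuzzy truth degrees of an atomic proposition in each state.
# Assumed semantics: Goedel norms (min for "and", max for "exists").

def ex(trans, truth):
    """Degree of 'EX p' in each state: max over successors of
    min(transition degree, truth degree of p in the successor)."""
    states = truth.keys()
    return {
        s: max((min(d, truth[t]) for (u, t), d in trans.items() if u == s),
               default=0.0)
        for s in states
    }

# Toy 3-state system (illustrative values).
trans = {("s0", "s1"): 0.9, ("s0", "s2"): 0.4, ("s1", "s2"): 1.0}
p = {"s0": 0.0, "s1": 0.7, "s2": 0.8}

print(ex(trans, p))  # EX p at s0 = max(min(0.9, 0.7), min(0.4, 0.8)) = 0.7
```

Fixpoint iterations of such one-step operators give the usual until/globally modalities, which is the core of the FCTL model-checking algorithm the abstract refers to.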
Funding: Yulin Science and Technology Bureau Production Project "Research on Smart Agricultural Product Traceability System" (No. CXY-2022-64); Light of West China (No. XAB2022YN10); the China Postdoctoral Science Foundation (No. 2023M740760); Shaanxi Province Key Research and Development Plan (No. 2024SF-YBXM-678).
Abstract: Hyperspectral imagery encompasses spectral and spatial dimensions, reflecting the material properties of objects. Its application proves crucial in search and rescue, concealed target identification, and crop growth analysis. Clustering is an important method of hyperspectral analysis. The vast data volume of hyperspectral imagery, coupled with redundant information, poses significant challenges in swiftly and accurately extracting features for subsequent analysis. Current hyperspectral feature clustering methods, which are mostly studied from the spatial or spectral perspective, do not have strong interpretability, resulting in poor comprehensibility of the algorithms. Therefore, this research introduces a feature clustering algorithm for hyperspectral imagery from an interpretability perspective. It commences with a simulated perception process, proposing an interpretable band selection algorithm to reduce data dimensions. Following this, a multi-dimensional clustering algorithm, rooted in fuzzy and kernel clustering, is developed to highlight intra-class similarities and inter-class differences. An optimized P system is then introduced to enhance computational efficiency. This system coordinates all cells within a mapping space to compute optimal cluster centers, facilitating parallel computation. This approach diminishes sensitivity to initial cluster centers and augments global search capabilities, thus preventing entrapment in local minima and enhancing clustering performance. Experiments were conducted on 300 datasets, comprising both real and simulated data. The results show that the average accuracy (ACC) of the proposed algorithm is 0.86 and the combination measure (CM) is 0.81.
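As background for the fuzzy-and-kernel clustering the abstract builds on, the following is a minimal kernel fuzzy c-means sketch (not the paper's multi-dimensional algorithm or its P-system parallelization); the data, kernel width, and initialization are all illustrative:

```python
import numpy as np

def kernel_fcm(X, V0, m=2.0, sigma=1.0, iters=50):
    """Toy kernel fuzzy c-means: memberships from the kernel-induced
    distance d^2(x, v) = 2 - 2*K(x, v) with a Gaussian kernel K."""
    V = np.asarray(V0, dtype=float)
    for _ in range(iters):
        # Gaussian kernel between every point and every prototype
        K = np.exp(-((X[None, :, :] - V[:, None, :]) ** 2).sum(-1) / (2 * sigma ** 2))
        d2 = np.maximum(2.0 - 2.0 * K, 1e-12)       # kernel-induced squared distance
        U = d2 ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=0)                           # fuzzy membership update
        W = (U ** m) * K
        V = (W @ X) / W.sum(axis=1, keepdims=True)   # kernel-weighted prototype update
    return U, V

# Two well-separated toy "pixel spectra" blobs; prototypes start at two data points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])
U, V = kernel_fcm(X, V0=[X[0], X[-1]])
labels = U.argmax(axis=0)
```

The fuzzy memberships in U are what express "intra-class similarity and inter-class difference" softly, rather than by hard assignment.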
Funding: Supported by the Research Grants Council of Hong Kong (CityU 11305919 and 11308620); the NSFC/RGC Joint Research Scheme N_CityU104/19; the Hong Kong Research Grants Council Collaborative Research Fund (C1002-21G and C1017-22G).
Abstract: Electrocatalytic nitrogen reduction to ammonia has garnered significant attention with the blooming of single-atom catalysts (SACs), showcasing their potential for sustainable and energy-efficient ammonia production. However, cost-effectively designing and screening efficient electrocatalysts remains a challenge. In this study, we have successfully established interpretable machine learning (ML) models to evaluate the catalytic activity of SACs by directly and accurately predicting reaction Gibbs free energy. Our models were trained using non-density functional theory (DFT) calculated features from a dataset comprising 90 graphene-supported SACs. Our results underscore the superior prediction accuracy of the gradient boosting regression (GBR) model for both ΔG(N_(2)→NNH) and ΔG(NH_(2)→NH_(3)), boasting coefficient of determination (R^(2)) scores of 0.972 and 0.984, along with root mean square errors (RMSE) of 0.051 and 0.085 eV, respectively. Moreover, feature importance analysis elucidates that the high accuracy of the GBR model stems from its adept capture of characteristics pertinent to the active center and coordination environment, unveiling the significance of elementary descriptors, with the covalent radius playing a dominant role. Additionally, Shapley additive explanations (SHAP) analysis provides global and local interpretation of the working mechanism of the GBR model. Our analysis identifies that a pyrrole-type coordination (flag=0), d-orbitals with a moderate occupation (N_(d)=5), and a moderate difference in covalent radius (r_(TM-ave) near 140 pm) are conducive to achieving high activity. Furthermore, we extend the prediction of activity to more catalysts without additional DFT calculations, validating the reliability of our feature engineering, model training, and design strategy. These findings not only highlight new opportunities for accelerating catalyst design using non-DFT calculated features, but also shed light on the working mechanism of "black box" ML models. Moreover, the model provides valuable guidance for catalytic material design in multiple proton-electron coupling reactions, particularly in driving sustainable CO_(2), O_(2), and N_(2) conversion.
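The GBR-plus-feature-importance workflow described above can be sketched in a few lines with scikit-learn; note that the descriptors and the target function here are synthetic stand-ins (the paper's 90-SAC dataset and its specific descriptors are not reproduced):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for non-DFT descriptors (e.g. covalent radius,
# d-electron count, coordination flag); the target is a made-up function.
X = rng.random((300, 4))
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2] * X[:, 3] + rng.normal(0, 0.02, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbr = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, gbr.predict(X_te))
print(r2)                          # should be close to 1 on this smooth toy target
print(gbr.feature_importances_)    # impurity-based importance per descriptor
```

SHAP analysis as used in the paper would sit on top of such a fitted model to attribute each individual prediction to the descriptors.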
Funding: Supported by the National Natural Science Foundation of China (72471067, 72431011, 72471238, 72231011, 62303474, 72301286) and the Fundamental Research Funds for the Provincial Universities of Zhejiang (GK239909299001-010).
Abstract: A new approach is proposed in this study for accountable capability improvement based on interpretable capability evaluation using the belief rule base (BRB). Firstly, a capability evaluation model is constructed and optimized. Then, the key sub-capabilities are identified by quantitatively calculating the contributions made by each sub-capability to the overall capability. Finally, the overall capability is improved by optimizing the identified key sub-capabilities. The theoretical contributions of the proposed approach are as follows. (i) An interpretable capability evaluation model is constructed by employing BRB, which can provide complete access to decision-makers. (ii) Key sub-capabilities are identified according to the quantitative contribution analysis results. (iii) Accountable capability improvement is carried out by optimizing only the identified key sub-capabilities. Case study results show that "Surveillance", "Positioning", and "Identification" are identified as key sub-capabilities with a summed contribution of 75.55% in an analytical and deducible fashion based on the interpretable capability evaluation model. As a result, the overall capability is improved by optimizing only the identified key sub-capabilities. The overall capability can be greatly improved from 59.20% to 81.80% with a minimum cost of 397. Furthermore, this paper also investigates how optimizing the BRB with more collected data would affect the evaluation results: optimizing only "Surveillance" and "Positioning" can also improve the overall capability to 81.34% with a cost of 370, which validates the efficiency of the proposed approach.
Funding: Support provided by the National Natural Science Foundation of China (22122802, 22278044, and 21878028); the Chongqing Science Fund for Distinguished Young Scholars (CSTB2022NSCQ-JQX0021); the Fundamental Research Funds for the Central Universities (2022CDJXY-003).
Abstract: To equip data-driven dynamic chemical process models with strong interpretability, we develop a light attention-convolution-gate recurrent unit (LACG) architecture with three sub-modules (a basic module, a brand-new light attention module, and a residue module) that are specially designed to learn the general dynamic behavior, transient disturbances, and other input factors of chemical processes, respectively. Combined with a hyperparameter optimization framework, Optuna, the effectiveness of the proposed LACG is tested by distributed control system data-driven modeling experiments on the discharge flowrate of an actual deethanization process. The LACG model provides significant advantages in prediction accuracy and model generalization compared with other models, including the feedforward neural network, convolution neural network, long short-term memory (LSTM), and attention-LSTM. Moreover, compared with the simulation results of a deethanization model built using Aspen Plus Dynamics V12.1, the LACG parameters are demonstrated to be interpretable, and more details on the variable interactions can be observed from the model parameters in comparison with the traditional interpretable model attention-LSTM. This contribution enriches interpretable machine learning knowledge and provides a reliable method with high accuracy for actual chemical process modeling, paving a route to intelligent manufacturing.
Funding: Funded by the National Key R&D Program of China (Grant No. 2023YFE0106800) and the Humanity and Social Science Youth Foundation of the Ministry of Education of China (Grant No. 22YJC630109).
Abstract: Traffic flow forecasting constitutes a crucial component of intelligent transportation systems (ITSs). Numerous studies have been conducted on traffic flow forecasting during the past decades. However, most existing studies have concentrated on developing advanced algorithms or models to attain state-of-the-art forecasting accuracy. For real-world ITS applications, the interpretability of the developed models is extremely important but has largely been ignored. This study presents an interpretable traffic flow forecasting framework based on popular tree-ensemble algorithms. The framework comprises multiple key components integrated into a highly flexible and customizable multi-stage pipeline, enabling the seamless incorporation of various algorithms and tools. To evaluate the effectiveness of the framework, the developed tree-ensemble models and three typical categories of baseline models, including statistical time series, shallow learning, and deep learning, were compared on three datasets collected from different types of roads (i.e., arterial, expressway, and freeway). Further, the study delves into an in-depth interpretability analysis of the most competitive tree-ensemble models using six categories of interpretable machine learning methods. Experimental results highlight the potential of the proposed framework. The tree-ensemble models developed within this framework achieve competitive accuracy while maintaining high inference efficiency similar to statistical time series and shallow learning models. Meanwhile, these tree-ensemble models offer interpretability from multiple perspectives via interpretable machine-learning techniques. The proposed framework is anticipated to provide reliable and trustworthy decision support across various ITS applications.
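One of the standard interpretability methods applied to tree ensembles like those above is permutation importance; a minimal sketch with scikit-learn follows, where the "traffic" features are synthetic stand-ins (a periodic time-of-day signal, a lagged flow value, and pure noise):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Synthetic features: [time-of-day signal, lag-1 flow, irrelevant noise]
X = rng.random((500, 3))
y = 10 * np.sin(2 * np.pi * X[:, 0]) + 5 * X[:, 1] + rng.normal(0, 0.1, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Shuffle each column in turn and measure the drop in score.
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(imp.importances_mean)  # the pure-noise feature should score near zero
```

The same recipe applies to any fitted tree-ensemble regressor, which is one reason such models are attractive when interpretability matters.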
Abstract: An algorithm named InterOpt for optimizing operational parameters is proposed based on interpretable machine learning, and is demonstrated via the optimization of shale gas development. InterOpt consists of three parts: a neural network is used to construct an emulator of the actual drilling and hydraulic fracturing process in the vector space (i.e., a virtual environment); the Shapley value method in interpretable machine learning is applied to analyze the impact of geological and operational parameters in each well (i.e., single-well feature impact analysis); and ensemble randomized maximum likelihood (EnRML) is conducted to optimize the operational parameters to comprehensively improve the efficiency of shale gas development and reduce the average cost. In the experiment, InterOpt provides different drilling and fracturing plans for each well according to its specific geological conditions, and finally achieves an average cost reduction of 9.7% for a case study with 104 wells.
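The Shapley value method used for the single-well feature impact analysis can be computed exactly when only a handful of features are involved; the following is a self-contained sketch with a made-up two-parameter "efficiency" function (the parameter names and values are illustrative, not InterOpt's):

```python
from itertools import combinations
from math import factorial

def shapley(features, value):
    """Exact Shapley values for a set function value(frozenset).
    Feasible only for a small number of features (2^n subsets)."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(S | {f}) - value(S))  # weighted marginal gain
        phi[f] = total
    return phi

# Toy "production efficiency" model: additive terms plus one interaction.
base = {"stage_spacing": 2.0, "fluid_volume": 1.0}
def value(S):
    v = sum(base[f] for f in S)
    if {"stage_spacing", "fluid_volume"} <= S:
        v += 0.5  # interaction term, split equally between the two by symmetry
    return v

phi = shapley(list(base), value)
print(phi)  # stage_spacing -> 2.25, fluid_volume -> 1.25
```

For models with many features, approximate estimators (as in SHAP) replace this exponential enumeration, but the attribution being computed is the same.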
Funding: The National Natural Science Foundation of China (62176048) provided funding for this research.
Abstract: The interpretability of deep learning models has emerged as a compelling area in artificial intelligence research. The safety criteria for medical imaging are highly stringent, and models are required to provide explanations. However, existing convolutional neural network (CNN) solutions for left ventricular segmentation are viewed only in terms of inputs and outputs. Thus, the interpretability of CNNs has come into the spotlight. Since medical imaging data are limited, many popular transfer models for fine-tuning medical imaging models have been built from the massive public ImageNet dataset via transfer learning. Unfortunately, this generates many unreliable parameters and makes it difficult to generate plausible explanations from these models. In this study, we trained from scratch rather than relying on transfer learning, creating a novel interpretable approach for autonomously segmenting the left ventricle in cardiac MRI. Our enhanced GPU training system implemented interpretable global average pooling for graphics using deep learning. The deep learning tasks were simplified, including data management, neural network architecture, and training. Our system monitored and analyzed the gradient changes of different layers with dynamic visualizations in real time and selected the optimal deployment model. Our results demonstrated that the proposed method was feasible and efficient: the Dice coefficient reached 94.48%, and the accuracy reached 99.7%. It was found that no current transfer learning model could perform comparably to the ImageNet transfer learning architectures. This model is lightweight and more convenient to deploy on mobile devices than transfer learning models.
Funding: This work is supported by the National Natural Science Foundation of China (Nos. 72071150, 71871174).
Abstract: As the banking industry gradually steps into the digital era of Bank 4.0, business competition is becoming increasingly fierce, and banks are also facing the problem of massive customer churn. To better maintain their customer resources, it is crucial for banks to accurately predict customers with a tendency to churn. Aiming at a typical binary classification problem like customer churn, this paper establishes an early-warning model for credit card customer churn. A dual search algorithm named GSAIBAS, incorporating the Golden Sine Algorithm (GSA) and an Improved Beetle Antennae Search (IBAS), is proposed to optimize the parameters of the CatBoost algorithm, which forms the GSAIBAS-CatBoost model. In particular, considering that the BAS algorithm has simple parameters and easily falls into local optima, the sigmoid nonlinear convergence factor and the Lévy flight equation are introduced to adjust the fixed step size of the beetle. Then this improved BAS algorithm with variable step size is fused with the GSA to form the GSAIBAS algorithm, which achieves dual optimization. Moreover, an empirical analysis is made on the dataset of credit card customers from the Analyttica official platform. The empirical results show that the Area Under Curve (AUC) and recall of the proposed model reach 96.15% and 95.56%, respectively, which are significantly better than the other 9 common machine learning models. Compared with several existing optimization algorithms, the GSAIBAS algorithm has higher precision in parameter optimization for CatBoost. Combined with two other customer churn datasets from the Kaggle data platform, it is further verified that the model proposed in this paper is also valid and feasible.
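For readers unfamiliar with beetle antennae search, here is a minimal plain-BAS sketch on a toy objective; the GSAIBAS improvements described above (sigmoid convergence factor, Lévy-flight step, Golden Sine fusion) are not reproduced, and a simple geometric step decay stands in for them:

```python
import numpy as np

def bas(f, x0, step=1.0, d=0.5, iters=200, seed=0):
    """Minimal beetle antennae search: probe left/right along a random
    direction ("antennae") and step toward the side that smells better."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        b = rng.normal(size=x.shape)
        b /= np.linalg.norm(b)                        # random antenna direction
        sign = np.sign(f(x + d * b) - f(x - d * b))   # which antenna is worse?
        x = x - step * sign * b                       # move away from the worse side
        step *= 0.98                                  # simple decay (GSAIBAS uses a
                                                      # sigmoid convergence factor)
    return x

sphere = lambda v: float(np.sum(v ** 2))
x_opt = bas(sphere, [3.0, -2.0])
print(x_opt, sphere(x_opt))  # should approach the origin
```

In the paper's setting, f would be a cross-validated loss of CatBoost as a function of its hyperparameters rather than this toy sphere function.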
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 42177139 and 41941017) and the Natural Science Foundation Project of Jilin Province, China (Grant No. 20230101088JC). The authors would like to thank the anonymous reviewers for their comments and suggestions.
Abstract: The aperture of natural rock fractures significantly affects the deformation and strength properties of rock masses, as well as the hydrodynamic properties of fractured rock masses. Conventional measurement methods are inadequate for collecting data on high-steep rock slopes in complex mountainous regions. This study establishes a high-resolution three-dimensional model of a rock slope using unmanned aerial vehicle (UAV) multi-angle nap-of-the-object photogrammetry to obtain edge feature points of fractures. Fracture opening morphology is characterized using coordinate projection and transformation. The fracture central axis is determined using vertical measuring lines, allowing for the interpretation of the aperture of adaptive fracture shape. The feasibility and reliability of the new method are verified at a construction site of a railway in southeast Tibet, China. The study shows that the fracture aperture has significant interval and size effects. The optimal sampling length for fractures is approximately 0.5-1 m, and the optimal aperture interpretation results can be achieved when the measuring line spacing is 1% of the sampling length. Tensile fractures in the study area generally have larger apertures than shear fractures, and their tendency to increase with slope height is also greater than that of shear fractures. The aperture of tensile fractures is generally positively correlated with their trace length, while the correlation between the aperture of shear fractures and their trace length appears to be weak. Fractures of different orientations exhibit certain differences in their distribution of aperture, but generally follow normal, log-normal, and gamma distributions. This study provides essential data support for rock and slope stability evaluation, which is of significant practical importance.
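The measuring-line idea above can be illustrated in two dimensions: sample vertical lines across a fracture and read the aperture as the separation between the interpolated upper and lower edges at each line. This is a simplified stand-in for the paper's 3-D central-axis construction, with made-up edge coordinates:

```python
import numpy as np

def aperture_profile(upper, lower, n_lines=5):
    """Aperture at n_lines equally spaced vertical measuring lines,
    given upper/lower fracture-edge polylines as (x, z) arrays."""
    xs = np.linspace(max(upper[0, 0], lower[0, 0]),
                     min(upper[-1, 0], lower[-1, 0]), n_lines)
    up = np.interp(xs, upper[:, 0], upper[:, 1])  # upper edge at each line
    lo = np.interp(xs, lower[:, 0], lower[:, 1])  # lower edge at each line
    return xs, up - lo

# Illustrative edge polylines (metres): a fracture widening toward the middle.
upper = np.array([[0.0, 0.010], [0.5, 0.018], [1.0, 0.012]])
lower = np.array([[0.0, 0.000], [0.5, 0.002], [1.0, 0.001]])
xs, ap = aperture_profile(upper, lower)
print(ap.mean())  # mean aperture along the sampled lines
```

Densifying the measuring lines (the paper recommends a spacing of 1% of the sampling length) refines this profile toward the continuous aperture distribution.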
Funding: National Natural Science Foundation of China (Grant No. 11702289); the Key Core Technology and Generic Technology Research and Development Project of Shanxi Province, China (Grant No. 2020XXX013); the National Key Research and Development Project of China.
Abstract: Defining the structural characteristics of amorphous materials is one of the fundamental problems that need to be solved urgently in complex materials because of their complex structure and long-range disorder. In this study, we develop an interpretable deep learning model capable of accurately classifying amorphous configurations and characterizing their structural properties. The results demonstrate that the multi-dimensional hybrid convolutional neural network can classify two-dimensional (2D) liquids and amorphous solids from molecular dynamics simulations. The classification process does not make a priori assumptions about the amorphous particle environment, and the accuracy is 92.75%, which is better than that of other convolutional neural networks. Moreover, our model utilizes the gradient-weighted activation-like mapping method, which generates activation-like heat maps that can precisely identify important structures in the amorphous configuration maps. We obtain an order parameter from the heat map and conduct finite-size scaling analysis of this parameter. Our findings demonstrate that the order parameter effectively captures the amorphous phase transition process across various systems. These results hold significant scientific implications for the study of amorphous structural characteristics via deep learning.
Funding: Support of the National Natural Science Foundation of China (Grant Nos. 12104356 and 52250191); the China Postdoctoral Science Foundation (Grant No. 2022M712552); the Opening Project of Shanghai Key Laboratory of Special Artificial Microstructure Materials and Technology (Grant No. Ammt2022B-1); the Fundamental Research Funds for the Central Universities; and support by the HPC Platform, Xi'an Jiaotong University.
Abstract: Thermoelectric and thermal materials are essential in achieving carbon neutrality. However, the high cost of lattice thermal conductivity calculations and the limited applicability of classical physical models have led to the inefficient development of thermoelectric materials. In this study, we propose a two-stage machine learning framework with physical interpretability incorporating domain knowledge to calculate high/low thermal conductivity rapidly. Specifically, a crystal graph convolutional neural network (CGCNN) is constructed to predict the fundamental physical parameters related to lattice thermal conductivity. Based on these physical parameters, an interpretable machine learning model, the sure independence screening and sparsifying operator (SISSO), is trained to predict the lattice thermal conductivity. We have predicted the lattice thermal conductivity of all available materials in the Open Quantum Materials Database (OQMD) (https://www.oqmd.org/). The proposed approach guides the next step of searching for materials with ultra-high or ultra-low lattice thermal conductivity and promotes the development of new thermal insulation materials and thermoelectric materials.
Funding: This work was supported in part by the Natural Science Foundation of China under Grants 62203461 and 62203365; in part by the Postdoctoral Science Foundation of China under Grant No. 2020M683736; in part by the Teaching Reform Project of Higher Education in Heilongjiang Province under Grant Nos. SJGY20210456 and SJGY20210457; in part by the Natural Science Foundation of Heilongjiang Province of China under Grant No. LH2021F038; and in part by the Graduate Academic Innovation Project of Harbin Normal University under Grant Nos. HSDSSCX2022-17, HSDSSCX2022-18 and HSDSSCX2022-19.
Abstract: A liquid launch vehicle is an important carrier in aviation, and its regular operation is essential to maintaining space security. In the safety assessment of a liquid launch vehicle's body structure, it is necessary to ensure that the assessment model can learn self-response rules from various uncertain data while providing a traceable and interpretable assessment process. Therefore, a belief rule base with interpretability (BRB-i) assessment method for the safety status of liquid launch vehicle structures is proposed, combining data and knowledge. Moreover, an innovative whale optimization algorithm with interpretable constraints is proposed. The experiments are carried out based on the liquid launch vehicle safety experiment platform, and the information on the safety status of the liquid launch vehicle is obtained by monitoring the detection indicators under the simulation platform. The MSEs of the proposed model are 3.8000e-03, 1.3000e-03, 2.1000e-03, and 1.8936e-04 for 25%, 45%, 65%, and 84% of the training samples, respectively. It can be seen that the proposed model shows a good ability to handle small-sample data. Meanwhile, the belief distribution of the BRB-i model output fits closely with the belief distribution of the expert knowledge settings, which indicates the interpretability of the BRB-i model. Experimental results show that, compared with other methods, the BRB-i model guarantees the model's interpretability along with high precision of the experimental results.
Funding: This work is supported in part by the Postdoctoral Science Foundation of China under Grant No. 2020M683736; in part by the Teaching Reform Project of Higher Education in Heilongjiang Province under Grant No. SJGY20210456; and in part by the Natural Science Foundation of Heilongjiang Province of China under Grant No. LH2021F038.
Abstract: The prediction of processor performance has important reference significance for future processors. Both the accuracy and rationality of the prediction results are required. The hierarchical belief rule base (HBRB) can initially provide a solution to low prediction accuracy. However, the interpretability of the model and the traceability of the results still warrant further investigation. Therefore, a processor performance prediction method based on an interpretable hierarchical belief rule base (HBRB-I) and global sensitivity analysis (GSA) is proposed. The method can yield more reliable prediction results. Evidential reasoning (ER) is first used to evaluate the historical data of the processor, followed by a performance prediction model with interpretability constraints that is constructed based on HBRB-I. Then, the whale optimization algorithm (WOA) is used to optimize the parameters. Furthermore, to test the interpretability of the performance prediction process, GSA is used to analyze the relationship between the input and the predicted output indicators. Finally, based on the UCI database processor dataset, the effectiveness and superiority of the method are verified. According to our experiments, our prediction method generates more reliable and accurate estimations than traditional models.
Funding: European Commission, Joint Research Center (Grant No. HUMAINT); Ministerio de Ciencia e Innovación (Grant No. PID2020-114924RB-I00); Comunidad de Madrid (Grant No. S2018/EMT-4362 SEGVAUTO 4.0-CM).
Abstract: Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches proposed to address multi-modal motion prediction are based on complex machine learning systems that have limited interpretability. Moreover, the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. The authors aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs. First, the evaluation metrics are comprehensively analysed, the main gaps of current benchmarks are identified, and a new holistic evaluation framework is proposed. Then, a method for the assessment of spatial and temporal robustness is introduced by simulating noise in the perception system. To enhance the interpretability of the outputs and generate more balanced results in the proposed evaluation framework, an intent prediction layer that can be attached to multi-modal motion prediction models is proposed. The effectiveness of this approach is assessed through a survey that explores different elements in the visualisation of the multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability.
Abstract: Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates excessive irrelevant or ambiguous rules. Therefore, post-processing is crucial not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both techniques, our aim is to enhance the quality of analysis of the generated rules. This includes removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits specific goals for retailers, enabling them to gain a better understanding of customer behavior based on factual transactions in the target market. We applied THAPE to a real dataset as a case study in this paper to demonstrate its effectiveness. Through this application, we successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 rules that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
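The descriptive stage that THAPE's post-processing builds on can be illustrated with a tiny support/confidence pass over toy baskets; this sketch covers only 1- and 2-itemsets and does not reproduce THAPE's predictive filtering:

```python
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=0.4, min_conf=0.7):
    """Tiny association-rule pass: count 1- and 2-itemsets, then keep
    rules A -> B meeting the support and confidence thresholds."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        counts.update((i,) for i in items)        # 1-itemsets
        counts.update(combinations(items, 2))     # 2-itemsets
    rules = []
    for itemset, c in counts.items():
        if len(itemset) != 2 or c / n < min_support:
            continue
        a, b = itemset
        for x, y in ((a, b), (b, a)):             # try both rule directions
            conf = c / counts[(x,)]               # P(y | x)
            if conf >= min_conf:
                rules.append((x, y, round(c / n, 2), round(conf, 2)))
    return rules

baskets = [["milk", "bread"], ["milk", "bread", "eggs"],
           ["bread", "eggs"], ["milk", "bread"], ["eggs"]]
print(mine_rules(baskets))  # bread -> milk (conf 0.75) and milk -> bread (conf 1.0)
```

Post-processing engines like THAPE operate on the (typically thousands of) rules such a pass emits, pruning redundancy and ranking the survivors by interestingness.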
Funding: Financially supported by the China Postdoctoral Science Foundation (Grant No. 2023M730365) and the Natural Science Foundation of Hubei Province of China (Grant No. 2023AFB232).
Abstract: Gas chromatography-mass spectrometry (GC-MS) is an extremely important analytical technique that is widely used in organic geochemistry. It is the only approach to capture biomarker features of organic matter and provides key evidence for oil-source correlation and thermal maturity determination. However, the conventional way of processing and interpreting the mass chromatogram is both time-consuming and labor-intensive, which increases the research cost and restrains extensive application of this method. To overcome this limitation, a correlation model is developed based on a convolutional neural network (CNN) to link the mass chromatogram and biomarker features of samples from the Triassic Yanchang Formation, Ordos Basin, China. In this way, the mass chromatogram can be automatically interpreted. This research first performs dimensionality reduction for 15 biomarker parameters via factor analysis and then quantifies the biomarker features using two indexes (i.e., MI and PMI) that represent the organic matter thermal maturity and parent material type, respectively. Subsequently, training, interpretation, and validation are performed multiple times using different CNN models to optimize the model structure and hyper-parameter settings, with the mass chromatogram used as the input and the obtained MI and PMI values used for supervision (labels). The optimized model presents high accuracy in automatically interpreting the mass chromatogram, with R^2 values typically above 0.85 and 0.80 for the thermal maturity and parent material interpretation results, respectively. The significance of this research is twofold: (i) developing an efficient technique for geochemical research; (ii) more importantly, demonstrating the potential of artificial intelligence in organic geochemistry and providing vital references for future related studies.
Funding: Funded by the Fundamental Research Project of CNPC Geophysical Key Lab (2022DQ0604-4) and the Strategic Cooperation Technology Projects of China National Petroleum Corporation and China University of Petroleum-Beijing (ZLZX 202003).
Abstract: With the successful application and breakthrough of deep learning technology in image segmentation, there has been continuous development in the field of seismic facies interpretation using convolutional neural networks. These intelligent and automated methods significantly reduce manual labor, particularly in the laborious task of manually labeling seismic facies. However, the extensive demand for training data imposes limitations on their wider application. To overcome this challenge, we adopt the UNet architecture as the foundational network structure for seismic facies classification, which has demonstrated effective segmentation results even with small-sample training data. Additionally, we integrate spatial pyramid pooling and dilated convolution modules into the network architecture to enhance the perception of spatial information across a broader range. The seismic facies classification test on the public data from the F3 block verifies the superior performance of our proposed improved network structure in delineating seismic facies boundaries. Comparative analysis against the traditional UNet model reveals that our method achieves more accurate predictive classification results, as evidenced by various evaluation metrics for image segmentation. Notably, the classification accuracy reaches an impressive 96%. Furthermore, the results of seismic facies classification in the seismic slice dimension provide further confirmation of the superior performance of our proposed method, which accurately defines the range of different seismic facies. This approach holds significant potential for analyzing geological patterns and extracting valuable depositional information.
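The dilated convolution module mentioned above widens a layer's receptive field without adding parameters; a minimal 1-D NumPy illustration of the operation follows (the full 2-D modules in the segmentation network are not reproduced):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """'Valid' 1-D convolution with dilated taps:
    output[i] = sum_k w[k] * x[i + k*dilation].
    A kernel of size k covers a span of (k-1)*dilation + 1 samples."""
    k = len(w)
    span = (k - 1) * dilation + 1
    return np.array([np.dot(w, x[i:i + span:dilation])
                     for i in range(len(x) - span + 1)])

x = np.arange(8, dtype=float)            # 0..7
w = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, w, dilation=1))  # moving sums of adjacent triples
print(dilated_conv1d(x, w, dilation=2))  # taps 2 apart: receptive field of 5
```

Stacking layers with increasing dilation rates (1, 2, 4, ...) grows the receptive field exponentially, which is what lets the network "perceive spatial information across a broader range" without extra parameters.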
Abstract: The rapid evolution of scientific and technological advancement and industrial change has profoundly interconnected countries and regions in the digital information era, creating a globalized environment in which effective communication is paramount. Consequently, the demand for proficient interpreting skills within the science and technology sectors has surged, making effective language communication increasingly crucial. This paper explores the potential impact of translation universals on enhancing sci-tech simultaneous interpreter education. By examining the selection of teaching materials, methods, and activities through the lens of translation universals, this study aims to improve the quality of teaching content, innovate instructional approaches, and ultimately enhance the effectiveness of interpreter education. The findings are expected to provide valuable insights for curriculum development and pedagogical strategies in interpreter education.
Abstract: The Pennsylvanian unconformity, a detrital surface, separates the Permian-aged strata from the Lower Paleozoic in the Central Basin Platform. Seismic data interpretation indicates that the unconformity is an angular unconformity, overlying multiple normal faults and accompanied by a thrust fault that maximizes the region's structural complexity. Additionally, the Pennsylvanian angular unconformity creates pinch-outs between the beds above and below it. We computed the spectral decomposition and reflector convergence attributes and analyzed them to characterize the angular unconformity and faults. The spectral decomposition attribute divides the broadband seismic data into different spectral bands to resolve thin beds and show thickness variations. In contrast, the reflector convergence attribute highlights the location and direction of the pinch-outs, which dip south at angles between 2° and 6°. After reviewing the RGB blending of the spectrally decomposed frequencies along the Pennsylvanian unconformity, we observed channel-like features and multiple linear bands in addition to the faults and pinch-outs. It can be inferred that the identified linear bands could result from different lithologies associated with the tilting of the beds, and the faults may influence hydrocarbon migration or act as flow barriers that entrap hydrocarbon accumulations. The identification of this angular unconformity and the associated features in the study area is vital for the following reasons: 1) the unconformity surface represents a natural stratigraphic boundary; 2) the stratigraphic pinch-outs act as fluid-flow connectivity boundaries; 3) the areal extents of the compartmentalized reservoir boundaries created by the angular unconformity are better defined; and 4) fault displacements are better understood when planning well locations, as faults can be flow barriers or permeability conduits depending on facies heterogeneity and/or the seal effectiveness of a fault, which can affect hydrocarbon production. The methodology utilized in this study is a further step in reservoir characterization and can be used to expand our knowledge of the Goldsmith Field.
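The spectral decomposition and RGB blending steps can be sketched as follows: a short-window Fourier kernel extracts magnitudes at three chosen frequencies along a trace, and the three resulting bands are normalized into colour triplets for blending. The window length, sampling interval, and frequency picks below are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def spectral_decompose(trace, dt, freqs, win=32):
    """Short-window DFT magnitude of a trace at selected frequencies."""
    n = len(trace)
    t = np.arange(win) * dt
    out = np.zeros((len(freqs), n - win + 1))
    for k, f in enumerate(freqs):
        # Complex exponential tapered by a Hanning window.
        kernel = np.exp(-2j * np.pi * f * t) * np.hanning(win)
        for i in range(n - win + 1):
            out[k, i] = abs(np.dot(trace[i:i + win], kernel))
    return out

rng = np.random.default_rng(2)
dt = 0.004                       # assumed 4 ms sampling interval
trace = rng.standard_normal(256) # placeholder for a real seismic trace
bands = spectral_decompose(trace, dt, freqs=(15.0, 30.0, 60.0))

# RGB blend: map low/mid/high band magnitudes to red/green/blue channels.
rgb = bands / bands.max(axis=1, keepdims=True)  # normalise each band to [0, 1]
rgb = rgb.T                                     # (samples, 3) colour triplets
```

Applied along an interpreted horizon rather than a single trace, the same blending idea is what reveals the channel-like features and linear bands discussed above.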
Funding: The work is partially supported by the Natural Science Foundation of Ningxia (Grant No. AAC03300), the National Natural Science Foundation of China (Grant No. 61962001), and the Graduate Innovation Project of North Minzu University (Grant No. YCX23152).
Abstract: Model checking is an automated formal verification method for verifying whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, quantitative and uncertain property verification for these systems is still lacking. In uncertain environments, agents must make judicious decisions based on subjective epistemic knowledge. To verify epistemic and measurable properties of multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities, proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas by using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and further illustrate the practical application of our approach through an example of a train control system.
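The underlying FCTL evaluation that the reduction targets can be sketched minimally: over a fuzzy Kripke structure with Gödel (min/max) semantics, the degree of an EX formula at a state is the best one-step combination of transition degree and successor satisfaction. The states, transition degrees, and atomic proposition below are invented for illustration; this is not the paper's FCTLK-to-FCTL translation, only the basic evaluation idea it reduces to.

```python
# Fuzzy Kripke structure: R[s][t] is the degree to which t is a successor of s.
states = ["s0", "s1", "s2"]
R = {
    "s0": {"s1": 0.9, "s2": 0.4},
    "s1": {"s2": 0.7},
    "s2": {"s0": 1.0},
}

# Fuzzy valuation of an atomic proposition p in each state.
p = {"s0": 0.2, "s1": 0.8, "s2": 0.5}

def ex(phi):
    """EX phi under min/max semantics: best degree of reaching, in one
    step, a state satisfying phi (0.0 if a state has no successors)."""
    return {s: max((min(d, phi[t]) for t, d in R.get(s, {}).items()),
                   default=0.0)
            for s in states}

ex_p = ex(p)
# s0: max(min(0.9, 0.8), min(0.4, 0.5)) = 0.8
# s1: min(0.7, 0.5) = 0.5
# s2: min(1.0, 0.2) = 0.2
```

Nested temporal operators are handled the same way, by composing such state-to-degree maps; the paper's contribution is translating the epistemic modalities of FCTLK into this plain FCTL setting.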