Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates an excessive number of irrelevant or ambiguous rules. Therefore, post-processing is crucial not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both techniques, our aim is to enhance the quality of analyzing the generated rules. This includes removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits specific goals for retailers, enabling them to gain a better understanding of customer behavior based on factual transactions in the target market. We applied THAPE to a real dataset as a case study to demonstrate its effectiveness. Through this application, we successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 rules that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
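The descriptive side of such post-processing reduces to computing support, confidence, and lift for each candidate rule and discarding those below thresholds. The sketch below is not THAPE itself, only the baseline filtering step it builds on; the thresholds and basket data are illustrative, and only single-item antecedents/consequents are mined.

```python
from itertools import combinations

def mine_and_filter_rules(transactions, min_support=0.4, min_conf=0.7, min_lift=1.0):
    """Mine 1-to-1 association rules and keep only those passing
    support, confidence and lift thresholds."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    support = {}
    for k in (1, 2):                      # itemsets of size 1 and 2
        for combo in combinations(items, k):
            s = sum(1 for t in transactions if set(combo) <= t) / n
            if s >= min_support:
                support[combo] = s
    rules = []
    for combo, s_ab in support.items():
        if len(combo) != 2:
            continue
        a, b = combo
        for ant, con in ((a, b), (b, a)):  # try both rule directions
            if (ant,) not in support or (con,) not in support:
                continue
            conf = s_ab / support[(ant,)]
            lift = conf / support[(con,)]
            if conf >= min_conf and lift >= min_lift:
                rules.append((ant, con, round(conf, 3), round(lift, 3)))
    return rules

baskets = [{"beer", "diapers"}, {"beer", "diapers"},
           {"beer", "diapers"}, {"beer"}, {"milk"}]
print(mine_and_filter_rules(baskets))
# [('beer', 'diapers', 0.75, 1.25), ('diapers', 'beer', 1.0, 1.25)]
```

A lift above 1.0 signals positive correlation, which is why lift filtering removes many spurious high-confidence rules.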
Fault diagnosis plays an irreplaceable role in the normal operation of equipment. A fault diagnosis model is often required to be interpretable to increase the trust between humans and the model. Owing to its understandable knowledge expression and transparent reasoning process, the belief rule base (BRB) has extensive applications as an interpretable expert system in fault diagnosis. Optimization is an effective means of weakening the subjectivity of experts in BRB, but the interpretability of BRB may be weakened in the process. Hence, to obtain a credible result, the factors that weaken interpretability in the BRB-based fault diagnosis model are first analyzed; they manifest as deviation from the initial judgement of experts and over-optimization of parameters. For these two factors, three indexes are proposed, namely the consistency index of rules, the consistency index of the rule base, and the over-optimization index, to measure the interpretability of the optimized model. Considering both the accuracy and interpretability of a model, an improved coordinate ascent (I-CA) algorithm is proposed to fine-tune the parameters of the BRB-based fault diagnosis model. In I-CA, a one-dimensional search combining the advance-and-retreat method with the golden-section method is employed. Furthermore, a random optimization sequence and an adaptive step size are proposed to improve the accuracy of the model. Finally, a case study of fault diagnosis in aerospace relays based on BRB is carried out to verify the effectiveness of the proposed method.
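The one-dimensional search inside a coordinate-wise optimizer of this kind can be sketched generically: each parameter is optimized in turn by golden-section search. In this sketch the advance-and-retreat bracketing step is replaced by fixed bounds, and the BRB objective by a toy quadratic, so it illustrates the search mechanics only, not the paper's I-CA.

```python
import math

GR = (math.sqrt(5) - 1) / 2  # golden-ratio conjugate, ~0.618

def golden_section_min(f, lo, hi, tol=1e-6):
    """Golden-section search for a minimum of a unimodal f on [lo, hi]."""
    a, b = lo, hi
    c, d = b - GR * (b - a), a + GR * (b - a)
    while b - a > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - GR * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + GR * (b - a)
    return (a + b) / 2

def coordinate_descent(f, x0, bounds, sweeps=20):
    """Cycle through coordinates, solving each 1-D sub-problem
    with golden-section search."""
    x = list(x0)
    for _ in range(sweeps):
        for i, (lo, hi) in enumerate(bounds):
            g = lambda t: f(x[:i] + [t] + x[i + 1:])
            x[i] = golden_section_min(g, lo, hi)
    return x

# toy objective with minimum at (1, -2)
f = lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2
x = coordinate_descent(f, [0.0, 0.0], [(-5, 5), (-5, 5)])
print([round(v, 4) for v in x])  # [1.0, -2.0]
```

Randomizing the coordinate order per sweep, as I-CA does, helps avoid the cyclic bias of a fixed sweep order.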
Early fault diagnosis of bearings is crucial for ensuring safe and reliable operations. Convolutional neural networks (CNNs) have achieved significant breakthroughs in machinery fault diagnosis. However, complex and varying working conditions can lead to inter-class similarity and intra-class variability in datasets, making it more challenging for CNNs to learn discriminative features. Furthermore, CNNs are often considered “black boxes” and lack sufficient interpretability in the fault diagnosis field. To address these issues, this paper introduces a residual mixed domain attention CNN method, referred to as RMA-CNN. This method comprises multiple residual mixed domain attention modules (RMAMs), each employing an attention mechanism to emphasize meaningful features in both the time and channel domains. This significantly enhances the network’s ability to learn fault-related features. Moreover, we conduct an in-depth analysis of the inherent feature learning mechanism of the attention module RMAM to improve the interpretability of CNNs in fault diagnosis applications. Experiments conducted on two datasets, a high-speed aeronautical bearing dataset and a motor bearing dataset, demonstrate that RMA-CNN achieves remarkable results in diagnostic tasks.
Prediction systems are an important aspect of intelligent decision-making. In engineering practice, the complex system structure and the external environment introduce many uncertain factors into the model, which influence its modeling accuracy. The belief rule base (BRB) can implement nonlinear modeling and express a variety of uncertain information, including fuzziness, ignorance, randomness, etc. However, the BRB system also has two main problems. Firstly, modeling methods based on expert knowledge make it difficult to guarantee the model’s accuracy. Secondly, interpretability is not considered in the optimization process of current research, resulting in the destruction of the interpretability of BRB. To balance the accuracy and interpretability of the model, a self-growth belief rule base with interpretability constraints (SBRB-I) is proposed. The reasoning process of the SBRB-I model is based on the evidential reasoning (ER) approach. Moreover, the self-growth learning strategy ensures effective cooperation between the data-driven model and the expert system. A case study showed that both the accuracy and interpretability of the model can be guaranteed. The SBRB-I model has good application prospects in prediction systems.
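At the heart of any BRB system is rule activation: inputs are matched against expert-given referential values, and the activated rules' consequents are combined. The sketch below simplifies heavily — the full ER recursion is replaced by an activation-weighted average, there is a single antecedent attribute, and the referential points, rule weights, and consequents are all illustrative, not from the paper.

```python
def triangular_membership(x, ref_points):
    """Match input x to adjacent referential values (triangular membership);
    returns one matching degree per referential point."""
    degrees = [0.0] * len(ref_points)
    for i in range(len(ref_points) - 1):
        lo, hi = ref_points[i], ref_points[i + 1]
        if lo <= x <= hi:
            degrees[i] = (hi - x) / (hi - lo)
            degrees[i + 1] = 1.0 - degrees[i]
    return degrees

def brb_predict(x, ref_points, rule_weights, consequents):
    """Activate rules for one antecedent and combine their consequents by
    activation-weighted averaging (a stand-in for full ER fusion)."""
    match = triangular_membership(x, ref_points)
    acts = [theta * m for theta, m in zip(rule_weights, match)]
    total = sum(acts)
    weights = [a / total for a in acts]   # normalized activation weights
    return sum(w * c for w, c in zip(weights, consequents))

refs = [0.0, 5.0, 10.0]     # referential points (Low / Mid / High)
theta = [1.0, 1.0, 1.0]     # rule weights
cons = [0.0, 50.0, 100.0]   # numeric consequent of each rule
print(brb_predict(2.5, refs, theta, cons))  # 25.0
```

Interpretability constraints of the SBRB-I kind act on exactly these quantities, e.g. bounding how far optimization may move `refs` and `cons` from the experts' initial settings.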
With the development of Fintech, applying artificial intelligence (AI) technologies to the financial field is a general trend. However, some conditions are inappropriate; for instance, the AI model is often treated as a black box and cannot be interpreted. This paper studies the interpretability of AI models applied in the financial field. We analyze the reasons for the black box problem and explore effective solutions. We propose a new kind of automatic Regtech tool, LIMER, and put forward policy suggestions, thereby continuously promoting the development of Fintech to a higher level.
The accurate and reliable interpretation of regional land cover data is very important for natural resource monitoring and environmental assessment. At present, refined land cover data are mainly obtained by manual visual interpretation, which suffers from a heavy workload and inconsistent interpretation scales. Deep learning has greatly improved the automatic processing and analysis of remote sensing data. However, the accurate interpretation of feature information from massive datasets remains a difficult problem in wide-area regional land cover classification. To improve the efficiency of deep learning-based remote sensing image interpretation, we selected multisource remote sensing data, assessed the interpretability of the U-Net model on surface spatial scenes with different levels of complexity, and proposed a new method of stereoscopic accuracy verification (SAV) to evaluate the reliability of the classification results. The results show that classification accuracy is more highly correlated with terrain and landscape than with other factors related to the image data, such as platform and spatial resolution. As the complexity of surface spatial scenes increases, the accuracy of the classification results mainly shows a fluctuating declining trend. We also identify the distribution characteristics from the SAV evaluation results of different land cover types in each surface spatial scene. Based on these results, we consider the distinction between interpretability and reliability across diverse ground object types and design targeted classification strategies for different surface scenes, which can greatly improve classification efficiency. The key achievement of this study is to provide a theoretical basis for remote sensing information analysis and an accuracy evaluation method for regional land cover classification; the proposed method can help improve the likelihood that intelligent interpretation will replace manual acquisition.
Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches proposed to address multi-modal motion prediction are based on complex machine learning systems that have limited interpretability. Moreover, the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. The authors aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs. First, the evaluation metrics are comprehensively analysed, the main gaps in current benchmarks are identified, and a new holistic evaluation framework is proposed. Then, a method for the assessment of spatial and temporal robustness is introduced by simulating noise in the perception system. To enhance the interpretability of the outputs and generate more balanced results in the proposed evaluation framework, an intent prediction layer that can be attached to multi-modal motion prediction models is proposed. The effectiveness of this approach is assessed through a survey that explores different elements in the visualisation of the multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability.
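The metrics that current benchmarks build on score only the best of the K predicted modes, which is one of the gaps the authors analyse: a model can score well while most of its modes are implausible. A minimal sketch of the standard minADE/minFDE computation (the trajectory data is illustrative; the proposed holistic framework adds diversity and admissibility criteria on top of such metrics):

```python
import math

def ade(pred, gt):
    """Average displacement error between one trajectory and ground truth."""
    return sum(math.dist(p, g) for p, g in zip(pred, gt)) / len(gt)

def min_ade(predictions, gt):
    """minADE: error of the best of K predicted trajectories."""
    return min(ade(p, gt) for p in predictions)

def min_fde(predictions, gt):
    """minFDE: displacement at the final timestep, best mode only."""
    return min(math.dist(p[-1], gt[-1]) for p in predictions)

gt = [(0, 0), (1, 0), (2, 0)]
modes = [
    [(0, 0), (1, 1), (2, 2)],    # turning hypothesis, far from ground truth
    [(0, 0), (1, 0), (2, 0.3)],  # straight hypothesis, close to ground truth
]
print(round(min_ade(modes, gt), 3), round(min_fde(modes, gt), 3))  # 0.1 0.3
```

Because only the closest mode counts, the turning hypothesis contributes nothing to the score, however unrealistic it is — the motivation for richer evaluation criteria.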
Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients’ anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored in clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application to the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches for feature extraction and selection during feature engineering, and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we introduce the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
Predicting landslide displacement is of utmost practical importance, as landslides can pose serious threats to both human life and property. However, traditional methods suffer from random selection of the sliding window and seldom incorporate weather forecast data for displacement prediction, while a single structural model cannot handle input sequences of different lengths at the same time. To overcome these limitations, this study proposes a new approach that utilizes weather forecast data and incorporates the maximum information coefficient (MIC), long short-term memory networks (LSTM), and an attention mechanism to establish a teacher-student coupling model with a parallel structure for short-term landslide displacement prediction. Through MIC, a suitable input sequence length is selected for the LSTM model. To investigate the influence of rainfall on landslides during different seasons, a parallel teacher-student coupling model is developed that can learn sequential information from time series of different lengths. The teacher model learns sequence information from the rainfall intensity time series, incorporating reliable short-term weather forecast data from platforms such as the China Meteorological Administration (CMA) and Reliable Prognosis (https://rp5.ru) to improve the model’s expressive capability, and the student model learns sequence information from the other time series. An attention module is then designed to integrate the different sequence information into a context vector, representing a seasonal temporal attention mode. Finally, the predicted displacement is obtained through a linear layer. The proposed method demonstrates superior prediction accuracy, surpassing the support vector machine (SVM), LSTM, recurrent neural network (RNN), temporal convolutional network (TCN), and LSTM-Attention models. It achieves a mean absolute error (MAE) of 0.072 mm, a root mean square error (RMSE) of 0.096 mm, and a Pearson correlation coefficient (PCC) of 0.85. Additionally, it exhibits enhanced prediction stability and interpretability, rendering it a valuable tool for landslide disaster prevention and mitigation.
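The attention step described above derives a context vector by scoring feature vectors against a query and taking a softmax-weighted sum. A generic dot-product attention sketch; the two feature vectors standing in for the teacher (rainfall) and student (displacement) hidden states are hypothetical, and a trained model would learn the query.

```python
import math

def softmax(scores):
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_context(query, features):
    """Score each feature vector against the query (dot product),
    normalise with softmax, return weighted context vector + weights."""
    scores = [sum(q * f for q, f in zip(query, feat)) for feat in features]
    weights = softmax(scores)
    dim = len(features[0])
    ctx = [sum(w * feat[d] for w, feat in zip(weights, features))
           for d in range(dim)]
    return ctx, weights

# hypothetical hidden states from the teacher and student branches
feats = [[1.0, 0.0], [0.0, 1.0]]
ctx, w = attention_context([2.0, 0.0], feats)
print([round(v, 3) for v in ctx], [round(v, 3) for v in w])
# [0.881, 0.119] [0.881, 0.119]
```

Inspecting the weights is what gives the model its seasonal interpretability: in a rainy-season sample the teacher branch should receive the larger weight.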
In existing landslide susceptibility prediction (LSP) models, the influence of random errors in landslide conditioning factors on LSP is not considered; instead, the original conditioning factors are directly taken as model inputs, which brings uncertainty to the LSP results. This study aims to reveal how different proportions of random error in the conditioning factors influence LSP uncertainty, and further explores a method that can effectively reduce such random errors. The original conditioning factors are first used to construct original factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Secondly, low-pass filter-based LSP models are constructed by eliminating the random errors using a low-pass filter method. Thirdly, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the study case, and three typical machine learning models, i.e. multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed and the results show that: (1) The low-pass filter can effectively reduce the random errors in the conditioning factors and thereby decrease the LSP uncertainties. (2) As the proportion of random error increases from 5% to 20%, the LSP uncertainty increases continuously. (3) The original factors-based models are feasible for LSP in the absence of more accurate conditioning factors. (4) The influence of the two uncertainty sources, the choice of machine learning model and the proportion of random error, on LSP modeling is large and essentially comparable. (5) The Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random error in the conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
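Finding (1) can be reproduced in miniature: add proportional random error to a smooth factor, apply a low-pass filter, and check that the error with respect to the clean signal shrinks. The moving-average filter below is one possible low-pass design, chosen for brevity; the paper's exact filter is not reproduced here, and the trend data is synthetic.

```python
import random

def low_pass(series, window=5):
    """Simple moving-average low-pass filter (shrinking windows at the edges)."""
    half = window // 2
    out = []
    for i in range(len(series)):
        seg = series[max(0, i - half): i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

random.seed(0)
true = [i / 10 for i in range(100)]                           # smooth "factor"
noisy = [v * (1 + random.uniform(-0.2, 0.2)) for v in true]   # +/-20% random error
filtered = low_pass(noisy)

mae = lambda a, b: sum(abs(x - y) for x, y in zip(a, b)) / len(a)
print(mae(noisy, true) > mae(filtered, true))  # filtering reduces the error
```

Averaging over a window suppresses zero-mean noise roughly by the square root of the window size while leaving the slowly varying trend intact, which is why the filtered factors yield lower LSP uncertainty.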
The aperture of natural rock fractures significantly affects the deformation and strength properties of rock masses, as well as the hydrodynamic properties of fractured rock masses. Conventional measurement methods are inadequate for collecting data on high-steep rock slopes in complex mountainous regions. This study establishes a high-resolution three-dimensional model of a rock slope using unmanned aerial vehicle (UAV) multi-angle nap-of-the-object photogrammetry to obtain the edge feature points of fractures. Fracture opening morphology is characterized using coordinate projection and transformation. The fracture central axis is determined using vertical measuring lines, allowing the aperture to be interpreted adaptively to the fracture shape. The feasibility and reliability of the new method are verified at a railway construction site in southeast Tibet, China. The study shows that fracture aperture has significant interval and size effects. The optimal sampling length for fractures is approximately 0.5-1 m, and the optimal aperture interpretation results are achieved when the measuring line spacing is 1% of the sampling length. Tensile fractures in the study area generally have larger apertures than shear fractures, and their tendency to increase with slope height is also greater than that of shear fractures. The aperture of tensile fractures is generally positively correlated with their trace length, while the correlation between the aperture of shear fractures and their trace length appears to be weak. Fractures of different orientations exhibit certain differences in their aperture distributions, but generally follow normal, log-normal, and gamma distributions. This study provides essential data support for rock and slope stability evaluation, which is of significant practical importance.
This paper proposes a new approach for online power system transient security assessment (TSA) and preventive control based on XGBoost and DC optimal power flow (DCOPF). The novelty of this proposal is that it applies XGBoost together with a data selection method based on the 1-norm distance in local feature importance evaluation, which provides a degree of model interpretability. The SMOTE+ENN method is adopted for data rebalancing. The contingency-oriented XGBoost model is trained with databases generated by time-domain simulations to represent the transient security constraint in the DCOPF model, which offers relatively fast calculation. The transient security constrained generation rescheduling is implemented with the differential evolution algorithm, which is utilized to optimize the rescheduled generation in the preventive control. The feasibility and effectiveness of the proposed approach are demonstrated on an IEEE 39-bus test system and a 500-bus operational model for South Carolina, USA.
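Differential evolution, used here for generation rescheduling, mutates each candidate with the scaled difference of two others and keeps the trial vector if it improves. A minimal DE/rand/1/bin sketch on a toy objective — the paper's security-constrained rescheduling objective is not reproduced, and the hyper-parameters are conventional defaults, not the authors' settings:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.6, CR=0.9,
                           gens=100, seed=1):
    """Minimal DE/rand/1/bin: mutate with scaled difference vectors,
    binomial crossover, greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)   # ensure at least one mutant gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    lo, hi = bounds[j]
                    trial.append(min(max(a[j] + F * (b[j] - c[j]), lo), hi))
                else:
                    trial.append(pop[i][j])
            tc = f(trial)
            if tc <= cost[i]:             # greedy replacement
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda k: cost[k])
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)  # toy stand-in for rescheduling cost
x, fx = differential_evolution(sphere, [(-5, 5)] * 3)
print(fx < 1e-3)
```

In the TSA setting, `f` would combine rescheduling cost with penalties from the XGBoost-represented transient security constraint.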
The prediction of structural performance plays a significant role in damage assessment of glass fiber reinforced polymer (GFRP) elastic gridshell structures. Machine learning (ML) approaches are implemented in this study to predict the maximum stress and displacement of GFRP elastic gridshell structures. Several ML algorithms are implemented, including linear regression (LR), ridge regression (RR), support vector regression (SVR), K-nearest neighbors (KNN), decision tree (DT), random forest (RF), adaptive boosting (AdaBoost), extreme gradient boosting (XGBoost), category boosting (CatBoost), and light gradient boosting machine (LightGBM). The output features of structural performance considered are the maximum stress, f1(x), and the maximum displacement-to-self-weight ratio, f2(x). A comparative study is conducted and the CatBoost model presents the highest prediction accuracy. Finally, interpretable ML approaches, including Shapley additive explanations (SHAP), partial dependence plots (PDP), and accumulated local effects (ALE), are applied to explain the predictions. SHAP is employed to describe the importance of each variable to structural performance both locally and globally. The results of the sensitivity analysis (SA), the feature importance of the CatBoost model, and the SHAP approach indicate the same parameters as the most significant variables for f1(x) and f2(x).
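Of the interpretability tools listed, partial dependence is the simplest to state: fix one feature at each grid value, average the model's predictions over the rest of the data, and plot the averages. A sketch with a hypothetical linear-plus-quadratic surrogate standing in for the trained stress predictor (the function, features, and grid are illustrative):

```python
def partial_dependence(model, X, feature, grid):
    """1-D partial dependence: for each grid value, overwrite the chosen
    feature in every row and average the model's predictions."""
    pd = []
    for g in grid:
        preds = []
        for row in X:
            row = list(row)       # copy so the dataset is not mutated
            row[feature] = g
            preds.append(model(row))
        pd.append(sum(preds) / len(preds))
    return pd

# hypothetical surrogate: linear in feature 0, quadratic in feature 1
model = lambda r: 2.0 * r[0] + r[1] ** 2
X = [[0.5, -1.0], [1.5, 0.0], [2.5, 1.0]]

pd0 = partial_dependence(model, X, 0, [0.0, 1.0, 2.0])
print([round(v, 3) for v in pd0])  # [0.667, 2.667, 4.667]
```

The constant slope of 2 in the output recovers the surrogate's linear effect of feature 0; ALE refines this averaging to avoid evaluating the model at unrealistic feature combinations when features are correlated.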
Gas chromatography-mass spectrometry (GC-MS) is an extremely important analytical technique that is widely used in organic geochemistry. It is the only approach to capture the biomarker features of organic matter, and it provides key evidence for oil-source correlation and thermal maturity determination. However, the conventional way of processing and interpreting the mass chromatogram is both time-consuming and labor-intensive, which increases research cost and restrains extensive application of the method. To overcome this limitation, a correlation model is developed based on a convolutional neural network (CNN) to link the mass chromatogram and biomarker features of samples from the Triassic Yanchang Formation, Ordos Basin, China, so that the mass chromatogram can be interpreted automatically. This research first performs dimensionality reduction on 15 biomarker parameters via factor analysis and then quantifies the biomarker features using two indexes (i.e. MI and PMI) that represent organic matter thermal maturity and parent material type, respectively. Subsequently, training, interpretation, and validation are performed multiple times using different CNN models to optimize the model structure and hyper-parameter settings, with the mass chromatogram used as the input and the obtained MI and PMI values used for supervision (labels). The optimized model presents high accuracy in automatically interpreting the mass chromatogram, with R2 values typically above 0.85 and 0.80 for the thermal maturity and parent material interpretation results, respectively. The significance of this research is twofold: (i) it develops an efficient technique for geochemical research; (ii) more importantly, it demonstrates the potential of artificial intelligence in organic geochemistry and provides vital references for future related studies.
The potential for reducing greenhouse gas (GHG) emissions and energy consumption in wastewater treatment can be realized through intelligent control, with machine learning (ML) and multimodality emerging as a promising solution. Here, we introduce an ML technique based on multimodal strategies, focusing specifically on intelligent aeration control in wastewater treatment plants (WWTPs). The generalization of the multimodal strategy is demonstrated on eight ML models. The results demonstrate that this multimodal strategy significantly enhances model performance indicators for ML in environmental science and the efficiency of aeration control, exhibiting exceptional performance and interpretability. Integrating random forest with visual models achieves the highest accuracy in forecasting aeration quantity among the multimodal models, with a mean absolute percentage error of 4.4% and a coefficient of determination of 0.948. Practical testing in a full-scale plant reveals that the multimodal model can reduce operating costs by 19.8% compared to traditional fuzzy control methods. The potential application of these strategies in critical water science domains is discussed. To foster accessibility and promote widespread adoption, the multimodal ML models are freely available on GitHub, thereby eliminating technical barriers and encouraging the application of artificial intelligence in urban wastewater treatment.
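The two reported indicators are straightforward to reproduce from predictions and measurements. A short sketch of MAPE and the coefficient of determination on illustrative aeration figures (the values are not from the paper):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - p) / a)
                     for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

y = [100.0, 200.0, 300.0]      # illustrative measured aeration quantities
yhat = [110.0, 190.0, 300.0]   # illustrative model forecasts
print(round(mape(y, yhat), 2), round(r_squared(y, yhat), 4))  # 5.0 0.99
```

Note that MAPE is undefined when a measured value is zero, which matters for intermittently aerated processes.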
Hyperspectral imagery encompasses spectral and spatial dimensions, reflecting the material properties of objects. Its applications prove crucial in search and rescue, concealed target identification, and crop growth analysis. Clustering is an important method of hyperspectral analysis. The vast data volume of hyperspectral imagery, coupled with redundant information, poses significant challenges in swiftly and accurately extracting features for subsequent analysis. Current hyperspectral feature clustering methods, which are mostly studied from the spatial or spectral perspective, lack strong interpretability, resulting in poor comprehensibility of the algorithms. This research therefore introduces a feature clustering algorithm for hyperspectral imagery from an interpretability perspective. It commences with a simulated perception process, proposing an interpretable band selection algorithm to reduce data dimensions. Following this, a multi-dimensional clustering algorithm, rooted in fuzzy and kernel clustering, is developed to highlight intra-class similarities and inter-class differences. An optimized P system is then introduced to enhance computational efficiency. This system coordinates all cells within a mapping space to compute optimal cluster centers, facilitating parallel computation. This approach diminishes sensitivity to initial cluster centers and augments global search capabilities, thus preventing entrapment in local minima and enhancing clustering performance. Experiments were conducted on 300 datasets comprising both real and simulated data. The results show that the average accuracy (ACC) of the proposed algorithm is 0.86 and the combination measure (CM) is 0.81.
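The fuzzy-clustering core of such a method alternates two updates: soft memberships given the centres, then centres given the memberships. A plain fuzzy c-means sketch on scalar "pixels" — the paper's kernel mapping and P-system scheduling are omitted, and the data and initial centres are illustrative:

```python
def fuzzy_cmeans(points, centers, m=2.0, iters=20):
    """Plain fuzzy c-means: alternate membership and centre updates."""
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_i / d_j)^(2/(m-1))
        U = []
        for x in points:
            d = [abs(x - c) + 1e-12 for c in centers]  # avoid div-by-zero
            U.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # centre update: mean weighted by memberships raised to m
        centers = [sum((U[k][i] ** m) * points[k] for k in range(len(points)))
                   / sum(U[k][i] ** m for k in range(len(points)))
                   for i in range(len(centers))]
    return centers

pts = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]       # two well-separated groups
print(sorted(round(c, 1) for c in fuzzy_cmeans(pts, [0.0, 5.0])))  # [1.0, 9.0]
```

The sensitivity to the initial `centers` visible here is exactly what the paper's P-system parallel search is designed to mitigate.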
On September 5, 2022, a magnitude Ms 6.8 earthquake occurred along the Moxi fault in the southern part of the Xianshuihe fault zone, located in the southeastern margin of the Tibetan Plateau, resulting in severe damage and substantial economic loss. In this study, we established a database of the coseismic landslides triggered by the Luding Ms 6.8 earthquake, which includes 4794 landslides with a total area of 46.79 km^(2). The coseismic landslides primarily consisted of medium and small-sized landslides characterized by shallow surface sliding. Some exhibited high-position initiation, resulting in the obstruction or partial obstruction of rivers and the formation of dammed lakes. Our research found that the coseismic landslides were predominantly observed on slopes ranging from 30° to 50°, at elevations between 1000 m and 2500 m, with slope aspects varying from 90° to 180°. Landslides were also highly developed in granitic bodies that had experienced structural fracturing and strong-to-moderate weathering. Coseismic landslides concentrated within a 6 km range on both sides of the Xianshuihe and Daduhe fault zones. The area and number of coseismic landslides exhibited a negative correlation with the distance to fault lines, road networks, and river systems, as they were influenced by fault activity, road excavation, and river erosion. The coseismic landslides were mainly distributed in the southeastern region of the epicenter, exhibiting relatively concentrated patterns within the IX-degree zones, such as Moxi Town, the Wandong River basin, and the area from Detuo Town to Wanggangping Township. Our findings provide important data on the coseismic landslides triggered by the Luding Ms 6.8 earthquake and reveal the spatial distribution patterns of these landslides. They can serve as important references for risk mitigation, reconstruction planning, and regional earthquake disaster research in the earthquake-affected area.
The periphery of the Qinghai-Tibet Plateau is renowned for its susceptibility to landslides. However, the northwestern margin of this region, characterised by limited human activity and challenging transportation, remains insufficiently explored with respect to landslide occurrence and dispersion. With the planning and construction of the Xinjiang-Tibet Railway, a comprehensive investigation into disastrous landslides in this area is essential for effective disaster preparedness and mitigation strategies. Using a human-computer interaction interpretation approach, the authors established a landslide database encompassing 13,003 landslides, collectively spanning an area of 3351.24 km^(2) (36°N-40°N, 73°E-78°E). The database incorporates diverse topographical and environmental parameters, including regional elevation, slope angle, slope aspect, distance to faults, distance to roads, distance to rivers, annual precipitation, and stratum. The statistical characteristics of the number and area of landslides, landslide number density (LND), and landslide area percentage (LAP) are analyzed. The authors found a predominant concentration of landslide origins in high-slope-angle regions, with the highest incidence observed in intervals characterised by average slopes of 20° to 30°, maximum slope angles above 80°, and orientations towards the north (N), northeast (NE), and southwest (SW). Additionally, elevations above 4.5 km, distances to rivers below 1 km, and rainfall of 20-30 mm and 30-40 mm emerge as particularly susceptible to landslide development. The study area's geological composition primarily comprises Mesozoic and Upper Paleozoic outcrops. Both faults and human engineering activities have differing degrees of influence on landslide development. Furthermore, the significance of the landslide database, the relationship between landslide distribution and environmental factors, and the geometric and morphological characteristics of landslides are discussed. The landslide H/L ratios in the study area are mainly concentrated between 0.4 and 0.64, which indicates that landslide mobility in the region is relatively low; the authors speculate that landslides in this region were more likely triggered by earthquakes or are located in meizoseismal areas.
Model checking is an automated formal verification method to verify whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, there is still a lack of quantitative and uncertain property verification for these systems. In uncertain environments, agents must make judicious decisions based on subjective epistemic states. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities and proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas by using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms. Additionally, we further illustrate the practical application of our approach through an example of a train control system.
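As a concrete illustration of fuzzy model checking over a fuzzy Kripke structure, the sketch below evaluates the temporal operator EX under the standard max-min (Gödel) semantics, where transitions and atomic propositions take truth degrees in [0, 1]. The two-state structure and its values are illustrative only, not taken from the paper.

```python
# Hypothetical sketch: evaluating the fuzzy CTL operator "EX phi" on a fuzzy
# Kripke structure. transition[s][t] is the truth degree of the edge s -> t,
# phi[s] is the truth degree of phi at state s. EX phi at s is the maximum
# over successors t of min(transition[s][t], phi[t]).

def fuzzy_ex(transition, phi):
    states = list(phi)
    return {
        s: max(min(transition[s][t], phi[t]) for t in states)
        for s in states
    }

# Tiny two-state example (values are illustrative).
T = {"s0": {"s0": 0.2, "s1": 0.9}, "s1": {"s0": 0.5, "s1": 0.0}}
phi = {"s0": 0.7, "s1": 0.6}
print(fuzzy_ex(T, phi))  # {'s0': 0.6, 's1': 0.5}
```

Nested fixed-point computations of the same shape yield the other FCTL operators, which is why the FCTLK-to-FCTL reduction described in the abstract incurs no extra model checking machinery.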
Recently, exploration breakthroughs have been made in the Lower Cretaceous sandstone reservoirs in the Doseo Basin, but identifying reservoir fluid properties is difficult due to variable reservoir lithology, complex oil-water contacts, and faint responses of the oil zone, which lowers the accuracy of reservoir fluid property identification with conventional mud logging and wireline logging techniques. Applying geochemical logging, fluorescent logging, mud logging and cutting logging technology, in combination with formation test data, this paper distinguishes the crude oil types, analyzes the logging response characteristics of oil zones after water washing, and establishes interpretation charts and parameter standards for reservoir fluid properties. The crude oil can be divided into two types, namely viscous-heavy and thin-light, based on the total hydrocarbon content and component concentration tested by mud logging, and on features of the pyrolysis gas chromatogram and fluorescence spectroscopy. The general characteristics of oil layers that have experienced water washing include a decrease of total hydrocarbon content and component concentration from mud logging, a decrease of S1 and PS values from geochemical logging, a decrease of hydrocarbon abundance and the absence of some light components in the pyrolysis gas chromatogram, and a decrease of fluorescence area and intensity from fluorescence logging. According to crude oil types, cross plots of S1 versus peak-baseline ratio, and cross plots of rock wettability versus fluorescence area ratio, are drawn and used to interpret reservoir fluid properties. Meanwhile, standards for reservoir fluid parameters are established by combining the PS parameter with the parameters in the above charts, and comprehensive multi-parameter correlation in both vertical and horizontal directions is also performed to interpret reservoir fluid properties. The application in the Doseo Basin achieved great success, improving the interpretation ability for fluid properties in reservoirs with complex oil-water contacts, and also providing a technical reference for the efficient exploration and development of similar reservoirs.
Abstract: Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates an excessive number of irrelevant or ambiguous rules. Therefore, post-processing is crucial not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both techniques, our aim is to enhance the quality of analyzing generated rules. This includes removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits specific goals for retailers, enabling them to gain a better understanding of customer behavior based on factual transactions in the target market. We applied THAPE to a real dataset as a case study to demonstrate its effectiveness. Through this application, we successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 rules that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
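One common post-processing step of the kind the abstract alludes to is pruning redundant rules: a rule A → c makes a more specific rule A ∪ {x} → c redundant when the specific rule's confidence is not higher. The sketch below is a generic illustration of that idea, not THAPE's actual algorithm; rule names and thresholds are made up.

```python
# Hedged sketch of redundant-rule pruning in association rule post-processing.
# rules: list of (antecedent frozenset, consequent item, confidence).

def prune_redundant(rules):
    kept = []
    # Process simpler (shorter-antecedent) rules first.
    for ant, cons, conf in sorted(rules, key=lambda r: len(r[0])):
        if any(k_ant < ant and k_cons == cons and k_conf >= conf
               for k_ant, k_cons, k_conf in kept):
            continue  # a strictly simpler kept rule already explains this
        kept.append((ant, cons, conf))
    return kept

rules = [
    (frozenset({"bread"}), "butter", 0.80),
    (frozenset({"bread", "milk"}), "butter", 0.78),  # redundant: no gain
    (frozenset({"bread", "jam"}), "butter", 0.95),   # genuinely stronger
]
print(prune_redundant(rules))  # keeps 2 of the 3 rules
```

Combined with interestingness filters such as lift or conviction, this kind of pruning is how a raw set of thousands of rules (11,265 in the case study) is reduced to a business-relevant subset.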
Funding: Supported by the Natural Science Foundation of China (No. 61833016), the Shaanxi Outstanding Youth Science Foundation (No. 2020JC-34), and the Shaanxi Science and Technology Innovation Team (No. 2022TD-24).
Abstract: Fault diagnosis plays an irreplaceable role in the normal operation of equipment. A fault diagnosis model is often required to be interpretable to increase the trust between humans and the model. Due to its understandable knowledge expression and transparent reasoning process, the belief rule base (BRB) has extensive applications as an interpretable expert system in fault diagnosis. Optimization is an effective means to weaken the subjectivity of experts in BRB, but the interpretability of BRB may be weakened in the process. Hence, to obtain a credible result, the weakening factors of interpretability in the BRB-based fault diagnosis model are first analyzed; they manifest as deviation from the initial judgement of experts and over-optimization of parameters. For these two factors, three indexes are proposed, namely the consistency index of rules, the consistency index of the rule base, and the over-optimization index, to measure the interpretability of the optimized model. Considering both the accuracy and interpretability of a model, an improved coordinate ascent (I-CA) algorithm is proposed to fine-tune the parameters of the fault diagnosis model based on BRB. In I-CA, an algorithm combining the advance-and-retreat method and the golden section method is employed as the one-dimensional search algorithm. Furthermore, a random optimization sequence and an adaptive step size are proposed to improve the accuracy of the model. Finally, a case study of fault diagnosis in aerospace relays based on BRB is carried out to verify the effectiveness of the proposed method.
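The core mechanic the abstract describes, coordinate ascent with a golden-section one-dimensional search, can be sketched as follows. The objective, parameter bounds, and tolerances here are illustrative; the real I-CA optimizes BRB parameters under interpretability constraints and adds the advance-and-retreat bracketing, random coordinate order, and adaptive step size.

```python
# Hedged sketch: coordinate ascent where each one-dimensional sub-problem is
# solved by golden-section search on a fixed bracket.

import math

def golden_section_max(f, lo, hi, tol=1e-6):
    """Maximize a unimodal f on [lo, hi] via golden-section search."""
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) > f(d):        # maximum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                  # maximum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

def coordinate_ascent(f, x, bounds, sweeps=20):
    """Optimize one coordinate at a time, each with a 1-D golden search."""
    x = list(x)
    for _ in range(sweeps):
        for i, (lo, hi) in enumerate(bounds):
            x[i] = golden_section_max(
                lambda v: f(x[:i] + [v] + x[i + 1:]), lo, hi)
    return x

# Toy concave objective with its maximum at (1, -2).
f = lambda p: -(p[0] - 1) ** 2 - (p[1] + 2) ** 2
best = coordinate_ascent(f, [0.0, 0.0], [(-5, 5), (-5, 5)])
print(best)  # close to [1.0, -2.0]
```

Because the golden-section step only needs function evaluations, this scheme suits BRB models whose output is computed by evidential reasoning rather than by a differentiable expression.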
Funding: The authors would like to acknowledge the support of the China Scholarship Council, the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" Program, and the Research Foundation – Flanders (FWO) under the ROBUSTIFY research grant No. S006119N.
Abstract: Early fault diagnosis of bearings is crucial for ensuring safe and reliable operations. Convolutional neural networks (CNNs) have achieved significant breakthroughs in machinery fault diagnosis. However, complex and varying working conditions can lead to inter-class similarity and intra-class variability in datasets, making it more challenging for CNNs to learn discriminative features. Furthermore, CNNs are often considered "black boxes" and lack sufficient interpretability in the fault diagnosis field. To address these issues, this paper introduces a residual mixed domain attention CNN method, referred to as RMA-CNN. This method comprises multiple residual mixed domain attention modules (RMAMs), each employing an attention mechanism to emphasize meaningful features in both the time and channel domains. This significantly enhances the network's ability to learn fault-related features. Moreover, we conduct an in-depth analysis of the inherent feature learning mechanism of the attention module RMAM to improve the interpretability of CNNs in fault diagnosis applications. Experiments conducted on two datasets, a high-speed aeronautical bearing dataset and a motor bearing dataset, demonstrate that RMA-CNN achieves remarkable results in diagnostic tasks.
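The channel-domain half of the attention idea can be illustrated with a minimal squeeze-and-excitation style gate: each channel of a 1-D feature map is summarized by global average pooling and re-weighted through a sigmoid, so channels carrying more fault-related energy are emphasized. This is a generic sketch with made-up weights, not the paper's RMAM implementation.

```python
# Illustrative channel-attention gate (squeeze-and-excitation style) in pure
# Python. In a real network the gate weights are learned; here they are fixed.

import math

def channel_attention(feature_map, gate_weights):
    """feature_map: list of channels, each a list of time steps.
    gate_weights: one scalar per channel (a stand-in for learned weights)."""
    pooled = [sum(ch) / len(ch) for ch in feature_map]       # squeeze
    gates = [1 / (1 + math.exp(-w * p))                      # excite
             for w, p in zip(gate_weights, pooled)]
    return [[g * v for v in ch] for g, ch in zip(gates, feature_map)]

fmap = [[1.0, 3.0], [0.1, 0.1]]          # channel 0 carries more energy
out = channel_attention(fmap, [1.0, 1.0])
```

A time-domain attention branch works analogously over positions instead of channels; RMAM's "mixed domain" attention combines both, which is also what makes the learned weights inspectable for interpretability analysis.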
Funding: This work was supported in part by the Postdoctoral Science Foundation of China under Grant No. 2020M683736; in part by the Natural Science Foundation of Heilongjiang Province of China under Grant No. LH2021F038; in part by the innovation practice project of college students in Heilongjiang Province under Grant Nos. 202010231009, 202110231024, and 202110231155; in part by the basic scientific research business expenses scientific research projects of provincial universities in Heilongjiang Province under Grant No. XJGZ2021001; and in part by the Education and Teaching Reform Program of 2021 in Heilongjiang Province under Grant No. SJGY20210457.
Abstract: Prediction systems are an important aspect of intelligent decisions. In engineering practice, the complex system structure and the external environment introduce many uncertain factors into the model, which influence its modeling accuracy. The belief rule base (BRB) can implement nonlinear modeling and express a variety of uncertain information, including fuzziness, ignorance, randomness, etc. However, the BRB system also has two main problems: firstly, modeling methods based on expert knowledge make it difficult to guarantee the model's accuracy; secondly, interpretability is not considered in the optimization process of current research, resulting in the destruction of the interpretability of BRB. To balance the accuracy and interpretability of the model, a self-growth belief rule base with interpretability constraints (SBRB-I) is proposed. The reasoning process of the SBRB-I model is based on the evidence reasoning (ER) approach. Moreover, the self-growth learning strategy ensures effective cooperation between the data-driven model and the expert system. A case study showed that the accuracy and interpretability of the model could be guaranteed. The SBRB-I model has good application prospects in prediction systems.
Abstract: With the development of Fintech, applying artificial intelligence (AI) technologies to the financial field is a general trend. However, there are some problematic aspects; for instance, the AI model is often treated as a black box and cannot be interpreted. This paper studies AI model interpretability when the models are applied in the financial field. We analyze the causes of the black-box problem and explore effective solutions. We propose a new kind of automatic Regtech tool, LIMER, and put forward policy suggestions, thereby continuously promoting the development of Fintech to a higher level.
Funding: Under the auspices of the National Natural Science Foundation of China (No. 41971352) and the Key Research and Development Project of Shaanxi Province (No. 2022ZDLSF06-01).
Abstract: The accurate and reliable interpretation of regional land cover data is very important for natural resource monitoring and environmental assessment. At present, refined land cover data are mainly obtained by manual visual interpretation, which suffers from a heavy workload and inconsistent interpretation scales. Deep learning has greatly improved the automatic processing and analysis of remote sensing data. However, the accurate interpretation of feature information from massive datasets remains a difficult problem in wide regional land cover classification. To improve the efficiency of deep learning-based remote sensing image interpretation, we selected multisource remote sensing data, assessed the interpretability of the U-Net model on surface spatial scenes with different levels of complexity, and proposed a new method of stereoscopic accuracy verification (SAV) to evaluate the reliability of the classification results. The results show that classification accuracy is more highly correlated with terrain and landscape than with other factors related to image data, such as platform and spatial resolution. As the complexity of surface spatial scenes increases, the accuracy of the classification results mainly shows a fluctuating declining trend. We also find distribution characteristics in the SAV evaluation results of different land cover types in each surface spatial scene. Based on the results observed in this study, we consider the distinction of interpretability and reliability in diverse ground object types and design targeted classification strategies for different surface scenes, which can greatly improve classification efficiency. The key achievement of this study is to provide a theoretical basis for remote sensing information analysis and an accuracy evaluation method for regional land cover classification, and the proposed method can help improve the likelihood that intelligent interpretation can replace manual acquisition.
Funding: European Commission, Joint Research Center, Grant/Award Number: HUMAINT; Ministerio de Ciencia e Innovación, Grant/Award Number: PID2020-114924RB-I00; Comunidad de Madrid, Grant/Award Number: S2018/EMT-4362 SEGVAUTO 4.0-CM.
Abstract: Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches proposed to address multi-modal motion prediction are based on complex machine learning systems that have limited interpretability. Moreover, the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. The authors aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs. First, the evaluation metrics are comprehensively analysed, the main gaps of current benchmarks are identified, and a new holistic evaluation framework is proposed. Then, a method for the assessment of spatial and temporal robustness is introduced by simulating noise in the perception system. To enhance the interpretability of the outputs and generate more balanced results in the proposed evaluation framework, an intent prediction layer that can be attached to multi-modal motion prediction models is proposed. The effectiveness of this approach is assessed through a survey that explores different elements in the visualisation of the multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability.
Funding: Supported in part by the National Natural Science Foundation of China (82072019); the Shenzhen Basic Research Program (JCYJ20210324130209023); the Shenzhen-Hong Kong-Macao S&T Program (Category C) (SGDX20201103095002019); the Mainland-Hong Kong Joint Funding Scheme (MHKJFS) (MHP/005/20); the Project of Strategic Importance Fund (P0035421) and the Projects of RISA (P0043001) from the Hong Kong Polytechnic University; the Natural Science Foundation of Jiangsu Province (BK20201441); the Provincial and Ministry Co-constructed Project of Henan Province Medical Science and Technology Research (SBGJ202103038, SBGJ202102056); the Henan Province Key R&D and Promotion Project (Science and Technology Research) (222102310015); the Natural Science Foundation of Henan Province (222300420575); and the Henan Province Science and Technology Research (222102310322).
Abstract: Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application in the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches, for feature extraction and selection during feature engineering and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we introduce the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
Funding: This research work is supported by the Sichuan Science and Technology Program (Grant No. 2022YFS0586), the National Key R&D Program of China (Grant No. 2019YFC1509301), and the National Natural Science Foundation of China (Grant No. 61976046).
Abstract: Predicting the displacement of a landslide is of utmost practical importance, as landslides can pose serious threats to both human life and property. However, traditional methods have the limitation of random selection in sliding window selection and seldom incorporate weather forecast data for displacement prediction, while a single structural model cannot handle input sequences of different lengths at the same time. To overcome these limitations, this study proposes a new approach that utilizes weather forecast data and incorporates the maximum information coefficient (MIC), long short-term memory network (LSTM), and an attention mechanism to establish a teacher-student coupling model with a parallel structure for short-term landslide displacement prediction. Through MIC, a suitable input sequence length is selected for the LSTM model. To investigate the influence of rainfall on landslides during different seasons, a parallel teacher-student coupling model is developed that is able to learn sequential information from various time series of different lengths. The teacher model learns sequence information from the rainfall intensity time series while incorporating reliable short-term weather forecast data from platforms such as the China Meteorological Administration (CMA) and Reliable Prognosis (https://rp5.ru) to improve the model's expression capability, and the student model learns sequence information from the other time series. An attention module is then designed to integrate the different sequence information to derive a context vector, representing a seasonal temporal attention mode. Finally, the predicted displacement is obtained through a linear layer. The proposed method demonstrates superior prediction accuracy, surpassing that of the support vector machine (SVM), LSTM, recurrent neural network (RNN), temporal convolutional network (TCN), and LSTM-Attention models. It achieves a mean absolute error (MAE) of 0.072 mm, a root mean square error (RMSE) of 0.096 mm, and a Pearson correlation coefficient (PCC) of 0.85. Additionally, it exhibits enhanced prediction stability and interpretability, rendering it an indispensable tool for landslide disaster prevention and mitigation.
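The attention step that fuses the teacher and student branches can be sketched with plain dot-product attention: each branch's hidden summary is scored against a query, the scores pass through a softmax, and the weighted sum is the context vector fed to the final linear layer. The vectors and dimensions below are toy stand-ins, not the paper's learned representations.

```python
# Hedged sketch of the attention fusion: weights = softmax(q . h_i),
# context = sum_i weights_i * h_i.

import math

def attention_context(query, hiddens):
    scores = [sum(q * h for q, h in zip(query, hid)) for hid in hiddens]
    m = max(scores)                       # subtract max for numerical safety
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(hiddens[0])
    context = [sum(w * hid[d] for w, hid in zip(weights, hiddens))
               for d in range(dim)]
    return weights, context

teacher_h = [1.0, 0.0]   # e.g. summary of the rainfall (teacher) branch
student_h = [0.0, 1.0]   # e.g. summary of the displacement (student) branch
weights, context = attention_context([2.0, 1.0], [teacher_h, student_h])
```

Inspecting the softmax weights across seasons is what gives the "seasonal temporal attention mode" its interpretability: in rainy seasons the teacher branch would be expected to dominate the context vector.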
Funding: This work is funded by the National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the National Science Fund for Distinguished Young Scholars of China (Grant No. 52222905).
Abstract: In existing landslide susceptibility prediction (LSP) models, the influences of random errors in landslide conditioning factors on LSP are not considered; instead, the original conditioning factors are directly taken as the model inputs, which brings uncertainties to LSP results. This study aims to reveal how different proportions of random error in conditioning factors influence LSP uncertainties, and further to explore a method which can effectively reduce the random errors in conditioning factors. The original conditioning factors are first used to construct original factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors for constructing the corresponding errors-based LSP models. Secondly, low-pass filter-based LSP models are constructed by eliminating the random errors using a low-pass filter method. Thirdly, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e. multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed and the results show that: (1) The low-pass filter can effectively reduce the random errors in conditioning factors and thereby decrease the LSP uncertainties. (2) As the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously. (3) The original factors-based models are feasible for LSP in the absence of more accurate conditioning factors. (4) The influence degrees of the two uncertainty issues, machine learning models and different proportions of random errors, on LSP modeling are large and basically the same. (5) The Shapley values effectively explain the internal mechanism by which machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
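The experiment's core idea can be demonstrated in miniature: add a proportional random error to a smooth conditioning-factor series, then apply a simple low-pass filter and compare deviations from the true signal. A moving average stands in here for the paper's low-pass filter; the signal, noise model, and window size are illustrative assumptions.

```python
# Sketch: 20% multiplicative random error raises the RMSE of a conditioning
# factor; a moving-average low-pass filter pulls it back down.

import math
import random

def moving_average(xs, window=5):
    half = window // 2
    return [sum(xs[max(0, i - half):i + half + 1]) /
            len(xs[max(0, i - half):i + half + 1])
            for i in range(len(xs))]

random.seed(0)
true = [math.sin(i / 10) for i in range(200)]                # smooth factor
noisy = [v * (1 + random.uniform(-0.2, 0.2)) for v in true]  # 20% random error
filtered = moving_average(noisy)

rmse = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
print(rmse(noisy, true), rmse(filtered, true))  # filtering reduces the error
```

The same comparison, run through an LSP model instead of an RMSE, is what separates the errors-based and low-pass filter-based models in the study.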
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 42177139 and 41941017) and the Natural Science Foundation Project of Jilin Province, China (Grant No. 20230101088JC). The authors would like to thank the anonymous reviewers for their comments and suggestions.
Abstract: The aperture of natural rock fractures significantly affects the deformation and strength properties of rock masses, as well as the hydrodynamic properties of fractured rock masses. Conventional measurement methods are inadequate for collecting data on high-steep rock slopes in complex mountainous regions. This study establishes a high-resolution three-dimensional model of a rock slope using unmanned aerial vehicle (UAV) multi-angle nap-of-the-object photogrammetry to obtain edge feature points of fractures. Fracture opening morphology is characterized using coordinate projection and transformation. The fracture central axis is determined using vertical measuring lines, allowing for aperture interpretation adapted to fracture shape. The feasibility and reliability of the new method are verified at a railway construction site in southeast Tibet, China. The study shows that fracture aperture has significant interval and size effects. The optimal sampling length for fractures is approximately 0.5-1 m, and the optimal aperture interpretation results can be achieved when the measuring line spacing is 1% of the sampling length. Tensile fractures in the study area generally have larger apertures than shear fractures, and their tendency to increase with slope height is also greater than that of shear fractures. The aperture of tensile fractures is generally positively correlated with their trace length, while the correlation between the aperture of shear fractures and their trace length appears to be weak. Fractures of different orientations exhibit certain differences in their aperture distributions, but generally follow normal, log-normal, and gamma distributions. This study provides essential data support for rock and slope stability evaluation, which is of significant practical importance.
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2020YFB0905900.
Abstract: This paper proposes a new approach for online power system transient security assessment (TSA) and preventive control based on XGBoost and DC optimal power flow (DCOPF). The novelty of this proposal is that it applies XGBoost together with a data selection method based on the 1-norm distance in local feature importance evaluation, which can provide a certain degree of model interpretability. The SMOTE+ENN method is adopted for data rebalancing. The contingency-oriented XGBoost model is trained with databases generated by time domain simulations to represent the transient security constraint in the DCOPF model, which has a relatively fast calculation speed. The transient security constrained generation rescheduling is implemented with the differential evolution algorithm, which is utilized to optimize the rescheduled generation in the preventive control. The feasibility and effectiveness of the proposed approach are demonstrated on an IEEE 39-bus test system and a 500-bus operational model for South Carolina, USA.
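The 1-norm-distance data selection mentioned in the abstract can be sketched generically: for a new operating point, keep only the k training samples closest in L1 distance, so that local feature importance (or here, simply the neighbouring security labels) reflects the model's behaviour around that point. Feature values and labels below are illustrative, not power-system data.

```python
# Hedged sketch of 1-norm (L1) distance-based local data selection.

def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def select_local(samples, labels, query, k=3):
    """Return labels of the k samples nearest to `query` in 1-norm."""
    order = sorted(range(len(samples)),
                   key=lambda i: l1_distance(samples[i], query))
    return [labels[i] for i in order[:k]]

X = [[0.0, 0.0], [0.1, 0.2], [0.9, 0.8], [1.0, 1.0], [0.2, 0.1]]
y = ["secure", "secure", "insecure", "insecure", "secure"]
local = select_local(X, y, query=[0.05, 0.05])
print(local)
```

Restricting the importance evaluation to such a local neighbourhood is what makes the resulting explanation specific to the operating condition being assessed, rather than a global average over all simulated contingencies.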
Funding: The research work was supported by the National Natural Science Foundation of China (Grant No. 51978400) and the National Key Research and Development Program of China (No. 2021YFE0107800). The support is gratefully acknowledged.
Abstract: The prediction of structural performance plays a significant role in damage assessment of glass fiber reinforced polymer (GFRP) elastic gridshell structures. Machine learning (ML) approaches are implemented in this study to predict the maximum stress and displacement of GFRP elastic gridshell structures. Several ML algorithms, including linear regression (LR), ridge regression (RR), support vector regression (SVR), K-nearest neighbors (KNN), decision tree (DT), random forest (RF), adaptive boosting (AdaBoost), extreme gradient boosting (XGBoost), category boosting (CatBoost), and light gradient boosting machine (LightGBM), are implemented. The output features of structural performance considered in this study are the maximum stress, f1(x), and the maximum displacement to self-weight ratio, f2(x). A comparative study is conducted and the CatBoost model presents the highest prediction accuracy. Finally, interpretable ML approaches, including Shapley additive explanations (SHAP), partial dependence plots (PDP), and accumulated local effects (ALE), are applied to explain the predictions. SHAP is employed to describe the importance of each variable to structural performance both locally and globally. The results of sensitivity analysis (SA), the feature importance of the CatBoost model, and the SHAP approach indicate the same parameters as the most significant variables for f1(x) and f2(x).
Funding: Financially supported by the China Postdoctoral Science Foundation (Grant No. 2023M730365) and the Natural Science Foundation of Hubei Province of China (Grant No. 2023AFB232).
Abstract: Gas chromatography-mass spectrometry (GC-MS) is an extremely important analytical technique that is widely used in organic geochemistry. It is the only approach to capture biomarker features of organic matter and provides the key evidence for oil-source correlation and thermal maturity determination. However, the conventional way of processing and interpreting the mass chromatogram is both time-consuming and labor-intensive, which increases the research cost and restrains extensive applications of this method. To overcome this limitation, a correlation model is developed based on a convolutional neural network (CNN) to link the mass chromatogram and biomarker features of samples from the Triassic Yanchang Formation, Ordos Basin, China. In this way, the mass chromatogram can be automatically interpreted. This research first performs dimensionality reduction for 15 biomarker parameters via factor analysis and then quantifies the biomarker features using two indexes (i.e. MI and PMI) that represent the organic matter thermal maturity and parent material type, respectively. Subsequently, training, interpretation, and validation are performed multiple times using different CNN models to optimize the model structure and hyper-parameter setting, with the mass chromatogram used as the input and the obtained MI and PMI values used for supervision (labels). The optimized model presents high accuracy in automatically interpreting the mass chromatogram, with R² values typically above 0.85 and 0.80 for the thermal maturity and parent material interpretation results, respectively. The significance of this research is twofold: (i) developing an efficient technique for geochemical research; (ii) more importantly, demonstrating the potential of artificial intelligence in organic geochemistry and providing vital references for future related studies.
Funding: The authors acknowledge the financial support by the National Natural Science Foundation of China (52230004 and 52293445), the Key Research and Development Project of Shandong Province (2020CXGC011202-005), and the Shenzhen Science and Technology Program (KCXFZ20211020163404007 and KQTD20190929172630447).
Abstract: The potential for reducing greenhouse gas (GHG) emissions and energy consumption in wastewater treatment can be realized through intelligent control, with machine learning (ML) and multimodality emerging as a promising solution. Here, we introduce an ML technique based on multimodal strategies, focusing specifically on intelligent aeration control in wastewater treatment plants (WWTPs). The generalization of the multimodal strategy is demonstrated on eight ML models. The results demonstrate that this multimodal strategy significantly enhances model indicators for ML in environmental science and the efficiency of aeration control, exhibiting exceptional performance and interpretability. Integrating random forest with visual models achieves the highest accuracy in forecasting aeration quantity among the multimodal models, with a mean absolute percentage error of 4.4% and a coefficient of determination of 0.948. Practical testing in a full-scale plant reveals that the multimodal model can reduce operation costs by 19.8% compared to traditional fuzzy control methods. The potential application of these strategies in critical water science domains is discussed. To foster accessibility and promote widespread adoption, the multimodal ML models are freely available on GitHub, thereby eliminating technical barriers and encouraging the application of artificial intelligence in urban wastewater treatment.
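The two headline metrics in the abstract, mean absolute percentage error (MAPE) and the coefficient of determination (R²), are straightforward to compute; the sketch below uses toy aeration readings, not the plant's data.

```python
# MAPE and R^2 on illustrative aeration-quantity values.

def mape(actual, predicted):
    """Mean absolute percentage error, in percent (actual values nonzero)."""
    return 100 * sum(abs((a - p) / a)
                     for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

actual = [100.0, 120.0, 80.0, 110.0]
predicted = [98.0, 125.0, 78.0, 112.0]
print(round(mape(actual, predicted), 2))       # 2.62
print(round(r_squared(actual, predicted), 3))  # 0.958
```

Reported together, the two metrics are complementary: MAPE is scale-free and easy to communicate to plant operators, while R² captures how much of the variance in aeration demand the model explains.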
Funding: Yulin Science and Technology Bureau Production Project "Research on Smart Agricultural Product Traceability System" (No. CXY-2022-64); Light of West China (No. XAB2022YN10); the China Postdoctoral Science Foundation (No. 2023M740760); and the Shaanxi Province Key Research and Development Plan (No. 2024SF-YBXM-678).
Abstract: Hyperspectral imagery encompasses spectral and spatial dimensions, reflecting the material properties of objects. Its application proves crucial in search and rescue, concealed target identification, and crop growth analysis. Clustering is an important method of hyperspectral analysis. The vast data volume of hyperspectral imagery, coupled with redundant information, poses significant challenges in swiftly and accurately extracting features for subsequent analysis. Current hyperspectral feature clustering methods, which mostly approach the problem from the spatial or spectral side, do not have strong interpretability, resulting in poor comprehensibility of the algorithms. This research therefore introduces a feature clustering algorithm for hyperspectral imagery from an interpretability perspective. It commences with a simulated perception process, proposing an interpretable band selection algorithm to reduce data dimensions. Following this, a multi-dimensional clustering algorithm, rooted in fuzzy and kernel clustering, is developed to highlight intra-class similarities and inter-class differences. An optimized P system is then introduced to enhance computational efficiency. This system coordinates all cells within a mapping space to compute optimal cluster centers, facilitating parallel computation. This approach diminishes sensitivity to initial cluster centers and augments global search capabilities, thus preventing entrapment in local minima and enhancing clustering performance. Experiments were conducted on 300 datasets comprising both real and simulated data. The results show that the average accuracy (ACC) of the proposed algorithm is 0.86 and the combination measure (CM) is 0.81.
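The fuzzy-clustering foundation the abstract builds on can be illustrated with the membership update of classic fuzzy c-means (fuzzifier m = 2): each point receives a degree of belonging to every cluster, inversely related to its distance to each center. One-dimensional points keep the sketch short; real inputs are per-pixel spectra, and the paper adds kernels and P-system parallelization on top of this.

```python
# Minimal fuzzy c-means membership update (Bezdek's formula), pure Python.
# u[i][j] = 1 / sum_k (d_ij / d_ik)^(2/(m-1)); memberships per point sum to 1.

def update_memberships(points, centers, m=2.0):
    u = []
    for x in points:
        d = [abs(x - c) for c in centers]
        if 0 in d:  # point coincides with a center (simplified tie handling)
            u.append([1.0 if dj == 0 else 0.0 for dj in d])
            continue
        u.append([1 / sum((dj / dk) ** (2 / (m - 1)) for dk in d)
                  for dj in d])
    return u

pts = [0.0, 0.1, 1.0, 0.9]
u = update_memberships(pts, centers=[0.0, 1.0])
```

Alternating this update with the weighted-mean center update is the full fuzzy c-means loop; the P system in the paper parallelizes the center search across cells to reduce sensitivity to initialization.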
Funding: Supported by the National Natural Science Foundation of China project (No. 42372339) and the China Geological Survey Project (Nos. DD20221816, DD20190319).
Abstract: On September 5, 2022, a magnitude Ms 6.8 earthquake occurred along the Moxi fault in the southern part of the Xianshuihe fault zone, located on the southeastern margin of the Tibetan Plateau, resulting in severe damage and substantial economic loss. In this study, we established a database of coseismic landslides triggered by the Luding Ms 6.8 earthquake, which includes 4794 landslides with a total area of 46.79 km². The coseismic landslides primarily consisted of medium and small-sized landslides characterized by shallow surface sliding. Some exhibited high-position initiation, resulting in the obstruction or partial obstruction of rivers and the formation of dammed lakes. Our research found that the coseismic landslides were predominantly observed on slopes ranging from 30° to 50°, occurring at elevations between 1000 m and 2500 m, with slope aspects varying from 90° to 180°. Landslides were also highly developed in granitic bodies that had experienced structural fracturing and strong-to-moderate weathering. Coseismic landslides concentrated within a 6 km range on both sides of the Xianshuihe and Daduhe fault zones. The area and number of coseismic landslides exhibited a negative correlation with distance to fault lines, road networks, and river systems, as they were influenced by fault activity, road excavation, and river erosion. The coseismic landslides were mainly distributed in the southeastern region of the epicenter, exhibiting relatively concentrated patterns within the IX-degree intensity zones such as Moxi Town, the Wandong River basin, and the area from Detuo Town to Wanggangping Township. Our findings provide important data on the coseismic landslides triggered by the Luding Ms 6.8 earthquake and reveal their spatial distribution patterns. These findings can serve as important references for risk mitigation, reconstruction planning, and regional earthquake disaster research in the earthquake-affected area.
Funding: Supported by the National Key Research and Development Program of China (2021YFB3901205) and the National Institute of Natural Hazards, Ministry of Emergency Management of China (2023-JBKY-57).
Abstract: The periphery of the Qinghai-Tibet Plateau is renowned for its susceptibility to landslides. However, the northwestern margin of this region, characterised by limited human activity and challenging transportation, remains insufficiently explored concerning landslide occurrence and dispersion. With the planning and construction of the Xinjiang-Tibet Railway, a comprehensive investigation into disastrous landslides in this area is essential for effective disaster preparedness and mitigation strategies. Using a human-computer interaction interpretation approach, the authors established a landslide database encompassing 13003 landslides, collectively spanning an area of 3351.24 km² (36°N-40°N, 73°E-78°E). The database incorporates diverse topographical and environmental parameters, including regional elevation, slope angle, slope aspect, distance to faults, distance to roads, distance to rivers, annual precipitation, and stratum. The statistical characteristics of the number and area of landslides, landslide number density (LND), and landslide area percentage (LAP) are analyzed. The authors found a predominant concentration of landslide origins within high-slope-angle regions, with the highest incidence observed in intervals characterised by average slopes of 20° to 30°, maximum slope angles above 80°, and orientations towards the north (N), northeast (NE), and southwest (SW). Additionally, elevations above 4.5 km, distances to rivers below 1 km, and rainfall between 20-30 mm and 30-40 mm emerge as particularly susceptible to landslide development. The study area's geological composition primarily comprises Mesozoic and Upper Paleozoic outcrops. Both faults and human engineering activities have differing degrees of influence on landslide development. Furthermore, the significance of the landslide database, the relationship between landslide distribution and environmental factors, and the geometric and morphological characteristics of landslides are discussed. The landslide H/L ratios in the study area are mainly concentrated between 0.4 and 0.64, indicating that landslide mobility in the region is relatively low; the authors speculate that landslides in this region were more likely triggered by earthquakes or located in the meizoseismal area.
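The H/L mobility index used above is simply the total vertical drop divided by the horizontal runout length; a short sketch with a hypothetical landslide geometry (not from the database) makes the reported 0.4-0.64 band concrete.

```python
def hl_ratio(height_drop_m, runout_length_m):
    # Landslide mobility index: vertical drop H over horizontal runout L.
    # Higher H/L means lower mobility (shorter runout for a given drop).
    return height_drop_m / runout_length_m

# Hypothetical geometry: a 200 m drop over a 400 m horizontal runout
print(hl_ratio(200.0, 400.0))
```

A value of 0.5 falls inside the 0.4-0.64 range the authors report for the study area.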
Funding: The work is partially supported by the Natural Science Foundation of Ningxia (Grant No. AAC03300), the National Natural Science Foundation of China (Grant No. 61962001), and the Graduate Innovation Project of North Minzu University (Grant No. YCX23152).
Abstract: Model checking is an automated formal verification method for verifying whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, there is still a lack of quantitative and uncertain property verification for these systems. In uncertain environments, agents must make judicious decisions based on subjective epistemic knowledge. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities and proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas by using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and further illustrate the practical application of our approach through an example of a train control system.
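The reduction from FCTLK to FCTL checking ultimately evaluates formulas over a fuzzy Kripke structure. A minimal sketch of one such evaluation, the EX operator under max-min semantics, is given below; the states, transition degrees, and valuation are all invented for illustration, and the exact semantics in the paper may differ.

```python
# Hypothetical fuzzy Kripke structure: R[s][t] is the degree of the
# transition s -> t, and p[s] is the truth degree of atom p at state s.
R = {
    "s0": {"s1": 0.8, "s2": 0.3},
    "s1": {"s0": 0.5},
    "s2": {"s2": 1.0},
}
p = {"s0": 0.2, "s1": 0.9, "s2": 0.6}

def fuzzy_ex(valuation, state):
    # EX phi under max-min semantics: the best successor value,
    # capped by the degree of the transition that reaches it
    return max(min(deg, valuation[t]) for t, deg in R[state].items())

print(fuzzy_ex(p, "s0"))
```

Because each operator is a max/min computation over the transition relation, the FCTL checker runs in time polynomial in the structure size, which is why the FCTLK-to-FCTL translation adds no extra computational overhead.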
Funding: Funded by the project "Exploration Field Evaluation and Target Optimization of Key Basins in Chad and Niger" (No. 2019D-4308), initiated under the scientific research and technology development program of China National Petroleum Corporation.
Abstract: Recently, exploration breakthroughs have been made in the Lower Cretaceous sandstone reservoirs of the Doseo Basin, but identifying reservoir fluid properties is difficult due to variable reservoir lithology, complex oil-water contacts, and faint responses of the oil zone, which lower the accuracy of fluid property identification with conventional mud logging and wireline logging techniques. Applying geochemical logging, fluorescence logging, mud logging, and cuttings logging technology, in combination with formation test data, this paper distinguishes the crude oil types, analyzes the logging response characteristics of oil zones after water washing, and establishes interpretation charts and parameter standards for reservoir fluid properties. The crude oil can be divided into two types, viscous-heavy and thin-light, based on the total hydrocarbon content and component concentration tested by mud logging and the features of the pyrolysis gas chromatogram and fluorescence spectroscopy. The general characteristics of oil layers that have experienced water washing include a decrease in total hydrocarbon content and component concentration from mud logging, a decrease in S1 and PS values from geochemical logging, a decrease in hydrocarbon abundance and the absence of some light components in the pyrolysis gas chromatogram, and a decrease in fluorescence area and intensity from fluorescence logging. According to crude oil type, cross plots of S1 versus peak-baseline ratio and of rock wettability versus fluorescence area ratio are drawn and used to interpret reservoir fluid properties. Meanwhile, standards for reservoir fluid parameters are established by combining the PS parameter with the parameters in the above charts, and comprehensive multi-parameter correlation in both vertical and horizontal directions is also performed to interpret reservoir fluid properties. Application in the Doseo Basin achieved great success, improving the interpretation of fluid properties in reservoirs with complex oil-water contacts, and also provides a technical reference for the efficient exploration and development of similar reservoirs.