Journal Articles
17 articles found
1. Multimodal Machine Learning Guides Low Carbon Aeration Strategies in Urban Wastewater Treatment
Authors: Hong-Cheng Wang, Yu-Qi Wang, Xu Wang, Wan-Xin Yin, Ting-Chao Yu, Chen-Hao Xue, Ai-Jie Wang. Engineering (SCIE, EI, CAS, CSCD), 2024, No. 5, pp. 51-62.
The potential for reducing greenhouse gas (GHG) emissions and energy consumption in wastewater treatment can be realized through intelligent control, with machine learning (ML) and multimodality emerging as a promising solution. Here, we introduce an ML technique based on multimodal strategies, focusing specifically on intelligent aeration control in wastewater treatment plants (WWTPs). The generalization of the multimodal strategy is demonstrated on eight ML models. The results demonstrate that this multimodal strategy significantly enhances model indicators for ML in environmental science and the efficiency of aeration control, exhibiting exceptional performance and interpretability. Integrating random forest with visual models achieves the highest accuracy in forecasting aeration quantity among the multimodal models, with a mean absolute percentage error of 4.4% and a coefficient of determination of 0.948. Practical testing in a full-scale plant reveals that the multimodal model can reduce operation costs by 19.8% compared to traditional fuzzy control methods. The potential application of these strategies in critical water science domains is discussed. To foster accessibility and promote widespread adoption, the multimodal ML models are freely available on GitHub, thereby eliminating technical barriers and encouraging the application of artificial intelligence in urban wastewater treatment.
Keywords: Wastewater treatment; Multimodal machine learning; Deep learning; Aeration control; Interpretable machine learning
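The late-fusion idea in the entry above (tabular sensor readings concatenated with image-derived features, fed to a random forest that is scored by MAPE and R²) can be sketched on synthetic data. All feature names, the image-embedding stand-in, and the target form below are invented for illustration; the scores will not match the paper's reported 4.4% MAPE or 0.948 R².

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical water-quality sensor readings (tabular modality),
# e.g. influent flow, COD, NH4-N, DO -- names are placeholders.
sensors = rng.normal(size=(n, 4))
# Hypothetical embedding extracted from plant camera images (visual modality).
image_embed = rng.normal(size=(n, 8))

# Synthetic aeration target depending on both modalities.
aeration = (2.0 * sensors[:, 0] + sensors[:, 1] ** 2
            + image_embed[:, 0] + 10.0 + 0.1 * rng.normal(size=n))

# Late fusion: concatenate the two modalities into one feature matrix.
X = np.hstack([sensors, image_embed])
X_tr, X_te, y_tr, y_te = train_test_split(X, aeration, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
mape = mean_absolute_percentage_error(y_te, pred)
r2 = r2_score(y_te, pred)
print(f"MAPE={mape:.3f}  R2={r2:.3f}")
```

In the paper the visual features come from a trained vision model rather than random vectors; the fusion and evaluation mechanics are the same.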
2. Interpretable machine learning optimization (InterOpt) for operational parameters: A case study of highly-efficient shale gas development
Authors: Yun-Tian Chen, Dong-Xiao Zhang, Qun Zhao, De-Xun Liu. Petroleum Science (SCIE, EI, CAS, CSCD), 2023, No. 3, pp. 1788-1805.
An algorithm named InterOpt for optimizing operational parameters is proposed based on interpretable machine learning, and is demonstrated via optimization of shale gas development. InterOpt consists of three parts: a neural network is used to construct an emulator of the actual drilling and hydraulic fracturing process in the vector space (i.e., a virtual environment); the Shapley value method from interpretable machine learning is applied to analyze the impact of geological and operational parameters in each well (i.e., single-well feature impact analysis); and ensemble randomized maximum likelihood (EnRML) is conducted to optimize the operational parameters to comprehensively improve the efficiency of shale gas development and reduce the average cost. In the experiment, InterOpt provides different drilling and fracturing plans for each well according to its specific geological conditions, and finally achieves an average cost reduction of 9.7% for a case study with 104 wells.
Keywords: Interpretable machine learning; Operational parameters optimization; Shapley value; Shale gas development; Neural network
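The Shapley value method named above can be illustrated with a tiny, self-contained exact computation over a three-player coalitional game. The "savings" numbers and parameter names (fluid, proppant, stages) are invented for illustration and are not from the paper, which applies Shapley-style attribution to a neural-network emulator rather than an enumerable game.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a coalitional value function v(set)."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Weight of coalition S: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(frozenset(S) | {p}) - v(frozenset(S)))
        phi[p] = total
    return phi

# Hypothetical "cost savings" achieved by jointly tuning parameter groups;
# note the synergy between fluid and proppant (7 > 3 + 2).
savings = {
    frozenset(): 0.0,
    frozenset({"fluid"}): 3.0,
    frozenset({"proppant"}): 2.0,
    frozenset({"stages"}): 1.0,
    frozenset({"fluid", "proppant"}): 7.0,
    frozenset({"fluid", "stages"}): 4.0,
    frozenset({"proppant", "stages"}): 3.0,
    frozenset({"fluid", "proppant", "stages"}): 9.0,
}
phi = shapley_values(["fluid", "proppant", "stages"],
                     lambda s: savings[frozenset(s)])
print(phi)
```

The efficiency axiom guarantees the attributions sum to the grand-coalition value (9.0 here), which is why Shapley values give a complete, additive decomposition of a prediction.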
3. Classification and structural characteristics of amorphous materials based on interpretable deep learning
Authors: 崔佳梅, 李韵洁, 赵偲, 郑文. Chinese Physics B (SCIE, EI, CAS, CSCD), 2023, No. 9, pp. 356-363.
Defining the structure characteristics of amorphous materials is one of the fundamental problems that need to be solved urgently in complex materials because of their complex structure and long-range disorder. In this study, we develop an interpretable deep learning model capable of accurately classifying amorphous configurations and characterizing their structural properties. The results demonstrate that the multi-dimensional hybrid convolutional neural network can classify the two-dimensional (2D) liquids and amorphous solids of molecular dynamics simulations. The classification process does not make a priori assumptions about the amorphous particle environment, and the accuracy is 92.75%, which is better than that of other convolutional neural networks. Moreover, our model utilizes the gradient-weighted activation-like mapping method, which generates activation-like heat maps that can precisely identify important structures in the amorphous configuration maps. We obtain an order parameter from the heat map and conduct finite scale analysis of this parameter. Our findings demonstrate that the order parameter effectively captures the amorphous phase transition process across various systems. These results hold significant scientific implications for the study of amorphous structural characteristics via deep learning.
Keywords: Amorphous; Interpretable deep learning; Image classification; Finite scale analysis
4. Prediction of lattice thermal conductivity with two-stage interpretable machine learning
Authors: 胡锦龙, 左钰婷, 郝昱州, 舒国钰, 王洋, 冯敏轩, 李雪洁, 王晓莹, 孙军, 丁向东, 高志斌, 朱桂妹, 李保文. Chinese Physics B (SCIE, EI, CAS, CSCD), 2023, No. 4, pp. 11-18.
Thermoelectric and thermal materials are essential in achieving carbon neutrality. However, the high cost of lattice thermal conductivity calculations and the limited applicability of classical physical models have led to the inefficient development of thermoelectric materials. In this study, we proposed a two-stage machine learning framework with physical interpretability, incorporating domain knowledge to calculate high/low thermal conductivity rapidly. Specifically, a crystal graph convolutional neural network (CGCNN) is constructed to predict the fundamental physical parameters related to lattice thermal conductivity. Based on the above physical parameters, an interpretable machine learning model, the sure independence screening and sparsifying operator (SISSO), is trained to predict the lattice thermal conductivity. We have predicted the lattice thermal conductivity of all available materials in the Open Quantum Materials Database (OQMD) (https://www.oqmd.org/). The proposed approach guides the next step of searching for materials with ultra-high or ultra-low lattice thermal conductivity and promotes the development of new thermal insulation materials and thermoelectric materials.
Keywords: Low lattice thermal conductivity; Interpretable machine learning; Thermoelectric materials; Physical domain knowledge
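The second stage described above (SISSO-style descriptor selection on top of predicted physical parameters) can be sketched as a greedy correlation-screening fit. Everything here is illustrative: the parameter names (Debye temperature T_d, atomic mass m, volume V), the hidden target form, and the reduction of SISSO's sure-independence screening plus sparsifying step to simple residual-based greedy selection.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical "fundamental physical parameters" a CGCNN stage might supply.
T_d = rng.uniform(100, 600, n)   # Debye temperature, K (invented)
m = rng.uniform(10, 200, n)      # mean atomic mass (invented)
V = rng.uniform(10, 60, n)       # cell volume (invented)
# Synthetic log-conductivity with a known hidden descriptor form.
y = 2.0 * np.log(T_d) - 0.5 * np.log(m) + 0.05 * rng.normal(size=n)

# Build candidate descriptors from the base parameters.
base = {"T_d": T_d, "m": m, "V": V}
cands = {}
for name, x in base.items():
    cands[f"log({name})"] = np.log(x)
    cands[f"1/{name}"] = 1.0 / x
    cands[f"{name}^2"] = x ** 2

# Greedy screening: repeatedly pick the candidate most correlated with the
# current residual, then refit (a crude stand-in for SIS + sparsifying).
selected, resid = [], y.copy()
for _ in range(2):
    scores = {k: abs(np.corrcoef(v, resid)[0, 1])
              for k, v in cands.items() if k not in selected}
    selected.append(max(scores, key=scores.get))
    A = np.column_stack([cands[k] for k in selected] + [np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef

r2 = 1 - resid.var() / y.var()
print("selected descriptors:", selected, " R2 =", round(r2, 3))
```

The appeal of this two-stage design is that the final model is a short, human-readable formula in physically meaningful descriptors rather than a black box.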
5. An Interpretable Light Attention-Convolution-Gate Recurrent Unit Architecture for the Highly Accurate Modeling of Actual Chemical Dynamic Processes
Authors: Yue Li, Ning Li, Jingzheng Ren, Weifeng Shen. Engineering (SCIE, EI, CAS, CSCD), 2024, No. 8, pp. 104-116.
To equip data-driven dynamic chemical process models with strong interpretability, we develop a light attention-convolution-gate recurrent unit (LACG) architecture with three sub-modules (a basic module, a brand-new light attention module, and a residue module) that are specially designed to learn the general dynamic behavior, transient disturbances, and other input factors of chemical processes, respectively. Combined with a hyperparameter optimization framework, Optuna, the effectiveness of the proposed LACG is tested by distributed control system data-driven modeling experiments on the discharge flowrate of an actual deethanization process. The LACG model provides significant advantages in prediction accuracy and model generalization compared with other models, including the feedforward neural network, convolutional neural network, long short-term memory (LSTM), and attention-LSTM. Moreover, compared with the simulation results of a deethanization model built using Aspen Plus Dynamics V12.1, the LACG parameters are demonstrated to be interpretable, and more details on the variable interactions can be observed from the model parameters in comparison with the traditional interpretable model attention-LSTM. This contribution enriches interpretable machine learning knowledge and provides a reliable method with high accuracy for actual chemical process modeling, paving a route to intelligent manufacturing.
Keywords: Interpretable machine learning; Light attention-convolution-gate recurrent unit architecture; Process knowledge discovery; Data-driven process model; Intelligent manufacturing
6. Interpretable deep learning for roof fall hazard detection in underground mines (cited: 4)
Authors: Ergin Isleyen, Sebnem Duzgun, R. McKell Carter. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2021, No. 6, pp. 1246-1255.
Roof falls due to geological conditions are major hazards in the mining industry, causing work time loss, injuries, and fatalities. There are roof fall problems caused by high horizontal stress in several large-opening limestone mines in the eastern and midwestern United States. The typical hazard management approach for this type of roof fall hazard relies heavily on visual inspections and expert knowledge. In this context, we propose a deep learning system for detection of the roof fall hazards caused by high horizontal stress. We used images depicting hazardous and non-hazardous roof conditions to develop a convolutional neural network (CNN) for autonomous detection of hazardous roof conditions. To compensate for limited input data, we utilized a transfer learning approach, in which an already-trained network is used as a starting point for classification in a similar domain. Results show that this approach works well for classifying roof conditions as hazardous or safe, achieving a statistical accuracy of 86.4%. This result is also compared with a random forest classifier, and the deep learning approach is more successful at classification of roof conditions. However, accuracy alone is not enough to ensure a reliable hazard management system. System constraints and reliability are improved when the features used by the network are understood. Therefore, we used a deep learning interpretation technique called integrated gradients to identify the important geological features in each image for prediction. The analysis of integrated gradients shows that the system uses the same roof features as the experts do in roof fall hazard detection. The system developed in this paper demonstrates the potential of deep learning in geotechnical hazard management to complement human experts, and is likely to become an essential part of autonomous operations in cases where hazard identification depends heavily on expert knowledge. Moreover, deep learning-based systems reduce expert exposure to hazardous conditions.
Keywords: Roof fall; Convolutional neural network (CNN); Transfer learning; Deep learning interpretation; Integrated gradients
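The integrated gradients technique used above attributes a model's output to its inputs by averaging gradients along the straight path from a baseline to the input, scaled by the input-baseline difference. A minimal sketch follows, using a toy logistic score in place of the paper's CNN (the weights and inputs are invented); the completeness axiom, that attributions sum to f(x) - f(baseline), is checked at the end.

```python
import numpy as np

def integrated_gradients(f, grad_f, x, baseline, steps=200):
    """Integrated gradients via a Riemann midpoint approximation of the
    path integral from baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy stand-in for a CNN logit: a sigmoid score over 3 "image features".
w = np.array([2.0, -1.0, 0.5])

def f(x):
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def grad_f(x):
    s = f(x)
    return s * (1.0 - s) * w   # analytic sigmoid gradient

x = np.array([1.0, 0.5, -2.0])
baseline = np.zeros(3)        # all-zero baseline, as in the original IG paper
attr = integrated_gradients(f, grad_f, x, baseline)
print("attributions:", np.round(attr, 4))
print("f(x) - f(baseline):", round(f(x) - f(baseline), 4))
```

For an image model the same computation runs per pixel, with gradients taken by backpropagation; the resulting attribution map is what the authors compared against expert-identified roof features.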
7. Interpretable and Adaptable Early Warning Learning Analytics Model
Authors: Shaleeza Sohail, Atif Alvi, Aasia Khanum. Computers, Materials & Continua (SCIE, EI), 2022, No. 5, pp. 3211-3225.
Major issues currently restricting the use of learning analytics are the lack of interpretability and adaptability of the machine learning models used in this domain. Interpretability makes it easy for stakeholders to understand the working of these models, and adaptability makes it easy to use the same model for multiple cohorts and courses in educational institutions. Recently, some models in learning analytics have been constructed with interpretability in mind, but their interpretability is not quantified. Moreover, adaptability is not specifically considered in this domain. This paper presents a new framework based on hybrid statistical fuzzy theory to overcome these limitations. It also provides explainability in the form of rules describing the reasoning behind a particular output. The paper also discusses the system evaluation on a benchmark dataset, showing promising results. The measure of explainability, the fuzzy index, shows that the model is highly interpretable. The system achieves more than 82% recall in both the classification and the context adaptation stages.
Keywords: Learning analytics; Interpretable machine learning; Fuzzy systems; Early warning; Interpretability; Explainable artificial intelligence
8. Predicting Hurricane Evacuation Decisions with Interpretable Machine Learning Methods
Authors: Yuran Sun, Shih-Kai Huang, Xilei Zhao. International Journal of Disaster Risk Science (SCIE, CSCD), 2024, No. 1, pp. 134-148.
Facing the escalating effects of climate change, it is critical to improve the prediction and understanding of the hurricane evacuation decisions made by households in order to enhance emergency management. Current studies in this area have often relied on psychology-driven linear models, which frequently exhibit limitations in practice. The present study proposed a novel interpretable machine learning approach to predict household-level evacuation decisions by leveraging easily accessible demographic and resource-related predictors, in contrast to existing models that mainly rely on psychological factors. An enhanced logistic regression model (that is, an interpretable machine learning approach) was developed for accurate predictions by automatically accounting for nonlinearities and interactions (that is, univariate and bivariate threshold effects). Specifically, nonlinearity and interaction detection were enabled by low-depth decision trees, which offer a transparent model structure and robustness. A survey dataset collected in the aftermath of Hurricanes Katrina and Rita, two of the most intense tropical storms of the last two decades, was employed to test the new methodology. The findings show that, when predicting households' evacuation decisions, the enhanced logistic regression model outperformed previous linear models in terms of both model fit and predictive capability. This outcome suggests that the proposed methodology could provide a new tool and framework for emergency management authorities to improve the prediction of evacuation traffic demands in a timely and accurate manner.
Keywords: Artificial Intelligence (AI); Decision-making modeling; Hurricane evacuation; Interpretable machine learning; Nonlinearity and interaction detection
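The core mechanism above, using low-depth decision trees to detect a threshold effect and feeding the resulting indicator into a logistic regression, can be sketched on synthetic data. The predictors (distance to coast, household size), the 10 km threshold, and all coefficients are invented for illustration; the paper's survey data and variable set differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 2000

# Hypothetical household predictors (not the paper's survey data).
dist = rng.uniform(0, 50, n)                 # distance to coast, km
size = rng.integers(1, 7, n).astype(float)   # household size
# Synthetic evacuation rule with a sharp threshold effect at 10 km.
p = 1 / (1 + np.exp(-(2.0 * (dist < 10) - 0.3 * size + 0.2)))
y = (rng.uniform(size=n) < p).astype(int)
X = np.column_stack([dist, size])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: plain (linear) logistic regression.
plain = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

# Use a depth-1 tree ("stump") to detect the univariate threshold, then add
# the indicator as an extra feature of the logistic model.
stump = DecisionTreeClassifier(max_depth=1).fit(X_tr[:, :1], y_tr)
thr = stump.tree_.threshold[0]
aug_tr = np.column_stack([X_tr, (X_tr[:, 0] <= thr).astype(float)])
aug_te = np.column_stack([X_te, (X_te[:, 0] <= thr).astype(float)])
enhanced = LogisticRegression().fit(aug_tr, y_tr).score(aug_te, y_te)
print(f"plain={plain:.3f}  enhanced={enhanced:.3f}  threshold ~ {thr:.1f} km")
```

Depth-2 trees detect bivariate interactions in the same way: the leaf indicators of a two-split tree encode a threshold-by-threshold interaction term, while the final model stays a readable logistic regression.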
9. Directly predicting N₂ electroreduction reaction free energy using interpretable machine learning with non-DFT calculated features
Authors: Yaqin Zhang, Yuhang Wang, Ninggui Ma, Jun Fan. Journal of Energy Chemistry (SCIE, EI, CAS), 2024, No. 10, pp. 139-148, I0004.
Electrocatalytic nitrogen reduction to ammonia has garnered significant attention with the blooming of single-atom catalysts (SACs), showcasing their potential for sustainable and energy-efficient ammonia production. However, cost-effectively designing and screening efficient electrocatalysts remains a challenge. In this study, we have successfully established interpretable machine learning (ML) models to evaluate the catalytic activity of SACs by directly and accurately predicting reaction Gibbs free energy. Our models were trained using non-density-functional-theory (DFT) calculated features from a dataset comprising 90 graphene-supported SACs. Our results underscore the superior prediction accuracy of the gradient boosting regression (GBR) model for both ΔG(N₂→NNH) and ΔG(NH₂→NH₃), with coefficient of determination (R²) scores of 0.972 and 0.984, along with root mean square errors (RMSE) of 0.051 and 0.085 eV, respectively. Moreover, feature importance analysis elucidates that the high accuracy of the GBR model stems from its adept capture of characteristics pertinent to the active center and coordination environment, unveiling the significance of elementary descriptors, with the covalent radius playing a dominant role. Additionally, Shapley additive explanations (SHAP) analysis provides global and local interpretations of the working mechanism of the GBR model. Our analysis identifies that a pyrrole-type coordination (flag=0), d-orbitals with a moderate occupation (N_d=5), and a moderate difference in covalent radius (r_TM-ave near 140 pm) are conducive to achieving high activity. Furthermore, we extend the prediction of activity to more catalysts without additional DFT calculations, validating the reliability of our feature engineering, model training, and design strategy. These findings not only highlight new opportunities for accelerating catalyst design using non-DFT calculated features, but also shed light on the working mechanism of "black box" ML models. Moreover, the model provides valuable guidance for catalytic material design in multiple proton-electron coupling reactions, particularly in driving sustainable CO₂, O₂, and N₂ conversion.
Keywords: Nitrogen reduction; Single-atom catalyst; Interpretable machine learning; Graphene; Non-DFT features
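The modeling pipeline above (gradient boosting regression on cheap descriptors, scored by R² and RMSE, with feature importances read off afterwards) can be sketched end to end. The descriptors (covalent radius, d-electron count, coordination flag) echo those named in the abstract, but the dataset and target function below are entirely synthetic, with the radius deliberately made the dominant term.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 400

# Hypothetical non-DFT descriptors for graphene-supported SACs (invented).
r_cov = rng.uniform(110, 170, n)            # covalent radius, pm
n_d = rng.integers(1, 11, n).astype(float)  # d-electron count
flag = rng.integers(0, 2, n).astype(float)  # 0 = pyrrole-type coordination
X = np.column_stack([r_cov, n_d, flag])
# Synthetic reaction free energy (eV); radius dominates by construction.
y = (0.02 * (r_cov - 140) + 0.02 * (n_d - 5) ** 2 - 0.2 * flag
     + 0.02 * rng.normal(size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbr = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = gbr.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
r2 = r2_score(y_te, pred)
print(f"R2={r2:.3f}  RMSE={rmse:.3f} eV")
print("importances:", dict(zip(["r_cov", "N_d", "flag"],
                               np.round(gbr.feature_importances_, 2))))
```

In the paper, SHAP values are layered on top of such a model to get per-catalyst (local) explanations in addition to the global importances shown here.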
10. Interpretable machine learning analysis and automated modeling to simulate fluid-particle flows (cited: 1)
Authors: Bo Ouyang, Litao Zhu, Zhenghong Luo. Particuology (SCIE, EI, CAS, CSCD), 2023, No. 9, pp. 42-52.
The present study extracts human-understandable insights from machine learning (ML)-based mesoscale closure in fluid-particle flows via several novel data-driven analysis approaches, i.e., the maximal information coefficient (MIC), interpretable ML, and automated ML. It was previously shown that the solid volume fraction has the greatest effect on the drag force. The present study aims to quantitatively investigate the influence of flow properties on the mesoscale drag correction (H_d). The MIC results show strong correlations between the features (i.e., the slip velocity (u*_sy) and the particle volume fraction (ε_s)) and the label H_d. The interpretable ML analysis confirms this conclusion, and quantifies the contributions of u*_sy, ε_s, and the gas pressure gradient to the model as 71.9%, 27.2%, and 0.9%, respectively. Automated ML, without the need to select the model structure and hyperparameters, is used for modeling, improving the prediction accuracy over our previous model (Zhu et al., 2020; Ouyang, Zhu, Su, & Luo, 2021).
Keywords: Filtered two-fluid model; Fluid-particle flow; Mesoscale closure; Interpretable machine learning; Automated machine learning; Maximal information coefficient
11. Approaching the upper boundary of driver-response relationships: identifying factors using a novel framework integrating quantile regression with interpretable machine learning
Authors: Zhongyao Liang, Yaoyang Xu, Gang Zhao, Wentao Lu, Zhenghui Fu, Shuhang Wang, Tyler Wagner. Frontiers of Environmental Science & Engineering (SCIE, EI, CSCD), 2023, No. 6, pp. 153-163.
The identification of factors that may be forcing ecological observations to approach the upper boundary provides insight into potential mechanisms affecting driver-response relationships, and can help inform ecosystem management, but has rarely been explored. In this study, we propose a novel framework integrating quantile regression with interpretable machine learning. In the first stage of the framework, we estimate the upper boundary of a driver-response relationship using quantile regression. Next, we calculate "potentials" of the response variable depending on the driver, which are defined as vertical distances from the estimated upper boundary of the relationship to the observations in the driver-response scatter plot. Finally, we identify key factors impacting the potential using a machine learning model. We illustrate the necessary steps to implement the framework using the total phosphorus (TP)-chlorophyll a (CHL) relationship in lakes across the continental US. We found that the nitrogen-to-phosphorus ratio (N:P), annual average precipitation, total nitrogen (TN), and summer average air temperature were key factors impacting the potential of CHL depending on TP. We further reveal important implications of our findings for lake eutrophication management. The important role of N:P and TN in the potential highlights the co-limitation by phosphorus and nitrogen and indicates the need for dual nutrient criteria. Future wetter and/or warmer climate scenarios can decrease the potential, which may reduce the efficacy of lake eutrophication management. The novel framework advances the application of quantile regression to identify factors driving observations to approach the upper boundary of driver-response relationships.
Keywords: Driver-response; Upper boundary of relationship; Interpretable machine learning; Quantile regression; Total phosphorus; Chlorophyll a
12. Integrating multi-omics data of childhood asthma using a deep association model (cited: 1)
Authors: Kai Wei, Fang Qian, Yixue Li, Tao Zeng, Tao Huang. Fundamental Research (CAS, CSCD), 2024, No. 4, pp. 738-751.
Childhood asthma is one of the most common respiratory diseases, with rising mortality and morbidity. Multi-omics data provide a new chance to explore collaborative biomarkers and corresponding diagnostic models of childhood asthma. To capture the nonlinear associations in multi-omics data and improve the interpretability of the diagnostic model, we proposed a novel deep association model (DAM) and a corresponding efficient analysis framework. First, deep subspace reconstruction is used to fuse the omics data and diagnostic information, thereby correcting the distribution of the original omics data and reducing the influence of unnecessary data noise. Second, joint deep semi-negative matrix factorization is applied to identify different latent sample patterns and extract biomarkers from different omics data levels. Third, our newly proposed deep orthogonal canonical correlation analysis can rank features in the collaborative module, making it possible to construct a diagnostic model that considers the nonlinear correlation between different omics data levels. Using DAM, we deeply analyzed the transcriptome and methylation data of childhood asthma. The effectiveness of DAM is verified, from the perspectives of algorithm performance and biological significance, on an independent test dataset, by ablation experiments, and by comparison with many baseline methods from clinical and biological studies. The DAM-induced diagnostic model achieves a prediction AUC of 0.912, which is higher than that of many other alternative methods. Meanwhile, relevant pathways and biomarkers of childhood asthma are also recognized to be collectively altered at the gene expression and methylation levels. As an interpretable machine learning approach, DAM simultaneously considers the nonlinear associations among samples and among biological features, which should help explore interpretable biomarker candidates and efficient diagnostic models in multi-omics data analysis for complex human diseases.
Keywords: Deep subspace reconstruction; Deep non-negative matrix factorization; Deep canonical correlation analysis; Multi-omics; Interpretable machine learning; Childhood asthma
13. Visual interpretability for deep learning: a survey (cited: 48)
Authors: Quan-shi Zhang, Song-chun Zhu. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2018, No. 1, pp. 27-39.
This paper reviews recent studies in understanding neural-network representations and learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability is always the Achilles' heel of deep neural networks. At present, deep neural networks obtain high discrimination power at the cost of a low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from a few annotations, learning via human-computer communications at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs), and revisit the visualization of CNN representations, methods of diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.
Keywords: Artificial intelligence; Deep learning; Interpretable model
14. An explainable framework for load forecasting of a regional integrated energy system based on coupled features and multi-task learning (cited: 4)
Authors: Kailang Wu, Jie Gu, Lu Meng, Honglin Wen, Jinghuan Ma. Protection and Control of Modern Power Systems, 2022, No. 1, pp. 349-362.
To extract strong correlations between different energy loads and improve the interpretability and accuracy of load forecasting for a regional integrated energy system (RIES), an explainable framework for load forecasting of an RIES is proposed. This includes the load forecasting model of the RIES and its interpretation. A coupled feature extracting strategy is adopted to construct coupled features between loads as the input variables of the model. The model is designed based on multi-task learning (MTL) with a long short-term memory (LSTM) model as the sharing layer. Based on SHapley Additive exPlanations (SHAP), the explainable framework combines global and local interpretations to improve the interpretability of load forecasting for the RIES. In addition, an input variable selection strategy based on the global SHAP value is proposed to select the input feature variables of the model. A case study is given to verify the effectiveness of the proposed model, the constructed coupled features, and the input variable selection strategy. The results show that the explainable framework intuitively improves the interpretability of the prediction model.
Keywords: Load forecasting; Regional integrated energy system; Coupled feature; SHapley Additive exPlanations; Interpretability of deep learning
15. Explainable artificial intelligence and interpretable machine learning for agricultural data analysis (cited: 1)
Authors: Masahiro Ryo. Artificial Intelligence in Agriculture, 2022, No. 1, pp. 257-265.
Artificial intelligence and machine learning have been increasingly applied for prediction in agricultural science. However, many models are typically black boxes, meaning we cannot explain what the models learned from the data or the reasons behind their predictions. To address this issue, I introduce an emerging subdomain of artificial intelligence, explainable artificial intelligence (XAI), and the associated toolkits of interpretable machine learning. This study demonstrates the usefulness of several methods by applying them to an openly available dataset. The dataset includes the no-tillage effect on crop yield relative to conventional tillage, together with soil, climate, and management variables. The data analysis discovered that no-tillage management can increase maize crop yield where the yield under conventional tillage is <5000 kg/ha and the maximum temperature is higher than 32°. These methods are useful for answering: (i) which variables are important for prediction in regression/classification; (ii) which variable interactions are important for prediction; (iii) how important variables and their interactions are associated with the response variable; (iv) what the reasons are underlying a predicted value for a certain instance; and (v) whether different machine learning algorithms offer the same answer to these questions. I argue that in current practice the goodness of model fit is over-evaluated with model performance measures while these questions remain unanswered. XAI and interpretable machine learning can enhance trust and explainability in AI.
Keywords: Interpretable machine learning; Explainable artificial intelligence; Agriculture; Crop yield; No-tillage; XAI
16. Compressor geometric uncertainty quantification under conditions from near choke to near stall (cited: 2)
Authors: Junying Wang, Baotong Wang, Heli Yang, Zhenzhong Sun, Kai Zhou, Xinqian Zheng. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2023, No. 3, pp. 16-29.
Geometric and working-condition uncertainties are inevitable in a compressor, deviating the compressor performance from the design value. It is necessary to explore the influence of geometric uncertainty on performance deviation under different working conditions. In this paper, the influence of geometric uncertainty at near-stall, peak-efficiency, and near-choke conditions under design speed and low speed is investigated. Firstly, manufacturing geometric uncertainties are analyzed. Next, correlation models between geometry and performance under different working conditions are constructed based on a neural network. Then the SHapley Additive exPlanations (SHAP) method is introduced to explain the output of the neural network. Results show that under real manufacturing uncertainty, the efficiency deviation range is small under the near-stall and peak-efficiency conditions. However, under near-choke conditions, efficiency is highly sensitive to flow-capacity changes caused by geometric uncertainty, leading to a significant increase in the efficiency deviation amplitude, up to a magnitude of -3.6%. Moreover, the tip leading-edge radius and tip thickness are the two main factors affecting efficiency deviation. Therefore, to reduce efficiency uncertainty, a compressor should avoid working near the choke condition, and the tolerances of the tip leading-edge radius and tip thickness should be strictly controlled.
Keywords: Compressor; Geometric uncertainty quantification; Interpretable machine learning; Multiple conditions; Neural network
17. Towards Interpretable Defense Against Adversarial Attacks via Causal Inference (cited: 1)
Authors: Min Ren, Yun-Long Wang, Zhao-Feng He. Machine Intelligence Research (EI, CSCD), 2022, No. 3, pp. 209-226.
Deep learning-based models are vulnerable to adversarial attacks. Defense against adversarial attacks is essential for sensitive and safety-critical scenarios. However, deep learning methods still lack effective and efficient defense mechanisms against adversarial attacks. Most of the existing methods are just stopgaps for specific adversarial samples. The main obstacle is that how adversarial samples fool the deep learning models is still unclear. The underlying working mechanism of adversarial samples has not been well explored, and this is the bottleneck of adversarial attack defense. In this paper, we build a causal model to interpret the generation and performance of adversarial samples. The self-attention/transformer is adopted as a powerful tool in this causal model. Compared to existing methods, causality enables us to analyze adversarial samples more naturally and intrinsically. Based on this causal model, the working mechanism of adversarial samples is revealed, and instructive analysis is provided. Then, we propose simple and effective adversarial sample detection and recognition methods according to the revealed working mechanism. The causal insights enable us to detect and recognize adversarial samples without any extra model or training. Extensive experiments are conducted to demonstrate the effectiveness of the proposed methods. Our methods outperform the state-of-the-art defense methods under various adversarial attacks.
Keywords: Adversarial sample; Adversarial defense; Causal inference; Interpretable machine learning; Transformers