Journal Articles
10 articles found
1. Interpretable machine learning optimization (InterOpt) for operational parameters: A case study of highly efficient shale gas development
Authors: Yun-Tian Chen, Dong-Xiao Zhang, Qun Zhao, De-Xun Liu | Petroleum Science (SCIE, EI, CAS, CSCD), 2023, No. 3, pp. 1788-1805 (18 pages)
An algorithm named InterOpt for optimizing operational parameters is proposed based on interpretable machine learning, and is demonstrated via optimization of shale gas development. InterOpt consists of three parts: a neural network is used to construct an emulator of the actual drilling and hydraulic fracturing process in the vector space (i.e., virtual environment); the Shapley value method in interpretable machine learning is applied to analyze the impact of geological and operational parameters in each well (i.e., single-well feature impact analysis); and ensemble randomized maximum likelihood (EnRML) is conducted to optimize the operational parameters to comprehensively improve the efficiency of shale gas development and reduce the average cost. In the experiment, InterOpt provides different drilling and fracturing plans for each well according to its specific geological conditions, and finally achieves an average cost reduction of 9.7% for a case study with 104 wells.
Keywords: interpretable machine learning; operational parameter optimization; Shapley value; shale gas development; neural network
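For readers unfamiliar with the Shapley value method this abstract relies on, a minimal exact computation can be sketched in pure Python. The value function below is a toy stand-in for the paper's neural-network emulator, and the parameter names (stage_spacing, fluid_volume) are hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values: weighted average marginal contribution of each
    feature over all coalitions (tractable only for a handful of features)."""
    n = len(features)
    phi = {}
    for f in features:
        others = [x for x in features if x != f]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value_fn(set(S) | {f}) - value_fn(set(S)))
        phi[f] = total
    return phi

# Toy "well cost" function with an interaction term: a hypothetical stand-in
# for the paper's emulator, not its actual model.
def cost(subset):
    v = 0.0
    if "stage_spacing" in subset:
        v += 3.0
    if "fluid_volume" in subset:
        v += 1.0
    if {"stage_spacing", "fluid_volume"} <= subset:
        v += 2.0
    return v

phi = shapley_values(cost, ["stage_spacing", "fluid_volume"])
# Efficiency property: attributions sum to the full-coalition value.
assert abs(sum(phi.values()) - cost({"stage_spacing", "fluid_volume"})) < 1e-9
```

With only two parameters the exact sum over coalitions is cheap; real applications approximate it by sampling, as SHAP-style libraries do.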
2. Prediction of lattice thermal conductivity with two-stage interpretable machine learning
Authors: 胡锦龙, 左钰婷, 郝昱州, 舒国钰, 王洋, 冯敏轩, 李雪洁, 王晓莹, 孙军, 丁向东, 高志斌, 朱桂妹, 李保文 | Chinese Physics B (SCIE, EI, CAS, CSCD), 2023, No. 4, pp. 11-18 (8 pages)
Thermoelectric and thermal materials are essential in achieving carbon neutrality. However, the high cost of lattice thermal conductivity calculations and the limited applicability of classical physical models have led to the inefficient development of thermoelectric materials. In this study, we proposed a two-stage machine learning framework with physical interpretability incorporating domain knowledge to calculate high/low thermal conductivity rapidly. Specifically, a crystal graph convolutional neural network (CGCNN) is constructed to predict the fundamental physical parameters related to lattice thermal conductivity. Based on the above physical parameters, an interpretable machine learning model, the sure independence screening and sparsifying operator (SISSO), is trained to predict the lattice thermal conductivity. We have predicted the lattice thermal conductivity of all available materials in the Open Quantum Materials Database (OQMD) (https://www.oqmd.org/). The proposed approach guides the next step of searching for materials with ultra-high or ultra-low lattice thermal conductivity and promotes the development of new thermal insulation materials and thermoelectric materials.
Keywords: low lattice thermal conductivity; interpretable machine learning; thermoelectric materials; physical domain knowledge
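As a rough illustration of the screening idea behind SISSO (not the authors' implementation), the sketch below builds a few nonlinear candidate descriptors from a raw physical feature and keeps the one most correlated with the target. The data and the 1/volume trend are invented for the example:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def sis_best_descriptor(features, y):
    """Sure-independence-screening flavor: generate nonlinear candidate
    descriptors from each feature, rank by |correlation| with the target,
    and return the best (name, values) pair."""
    candidates = {}
    for name, col in features.items():
        candidates[name] = col
        candidates[f"log({name})"] = [math.log(v) for v in col]
        candidates[f"1/{name}"] = [1.0 / v for v in col]
    return max(candidates.items(), key=lambda kv: abs(pearson(kv[1], y)))

# Hypothetical data following a crude kappa ~ 1/volume trend.
vol = [10.0, 20.0, 30.0, 40.0]
kappa = [1.0 / v for v in vol]
name, desc = sis_best_descriptor({"volume": vol}, kappa)
```

SISSO proper searches a far larger operator space and then fits a sparse linear model on the surviving descriptors; this shows only the screening step.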
3. Predicting Hurricane Evacuation Decisions with Interpretable Machine Learning Methods
Authors: Yuran Sun, Shih-Kai Huang, Xilei Zhao | International Journal of Disaster Risk Science (SCIE, CSCD), 2024, No. 1, pp. 134-148 (15 pages)
Facing the escalating effects of climate change, it is critical to improve the prediction and understanding of the hurricane evacuation decisions made by households in order to enhance emergency management. Current studies in this area have often relied on psychology-driven linear models, which frequently exhibited limitations in practice. The present study proposed a novel interpretable machine learning approach to predict household-level evacuation decisions by leveraging easily accessible demographic and resource-related predictors, compared to existing models that mainly rely on psychological factors. An enhanced logistic regression model (that is, an interpretable machine learning approach) was developed for accurate predictions by automatically accounting for nonlinearities and interactions (that is, univariate and bivariate threshold effects). Specifically, nonlinearity and interaction detection were enabled by low-depth decision trees, which offer transparent model structure and robustness. A survey dataset collected in the aftermath of Hurricanes Katrina and Rita, two of the most intense tropical storms of the last two decades, was employed to test the new methodology. The findings show that, when predicting the households' evacuation decisions, the enhanced logistic regression model outperformed previous linear models in terms of both model fit and predictive capability. This outcome suggests that our proposed methodology could provide a new tool and framework for emergency management authorities to improve the prediction of evacuation traffic demands in a timely and accurate manner.
Keywords: artificial intelligence (AI); decision-making modeling; hurricane evacuation; interpretable machine learning; nonlinearity and interaction detection
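The "low-depth decision trees for threshold detection" step can be illustrated with a depth-1 tree: scan candidate split points, score each by weighted Gini impurity, and feed the winning indicator 1[x > t] into a logistic regression as a threshold term. The survey values below are invented, not from the paper's dataset:

```python
def best_threshold(x, y):
    """Depth-1 decision tree: pick the split on x that minimizes weighted
    Gini impurity of the binary label y. The resulting indicator 1[x > t]
    can then enter a logistic regression as a threshold (nonlinear) term."""
    def gini(labels):
        if not labels:
            return 0.0
        p = sum(labels) / len(labels)
        return 2 * p * (1 - p)

    best_t, best_score = None, float("inf")
    for t in sorted(set(x))[:-1]:
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# Hypothetical survey slice: households with income above 40 evacuated.
income = [10, 20, 30, 40, 50, 60, 70, 80]
evacuated = [0, 0, 0, 0, 1, 1, 1, 1]
t = best_threshold(income, evacuated)
```

The paper's bivariate (interaction) effects extend the same idea to depth-2 trees over pairs of predictors.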
4. Interpretable and Adaptable Early Warning Learning Analytics Model
Authors: Shaleeza Sohail, Atif Alvi, Aasia Khanum | Computers, Materials & Continua (SCIE, EI), 2022, No. 5, pp. 3211-3225 (15 pages)
Major issues currently restricting the use of learning analytics are the lack of interpretability and adaptability of the machine learning models used in this domain. Interpretability makes it easy for stakeholders to understand the working of these models, and adaptability makes it easy to use the same model for multiple cohorts and courses in educational institutions. Recently, some models in learning analytics have been constructed with interpretability in mind, but their interpretability is not quantified. Moreover, adaptability is not specifically considered in this domain. This paper presents a new framework based on hybrid statistical fuzzy theory to overcome these limitations. It also provides explainability in the form of rules describing the reasoning behind a particular output. The paper also discusses the system evaluation on a benchmark dataset, showing promising results. The measure of explainability, the fuzzy index, shows that the model is highly interpretable. The system achieves more than 82% recall in both the classification and the context adaptation stages.
Keywords: learning analytics; interpretable machine learning; fuzzy systems; early warning; interpretability; explainable artificial intelligence
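A minimal sketch of the kind of fuzzy-rule reasoning such a framework exposes; the membership shapes, thresholds, and the rule itself are hypothetical, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical early-warning rule:
#   IF attendance is low AND grades are low THEN risk is high,
# with the rule's firing strength taken as the min of the memberships
# (standard Mamdani-style AND).
low_attendance = tri(40, 0, 30, 60)   # attendance percent = 40
low_grades = tri(45, 0, 40, 70)       # grade percent = 45
risk = min(low_attendance, low_grades)
```

Rules like this one are what make the model's outputs auditable: each alert can be traced to which memberships fired and how strongly.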
5. Interpretable machine learning analysis and automated modeling to simulate fluid-particle flows (cited: 1)
Authors: Bo Ouyang, Litao Zhu, Zhenghong Luo | Particuology (SCIE, EI, CAS, CSCD), 2023, No. 9, pp. 42-52 (11 pages)
The present study extracts human-understandable insights from machine learning (ML)-based mesoscale closure in fluid-particle flows via several novel data-driven analysis approaches, i.e., the maximal information coefficient (MIC), interpretable ML, and automated ML. It was previously shown that the solid volume fraction has the greatest effect on the drag force. The present study aims to quantitatively investigate the influence of flow properties on the mesoscale drag correction (H_d). The MIC results show strong correlations between the features (i.e., slip velocity (u*_sy) and particle volume fraction (ε_s)) and the label H_d. The interpretable ML analysis confirms this conclusion, and quantifies the contributions of u*_sy, ε_s, and the gas pressure gradient to the model as 71.9%, 27.2%, and 0.9%, respectively. Automated ML, without the need to select the model structure and hyperparameters, is used for modeling, improving the prediction accuracy over our previous model (Zhu et al., 2020; Ouyang, Zhu, Su, & Luo, 2021).
Keywords: filtered two-fluid model; fluid-particle flow; mesoscale closure; interpretable machine learning; automated machine learning; maximal information coefficient
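A fixed-grid mutual information estimate gives the flavor of MIC-style dependence screening; MIC proper additionally searches over grid resolutions and normalizes the score, which is omitted here, and the drag data below are synthetic:

```python
import math
import random
from collections import Counter

def mutual_information(x, y, bins=4):
    """Equal-width binning plus plug-in mutual information (in bits).
    A simplified stand-in for MIC, which also optimizes the grid."""
    def binned(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0
        return [min(int((a - lo) / w), bins - 1) for a in v]

    bx, by = binned(x), binned(y)
    n = len(x)
    pxy = Counter(zip(bx, by))
    px, py = Counter(bx), Counter(by)
    return sum(c / n * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

# Synthetic data: the drag correction tracks slip velocity exactly,
# while a shuffled copy should carry much less information about it.
slip = [i / 10 for i in range(40)]
h_d = [2.0 * s for s in slip]
shuffled = slip[:]
random.Random(0).shuffle(shuffled)
mi_signal = mutual_information(slip, h_d)
mi_noise = mutual_information(shuffled, h_d)
```

Ranking features by such scores against the label H_d is the screening step the abstract describes; the percentage contributions come from a separate interpretable-ML attribution.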
6. Approaching the upper boundary of driver-response relationships: identifying factors using a novel framework integrating quantile regression with interpretable machine learning
Authors: Zhongyao Liang, Yaoyang Xu, Gang Zhao, Wentao Lu, Zhenghui Fu, Shuhang Wang, Tyler Wagner | Frontiers of Environmental Science & Engineering (SCIE, EI, CSCD), 2023, No. 6, pp. 153-163 (11 pages)
The identification of factors that may be forcing ecological observations to approach the upper boundary provides insight into potential mechanisms affecting driver-response relationships, and can help inform ecosystem management, but has rarely been explored. In this study, we propose a novel framework integrating quantile regression with interpretable machine learning. In the first stage of the framework, we estimate the upper boundary of a driver-response relationship using quantile regression. Next, we calculate "potentials" of the response variable depending on the driver, defined as vertical distances from the estimated upper boundary of the relationship to the observations in the driver-response scatter plot. Finally, we identify key factors impacting the potential using a machine learning model. We illustrate the necessary steps to implement the framework using the total phosphorus (TP)-chlorophyll a (CHL) relationship in lakes across the continental US. We found that the nitrogen to phosphorus ratio (N:P), annual average precipitation, total nitrogen (TN), and summer average air temperature were key factors impacting the potential of CHL depending on TP. We further revealed important implications of our findings for lake eutrophication management. The important role of N:P and TN on the potential highlights the co-limitation of phosphorus and nitrogen and indicates the need for dual nutrient criteria. Future wetter and/or warmer climate scenarios can decrease the potential, which may reduce the efficacy of lake eutrophication management. The novel framework advances the application of quantile regression to identify factors driving observations to approach the upper boundary of driver-response relationships.
Keywords: driver-response; upper boundary of relationship; interpretable machine learning; quantile regression; total phosphorus; chlorophyll a
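The framework's "potential" quantity can be sketched with an empirical per-bin upper boundary standing in for the fitted quantile regression (a deliberate simplification); the TP and CHL values are invented:

```python
def potentials(x, y, bins=2):
    """Crude stand-in for the paper's first two stages: estimate the upper
    boundary as the per-bin maximum of y over bins of the driver x, then
    return each observation's vertical distance below that boundary."""
    lo, hi = min(x), max(x)
    w = (hi - lo) / bins or 1.0
    bx = [min(int((a - lo) / w), bins - 1) for a in x]
    top = {}
    for b, yi in zip(bx, y):
        top[b] = max(top.get(b, float("-inf")), yi)
    return [top[b] - yi for b, yi in zip(bx, y)]

tp  = [1.0, 1.2, 1.4, 3.0, 3.2, 3.4]      # hypothetical TP driver values
chl = [5.0, 9.0, 7.0, 12.0, 20.0, 15.0]   # hypothetical CHL responses
pot = potentials(tp, chl)
```

Observations on the boundary get potential 0; the third stage of the framework then regresses these potentials on candidate factors (N:P, TN, climate) with an interpretable model.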
7. Multimodal Machine Learning Guides Low Carbon Aeration Strategies in Urban Wastewater Treatment
Authors: Hong-Cheng Wang, Yu-Qi Wang, Xu Wang, Wan-Xin Yin, Ting-Chao Yu, Chen-Hao Xue, Ai-Jie Wang | Engineering (SCIE, EI, CAS), 2024, No. 5, pp. 51-62 (12 pages)
The potential for reducing greenhouse gas (GHG) emissions and energy consumption in wastewater treatment can be realized through intelligent control, with machine learning (ML) and multimodality emerging as a promising solution. Here, we introduce an ML technique based on multimodal strategies, focusing specifically on intelligent aeration control in wastewater treatment plants (WWTPs). The generalization of the multimodal strategy is demonstrated on eight ML models. The results demonstrate that this multimodal strategy significantly enhances model indicators for ML in environmental science and the efficiency of aeration control, exhibiting exceptional performance and interpretability. Integrating random forest with visual models achieves the highest accuracy in forecasting aeration quantity among the multimodal models, with a mean absolute percentage error of 4.4% and a coefficient of determination of 0.948. Practical testing in a full-scale plant reveals that the multimodal model can reduce operation costs by 19.8% compared to traditional fuzzy control methods. The potential application of these strategies in critical water science domains is discussed. To foster accessibility and promote widespread adoption, the multimodal ML models are freely available on GitHub, thereby eliminating technical barriers and encouraging the application of artificial intelligence in urban wastewater treatment.
Keywords: wastewater treatment; multimodal machine learning; deep learning; aeration control; interpretable machine learning
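The two model indicators quoted in this abstract, mean absolute percentage error and the coefficient of determination, are standard and easy to state explicitly; the aeration values below are made up for illustration:

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((t - p) / t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical aeration quantities (true vs. forecast), not plant data.
y_true = [100.0, 200.0, 300.0]
y_pred = [110.0, 190.0, 300.0]
err = mape(y_true, y_pred)
fit = r2(y_true, y_pred)
```

The paper's reported 4.4% MAPE and 0.948 R² are these same quantities computed on its own test set.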
8. Explainable artificial intelligence and interpretable machine learning for agricultural data analysis (cited: 1)
Authors: Masahiro Ryo | Artificial Intelligence in Agriculture, 2022, No. 1, pp. 257-265 (9 pages)
Artificial intelligence and machine learning have been increasingly applied for prediction in agricultural science. However, many models are typically black boxes, meaning we cannot explain what the models learned from the data or the reasons behind their predictions. To address this issue, I introduce an emerging subdomain of artificial intelligence, explainable artificial intelligence (XAI), and its associated toolkits, interpretable machine learning. This study demonstrates the usefulness of several methods by applying them to an openly available dataset. The dataset includes the no-tillage effect on crop yield relative to conventional tillage, along with soil, climate, and management variables. Data analysis discovered that no-tillage management can increase maize crop yield where the yield under conventional tillage is < 5000 kg/ha and the maximum temperature is higher than 32°. These methods are useful to answer (i) which variables are important for prediction in regression/classification, (ii) which variable interactions are important for prediction, (iii) how important variables and their interactions are associated with the response variable, (iv) what the reasons are underlying a predicted value for a certain instance, and (v) whether different machine learning algorithms offer the same answer to these questions. I argue that, in current practice, goodness of model fit is over-evaluated with model performance measures while these questions remain unanswered. XAI and interpretable machine learning can enhance trust and explainability in AI.
Keywords: interpretable machine learning; explainable artificial intelligence; agriculture; crop yield; no-tillage; XAI
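One of the question types listed, how an important variable is associated with the response, is commonly answered with partial dependence. A minimal version follows, with a hypothetical yield model that merely echoes the abstract's above-32° threshold finding (it is not the paper's fitted model):

```python
def partial_dependence(model, rows, feature_idx, grid):
    """One-dimensional partial dependence: sweep one feature across a grid
    while averaging the model's prediction over the observed data rows."""
    out = []
    for g in grid:
        preds = []
        for row in rows:
            r = list(row)
            r[feature_idx] = g  # overwrite the swept feature only
            preds.append(model(r))
        out.append(sum(preds) / len(preds))
    return out

# Hypothetical yield model: a fixed no-tillage benefit above 32 degrees.
def yield_model(row):
    temp, base = row
    return base + (4.0 if temp > 32 else 0.0)

rows = [(30, 100.0), (35, 110.0)]           # (max temperature, base yield)
pd_curve = partial_dependence(yield_model, rows, 0, [30, 34])
```

The jump in the curve between the two grid points is exactly the kind of threshold association these XAI methods surface.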
9. Towards Interpretable Defense Against Adversarial Attacks via Causal Inference (cited: 1)
Authors: Min Ren, Yun-Long Wang, Zhao-Feng He | Machine Intelligence Research (EI, CSCD), 2022, No. 3, pp. 209-226 (18 pages)
Deep learning-based models are vulnerable to adversarial attacks. Defense against adversarial attacks is essential for sensitive and safety-critical scenarios. However, deep learning methods still lack effective and efficient defense mechanisms against adversarial attacks. Most of the existing methods are just stopgaps for specific adversarial samples. The main obstacle is that it is still unclear how adversarial samples fool deep learning models. The underlying working mechanism of adversarial samples has not been well explored, and it is the bottleneck of adversarial attack defense. In this paper, we build a causal model to interpret the generation and performance of adversarial samples. The self-attention/transformer is adopted as a powerful tool in this causal model. Compared to existing methods, causality enables us to analyze adversarial samples more naturally and intrinsically. Based on this causal model, the working mechanism of adversarial samples is revealed, and instructive analysis is provided. Then, we propose simple and effective adversarial sample detection and recognition methods according to the revealed working mechanism. The causal insights enable us to detect and recognize adversarial samples without any extra model or training. Extensive experiments are conducted to demonstrate the effectiveness of the proposed methods. Our methods outperform the state-of-the-art defense methods under various adversarial attacks.
Keywords: adversarial sample; adversarial defense; causal inference; interpretable machine learning; transformers
10. Compressor geometric uncertainty quantification under conditions from near choke to near stall (cited: 1)
Authors: Junying WANG, Baotong WANG, Heli YANG, Zhenzhong SUN, Kai ZHOU, Xinqian ZHENG | Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2023, No. 3, pp. 16-29 (14 pages)
Geometric and working condition uncertainties are inevitable in a compressor, deviating the compressor performance from the design value. It is necessary to explore the influence of geometric uncertainty on performance deviation under different working conditions. In this paper, the influence of geometric uncertainty at near stall, peak efficiency, and near choke conditions under design speed and low speed is investigated. Firstly, manufacturing geometric uncertainties are analyzed. Next, correlation models between geometry and performance under different working conditions are constructed based on a neural network. Then the Shapley additive explanations (SHAP) method is introduced to explain the output of the neural network. Results show that under real manufacturing uncertainty, the efficiency deviation range is small under the near stall and peak efficiency conditions. However, under the near choke conditions, efficiency is highly sensitive to flow capacity changes caused by geometric uncertainty, leading to a significant increase in the efficiency deviation amplitude, up to a magnitude of -3.6%. Moreover, the tip leading-edge radius and tip thickness are the two main factors affecting efficiency deviation. Therefore, to reduce efficiency uncertainty, a compressor should avoid working near the choke condition, and the tolerances of the tip leading-edge radius and tip thickness should be strictly controlled.
Keywords: compressor; geometric uncertainty quantification; interpretable machine learning; multiple conditions; neural network
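SHAP itself requires a fitted model and coalition sampling; as a lighter-weight cousin, one-at-a-time finite-difference sensitivity on a surrogate shows the same kind of per-parameter attribution at an operating point. The efficiency surrogate and its coefficients here are invented, not the paper's trained network:

```python
def local_sensitivity(model, x0, eps=1e-4):
    """One-at-a-time finite-difference sensitivities of a surrogate model
    at an operating point x0: a crude local analogue of SHAP attribution."""
    base = model(x0)
    grads = []
    for i in range(len(x0)):
        x = list(x0)
        x[i] += eps
        grads.append((model(x) - base) / eps)
    return grads

# Hypothetical linear efficiency surrogate: leading-edge radius penalizes
# efficiency three times as strongly as chord helps it (illustrative only).
def efficiency(x):
    radius, chord = x
    return -3.0 * radius + 1.0 * chord

s = local_sensitivity(efficiency, [0.5, 1.0])
```

Unlike SHAP, this gives only local slopes with no coalition averaging, but the ranking of parameters it produces is read the same way.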