Funding: Supported by the Centre for Advanced Modelling and Geospatial Information Systems, Faculty of Engineering and Information Technology, University of Technology Sydney, and by the IRTP scholarship funded by the Department of Education and Training, Govt. of Australia.
Abstract: The application of Artificial Intelligence in various fields has witnessed tremendous progress in recent years. The field of geosciences and natural hazard modelling has also benefitted immensely from the introduction of novel algorithms, the availability of large quantities of data, and the increase in computational capacity. These algorithmic advances are largely attributable to the increased complexity of network architectures and the higher level of abstraction in a network's later layers. As a result, AI models lack transparency and accountability and are often dubbed "black box" models. Explainable AI (XAI) is emerging as a solution to make AI models more transparent, especially in domains where transparency is essential. Much discussion surrounds the use of XAI for diverse purposes, as researchers explore its applications across various domains. With the growing body of research papers on XAI case studies, it has become increasingly important to address existing gaps in the literature. The current literature lacks a comprehensive understanding of the capabilities, limitations, and practical implications of XAI. This study provides a comprehensive overview of what constitutes XAI, how it is being used, and its potential applications in hydrometeorological natural hazards. It aims to serve as a useful reference for researchers, practitioners, and stakeholders who are currently using or intending to adopt XAI, thereby contributing to its wider acceptance in the future.
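For readers unfamiliar with what "making a black-box model more transparent" looks like in practice, the sketch below is a minimal illustration (not taken from the paper) of one common post-hoc XAI method, SHAP, applied to a tree-ensemble regressor trained on synthetic, hydrometeorology-style predictors. The feature names, data, and model choice are assumptions made purely for illustration.

```python
# Minimal illustrative sketch (not from the paper): a post-hoc XAI method (SHAP)
# applied to a black-box regressor trained on synthetic, hydrometeorology-style data.
# Feature names, data, and model choice are illustrative assumptions only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500

# Hypothetical predictors: rainfall, antecedent soil moisture, upstream discharge.
X = rng.random((n, 3))
# Synthetic target dominated by rainfall, to give the explainer a clear signal.
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, n)

# Train an opaque ("black box") ensemble model.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP attributes each individual prediction to the input features,
# turning the opaque model's output into per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

feature_names = ["rainfall", "soil_moisture", "upstream_discharge"]
for row in shap_values:
    print({name: round(float(v), 3) for name, v in zip(feature_names, row)})
```

In this toy setting the rainfall feature should receive the largest attributions, which mirrors how an explanation layer lets practitioners check whether a hazard model's predictions rest on physically plausible drivers.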