Funding: We would like to thank Marco di Michiel (ID15A, ESRF) and Jakub Drnec (ID31, ESRF) for preparing the beamline instrumentation and setup and for their help with the experimental XRD-CT data acquisition. We acknowledge the ESRF for beamtime. Finden acknowledges funding through the Innovate UK Analysis for Innovators (A4i) program (Project No: 106107). A.M.B. acknowledges EPSRC (grants EP/R026815/1 and EP/S016481/1).
Abstract: We present the Parameter Quantification Network (PQ-Net), a regression deep convolutional neural network that provides quantitative analysis of powder X-ray diffraction patterns from multi-phase systems. The network is tested against simulated and experimental datasets of increasing complexity, the last being an X-ray diffraction computed tomography dataset of a multi-phase Ni-Pd/CeO2-ZrO2/Al2O3 catalytic material system consisting of ca. 20,000 diffraction patterns. The network is shown to predict accurate scale factor, lattice parameter, and crystallite size maps for all phases, comparable to those obtained through full-profile analysis using the Rietveld method, while also providing a reliable uncertainty measure on the results. The main advantage of PQ-Net is its ability to yield these results orders of magnitude faster, showing its potential as a tool for real-time diffraction data analysis during in situ/operando experiments.
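The abstract above does not specify the network architecture. As a rough, hypothetical illustration of the kind of model described (a 1D convolutional regressor mapping a powder diffraction pattern to per-phase scale factors, lattice parameters, and crystallite sizes), a minimal PyTorch sketch could look as follows; the layer sizes, number of phases, and three-outputs-per-phase head are illustrative assumptions, not the published PQ-Net design.

```python
# Hypothetical sketch of a CNN regressor in the spirit of PQ-Net (not the published architecture).
# Input: a 1D powder diffraction pattern; output: scale factor, lattice parameter, and
# crystallite size for each phase. All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DiffractionRegressor(nn.Module):
    def __init__(self, n_points: int = 2048, n_phases: int = 4, params_per_phase: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(8),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8, 256), nn.ReLU(),
            nn.Linear(256, n_phases * params_per_phase),
        )
        self.n_phases = n_phases
        self.params_per_phase = params_per_phase

    def forward(self, pattern: torch.Tensor) -> torch.Tensor:
        # pattern: (batch, n_points) -> predictions: (batch, n_phases, params_per_phase)
        x = self.features(pattern.unsqueeze(1))
        return self.head(x).view(-1, self.n_phases, self.params_per_phase)

model = DiffractionRegressor()
fake_patterns = torch.rand(8, 2048)   # 8 placeholder diffraction patterns
predictions = model(fake_patterns)    # shape: (8, 4, 3)
```

An uncertainty measure of the kind mentioned in the abstract could in principle be added to such a sketch by predicting a variance per parameter or by Monte Carlo dropout; how PQ-Net obtains its uncertainties is not detailed here.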
Funding: K.T.B. acknowledges the support of STFC and UKRI. P.C. is funded by the Singapore Ministry of Education Academic Fund Tier 1 (R-284-000-186-133).
Abstract: Materials for energy-related applications, which are crucial for a sustainable energy economy, rely on combining materials that form complex heterogeneous interfaces. Simultaneously, progress in computational materials science in describing complex interfaces is critical for improving the understanding and performance of energy materials. Hence, we present an in-depth review of the physical quantities regulating interfaces in batteries, photovoltaics, and photocatalysts that are accessible from modern electronic structure methods, with a focus on density functional theory calculations. For each energy application, we highlight unique approaches that have been developed to calculate interfacial properties and explore the possibility of applying some of these approaches across disciplines, leading to a unified overview of interface design. Finally, we identify a set of challenges for further improving the theoretical description of interfaces in energy devices.
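As a concrete example of one interfacial quantity of the kind reviewed above, the valence band offset between two materials is commonly estimated from bulk band-edge positions combined with a potential-alignment term taken from an explicit interface calculation. The sketch below encodes only that arithmetic; the function name and the numerical inputs are placeholders for illustration, not values or methods from the review.

```python
# Illustrative arithmetic for a valence band offset (VBO) from DFT outputs.
# e_vbm_*: bulk valence-band maxima referenced to the average electrostatic potential
# of each bulk; delta_v: potential lineup across an explicit interface supercell.
# All numbers below are placeholders.

def valence_band_offset(e_vbm_a: float, e_vbm_b: float, delta_v: float) -> float:
    """VBO = (E_vbm_A - E_vbm_B) + delta_V, in eV."""
    return (e_vbm_a - e_vbm_b) + delta_v

# Placeholder inputs (eV), assumed for illustration only
vbo = valence_band_offset(e_vbm_a=5.10, e_vbm_b=4.35, delta_v=-0.42)
print(f"Estimated valence band offset: {vbo:.2f} eV")
```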
Abstract: Machine learning has become a common and powerful tool in materials research. As more data become available through high-performance computing and high-throughput experimentation, machine learning has shown its potential to accelerate scientific research and technology development. Although the uptake of data-driven approaches in materials science is at an exciting, early stage, to realize the true potential of machine learning models for successful scientific discovery they must have qualities beyond purely predictive power. The predictions and inner workings of models should be explainable to human experts to a certain degree, permitting the identification of potential model issues or limitations, building trust in model predictions, and unveiling unexpected correlations that may lead to scientific insights. In this work, we summarize applications of interpretability and explainability techniques for materials science and chemistry and discuss how these techniques can improve the outcome of scientific studies. We start by defining the fundamental concepts of interpretability and explainability in machine learning and make them less abstract by providing examples from the field. We show how interpretability in scientific machine learning has additional constraints compared to general applications. Building upon formal definitions in machine learning, we formulate the basic trade-offs among the explainability, completeness, and scientific validity of model explanations in scientific problems. In the context of these trade-offs, we discuss how interpretable models can be constructed, what insights they provide, and what drawbacks they have. We present numerous examples of the application of interpretable machine learning in a variety of experimental and simulation studies, encompassing first-principles calculations, physicochemical characterization, materials development, and integration into complex systems. We discuss the varied impacts and uses of interpretability in these cases according to the nature and constraints of the scientific study of interest. We then discuss various challenges for interpretable machine learning in materials science and, more broadly, in scientific settings. In particular, we emphasize the risks of inferring causation or reaching generalization by purely interpreting machine learning models and the need for uncertainty estimates on model explanations. Finally, we showcase a number of exciting developments in other fields that could benefit interpretability in materials science problems. Adding interpretability to a machine learning model often requires no more technical know-how than building the model itself. By providing concrete examples of studies (many with associated open-source code and data), we hope that this Account will encourage all practitioners of machine learning in materials science to look deeper into their models.
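As one concrete instance of the post-hoc explanation techniques discussed in such work, permutation feature importance measures how much a trained model's test performance degrades when a single input feature is randomly shuffled. The sketch below applies it to a synthetic composition-style regression problem with scikit-learn; the feature names, target, and data are invented for illustration and do not come from the studies summarized above.

```python
# Hypothetical illustration of a post-hoc explanation: permutation feature importance
# for a materials property regressor. Features and targets are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["mean_electronegativity", "mean_atomic_radius", "valence_electron_count"]
X = rng.random((500, len(feature_names)))
# Synthetic "property" depending mostly on the first and third features
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.standard_normal(500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test-set performance
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```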
Abstract: The use of machine learning is becoming increasingly common in computational materials science. To build effective models of the chemistry of materials, useful machine-based representations of atoms and their compounds are required. We derive distributed representations of compounds from their chemical formulas only, via pooling operations over distributed representations of atoms. These compound representations are evaluated on ten different tasks, such as the prediction of formation energy and band gap, and are found to be competitive with existing benchmarks that make use of structure, and even superior in cases where only composition is available. Finally, we introduce an approach for learning distributed representations of atoms, named SkipAtom, which makes use of the growing information in materials structure databases.
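The pooling step described above (building a compound vector from atom vectors using only the chemical formula) can be illustrated with a short sketch. The random atom vectors, the simple formula parser, and the choice of stoichiometry-weighted mean pooling are assumptions for illustration; SkipAtom's learned embeddings and training procedure are not reproduced here.

```python
# Illustrative stoichiometry-weighted mean pooling of atom vectors into a compound vector.
# The atom embeddings below are random placeholders, not learned SkipAtom vectors.
import re
import numpy as np

DIM = 8
rng = np.random.default_rng(42)
atom_vectors = {el: rng.standard_normal(DIM) for el in ["Ce", "Zr", "O", "Al", "Ni", "Pd"]}

def parse_formula(formula: str) -> dict:
    """Very simple parser for formulas like 'CeO2' or 'Al2O3' (no parentheses)."""
    counts = {}
    for element, amount in re.findall(r"([A-Z][a-z]?)(\d*\.?\d*)", formula):
        counts[element] = counts.get(element, 0.0) + (float(amount) if amount else 1.0)
    return counts

def compound_vector(formula: str) -> np.ndarray:
    """Pool atom vectors into a compound vector, weighted by stoichiometric fraction."""
    counts = parse_formula(formula)
    total = sum(counts.values())
    return sum((n / total) * atom_vectors[el] for el, n in counts.items())

print(compound_vector("CeO2").shape)   # (8,)
print(compound_vector("Al2O3")[:3])    # first three components of the pooled vector
```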