Author

Chunyan Li

Date of Award

Fall 2023

Document Type

Open Access Dissertation

Department

Mathematics

First Advisor

Qi Wang

Abstract

Deep learning has achieved remarkable success in fields such as image processing, natural language processing, and signal processing, transforming both research and real-life applications. At the heart of this success is an innovative approach to function approximation: where classical approximation theory relied on additive constructions, deep neural networks build approximations compositionally. This thesis explores the efficacy of this compositional construction in two settings: materials science, and the solution of thermodynamically consistent partial differential equations, equipped with thermodynamically consistent dynamic boundary conditions, in arbitrary domains.

As a prominent application of deep learning in materials science, our investigation focuses on the chemical stability of substituted cobaltocenium (CoCp$_2^+$), a crucial component of the anion exchange membranes used in fuel cells. Employing deep learning techniques to deepen our understanding of electronic structure, this study examines the behavior of CoCp$_2^+$OH$^-$ complexes in an aqueous environment. Using a comprehensive dataset spanning diverse substituent groups, our analysis reveals robust correlations between stability, quantified by the bond dissociation energy (BDE), and 12 chemistry-informed descriptors associated with the relevant fragments. The BDE, a key indicator of complex stability, correlates strongly with the energies of the highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO), modulated through a switching function of the Hirshfeld charge that captures the electron-donating or electron-withdrawing nature of the substituents.

This insight lays the foundation for two deep neural network models: the chemistry-informed neural network (CINN), which incorporates domain-specific knowledge into the network structure, and the quadratic neural network (QNN), whose enhanced expressive capability achieves better accuracy with far fewer parameters (a minimal sketch of a quadratic layer is given below). Both models outperform conventional regression and support vector machine models in predicting bond dissociation energies, highlighting their ability to capture intricate relationships within the data. Integrating domain knowledge into the network architecture proves especially beneficial for relatively small datasets, yielding predictive models that surpass traditional machine learning techniques in accuracy and effectiveness.

Additionally, we present a novel sampling strategy, the Energy Dissipation Rate-based Adaptive Refinement (EDRR) method, designed to improve the effectiveness of physics-informed neural networks (PINNs) for phase field models. The core idea of EDRR is to use the energy dissipation rate density as an indicator for resampling collocation points; the selected points are added to the training set and the model is retrained (see the sketch below). Using the Allen-Cahn equation as a benchmark, we compare EDRR with the residual-based adaptive refinement (RAR) method and find that EDRR outperforms RAR in both accuracy and computational efficiency, as measured by the relative mean squared error, relative mean absolute error, and relative $L^{\infty}$ error.
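To make the quadratic-network idea concrete, below is a minimal PyTorch sketch of a quadratic neuron layer. It assumes one common parameterization from the quadratic-network literature, $y = \sigma\big((W_a x + b_a) \odot (W_b x + b_b) + W_c (x \odot x) + b_c\big)$; the dissertation's exact architecture may differ, and the layer widths are illustrative.

```python
import torch
import torch.nn as nn

class QuadraticLayer(nn.Module):
    """One layer of quadratic neurons: the elementwise product of two
    affine maps plus an affine map applied to the squared inputs."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.lin_a = nn.Linear(in_features, out_features)
        self.lin_b = nn.Linear(in_features, out_features)
        self.lin_c = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Quadratic interactions come from (W_a x + b_a) * (W_b x + b_b);
        # the x*x term adds pure squares of individual features.
        return self.lin_a(x) * self.lin_b(x) + self.lin_c(x * x)

# Illustrative regressor: 12 chemistry-informed descriptors -> predicted BDE.
model = nn.Sequential(
    QuadraticLayer(12, 32), nn.Tanh(),
    QuadraticLayer(32, 1),
)
bde_pred = model(torch.randn(8, 12))  # batch of 8 descriptor vectors
```

Because each neuron is already quadratic in its inputs, a single layer captures pairwise feature interactions that a first-order network would need extra width or depth to represent, consistent with the QNN's parameter efficiency.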
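The EDRR resampling step can be sketched similarly. For the Allen-Cahn equation $\phi_t = -M\big(f'(\phi) - \varepsilon^2 \Delta \phi\big)$, the free energy dissipates at the pointwise rate $|\phi_t|^2/M$, so the time derivative of the network output can serve as the refinement indicator. The toy network, candidate pool, and top-$k$ selection below are illustrative assumptions rather than the dissertation's implementation.

```python
import torch
import torch.nn as nn

# Toy PINN surrogate: inputs (t, x), output phi. Purely illustrative.
pinn = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

def edrr_resample(candidates: torch.Tensor, mobility: float, k: int) -> torch.Tensor:
    """Rank candidate collocation points by the energy dissipation rate
    density (1/M)|phi_t|^2 and return the k points where the free energy
    is decaying fastest; these are appended to the training set."""
    pts = candidates.clone().requires_grad_(True)
    phi = pinn(pts)
    grads = torch.autograd.grad(phi.sum(), pts)[0]
    phi_t = grads[:, 0]                   # time derivative, column 0
    dissipation = phi_t**2 / mobility     # pointwise dissipation rate density
    idx = torch.topk(dissipation, k).indices
    return candidates[idx].detach()

# Example: draw a large candidate pool and keep the 100 most active points.
pool = torch.rand(10_000, 2)              # (t, x) in [0, 1]^2
new_points = edrr_resample(pool, mobility=1.0, k=100)
```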
Dynamic boundary conditions become important when the surface and bulk dynamics occur on comparable scales, or when the surface scales dominate those of the bulk. Using the generalized Onsager principle, we derive thermodynamically consistent phase field models equipped with thermodynamically consistent dynamic boundary conditions. To solve these models, we develop a numerical tool, dynamic physics-informed neural networks, which incorporates the residual of the dynamic boundary conditions into the loss function and can be combined with the EDRR sampling strategy to improve performance. Automatic differentiation lets us evaluate the surface differential operators associated with the dynamic boundary conditions easily and precisely, making the method applicable to arbitrary smooth geometries.

The efficacy of the methodology is demonstrated through extensive numerical experiments. These experiments contrast dynamic with static boundary conditions and examine the impact of dynamic boundary conditions on bulk dynamics, drawing insights from grain coarsening simulations of the Allen-Cahn and Cahn-Hilliard models conducted in varied geometries, including circular and oval disks.
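As a minimal sketch of the surface-operator evaluation just described: on a boundary with unit outward normal $\mathbf{n}$, the surface gradient is the tangential projection $\nabla_\Gamma \phi = (I - \mathbf{n}\mathbf{n}^{\mathsf{T}})\nabla\phi$, and automatic differentiation supplies $\nabla\phi$ exactly. The toy network and unit-disk boundary below are assumptions for illustration; any smooth geometry with known normals is handled the same way.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))  # inputs (t, x, y)

def surface_gradient(pts: torch.Tensor, normals: torch.Tensor) -> torch.Tensor:
    """Tangential gradient grad_G(phi) = (I - n n^T) grad(phi) at boundary
    points pts = (t, x, y) with unit outward normals (n_x, n_y)."""
    pts = pts.clone().requires_grad_(True)
    phi = net(pts)
    grad = torch.autograd.grad(phi.sum(), pts, create_graph=True)[0]
    grad_x = grad[:, 1:]  # spatial components; column 0 is time
    normal_part = (grad_x * normals).sum(dim=1, keepdim=True) * normals
    return grad_x - normal_part  # projection removes the normal component

# Boundary collocation points on the unit circle, where the outward
# normal coincides with the position vector itself.
theta = torch.linspace(0.0, 2.0 * torch.pi, 64)
xy = torch.stack([torch.cos(theta), torch.sin(theta)], dim=1)
pts = torch.cat([torch.zeros(64, 1), xy], dim=1)  # snapshot at t = 0
grad_gamma = surface_gradient(pts, normals=xy)
# grad_gamma enters the dynamic-boundary-condition residual in the loss.
```

Higher-order surface operators, such as the surface Laplacian appearing in the dynamic boundary conditions, can be obtained by applying automatic differentiation again to the projected gradient, which is why `create_graph=True` is retained.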

Rights

© 2024, Chunyan Li
