[2022/12/16] Convergence analysis of unsupervised Legendre-Galerkin neural networks for linear second-order elliptic PDEs
Time : Dec. 16 (Fri), 2:00 pm - 3:00 pm
Place : 31351 SKKU, Suwon
Speaker : Seungchan Ko (BRL)
In this talk, I will discuss the convergence analysis of unsupervised Legendre-Galerkin neural networks (ULGNet), a deep-learning-based numerical method for solving partial differential equations (PDEs). Unlike existing deep-learning-based methods for PDEs, ULGNet expresses the solution as a spectral expansion in the Legendre basis and predicts the expansion coefficients with deep neural networks by solving a variational residual minimization problem. Exploiting the fact that the corresponding loss function is equivalent to the residual of the linear algebraic system induced by the choice of basis functions, we prove that the minimizer of the discrete loss function converges to the weak solution of the PDE. Numerical evidence supporting the theoretical result will also be presented. Key technical tools include a variant of the universal approximation theorem for bounded neural networks, an analysis of the stiffness and mass matrices, and a uniform law of large numbers stated in terms of Rademacher complexity.
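To make the key observation concrete, here is a minimal numpy sketch (not the authors' code) of the idea that minimizing the variational residual loss is equivalent to solving the linear algebraic system of the Legendre-Galerkin method. The model problem, the Shen-type basis phi_k = L_k - L_{k+2}, and the use of plain gradient descent on the coefficient vector (standing in for a neural network's output) are all illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Toy model problem (assumption): -u''(x) = f(x) on (-1, 1), u(-1) = u(1) = 0,
# with f chosen so that the exact solution is u(x) = sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)

N = 8  # number of basis functions

# Shen-type Legendre basis phi_k = L_k - L_{k+2}, which vanishes at x = +-1,
# expressed via Legendre-series coefficient vectors.
def phi_coeffs(k):
    c = np.zeros(k + 3)
    c[k] = 1.0
    c[k + 2] = -1.0
    return c

# Gauss-Legendre quadrature for assembling the Galerkin system exactly.
x, w = leg.leggauss(64)
dphi = np.array([leg.legval(x, leg.legder(phi_coeffs(k))) for k in range(N)])
phiv = np.array([leg.legval(x, phi_coeffs(k)) for k in range(N)])

# Stiffness matrix A_jk = int phi_j' phi_k' dx and load vector b_j = int f phi_j dx.
A = dphi * w @ dphi.T
b = phiv * w @ f(x)

# ULGNet-style step (schematic): rather than solving A c = b directly, treat the
# coefficient vector c as the output of a trainable model and minimize the
# residual loss ||A c - b||^2 -- here by gradient descent on c itself.
c = np.zeros(N)
lr = 5e-4
for _ in range(5000):
    c -= lr * 2 * A.T @ (A @ c - b)

# The minimizer of the residual loss recovers the Galerkin solution.
c_direct = np.linalg.solve(A, b)
print(np.allclose(c, c_direct, atol=1e-4))
```

In the actual method the coefficients come from a deep neural network evaluated on problem data, but the loss being minimized is the same algebraic residual, which is what drives the convergence proof.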