Scientific and technical journal
«Automation and Informatization of the fuel and energy complex»
ISSN 0132-2222
Constructing a finite element approximation for the inverse problem solution using physics-informed neural networks
UDC: 517.9+519.6+004.032.26
DOI: -
Authors:
1 National University of Oil and Gas "Gubkin University", Moscow, Russia
Keywords: physics-informed neural networks (PINN), inverse problems, finite element approximations, domain decomposition (DD), deep learning (DL), full waveform inversion (FWI), wave equation, finite difference method, convolutional neural networks (CNN)
Annotation:
The authors propose methods and algorithms for constructing a finite element approximation of the solution to an inverse coefficient problem using physics-informed neural networks. The key feature of the proposed approach is the independent prediction, by a trained neural network, of a local approximation of the sought parameter by piecewise-defined functions in each part of the decomposed domain. Averaging the coefficients of basis functions sharing a common vertex at subdomain boundaries yields a continuous global coefficient and compensates for errors within the region. The encoder is a multilayer convolutional architecture designed by analogy with EfficientNetV2. The capabilities of the proposed methods and algorithms are demonstrated on a synthetic dataset of wave propagation described by an inhomogeneous two-dimensional wave equation with a variable speed coefficient. The synthetic solutions were obtained by the finite difference method. The advantages of applying the described approach to the full waveform inversion (FWI) problem are noted.
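The averaging step described above can be illustrated with a minimal sketch. Assuming piecewise-linear ("hat") basis functions on a one-dimensional mesh, where adjacent subdomains share one boundary vertex, the hypothetical helper `merge_subdomain_coefficients` (a name introduced here for illustration, not from the article) averages the coefficients of the basis functions at each shared vertex to produce a single continuous global coefficient vector:

```python
import numpy as np

def merge_subdomain_coefficients(local_coeffs, shared=1):
    """Merge per-subdomain coefficient vectors into one global vector,
    averaging the `shared` coefficients of basis functions whose vertices
    lie on a subdomain boundary (the last node of one subdomain coincides
    with the first node of the next)."""
    merged = list(local_coeffs[0])
    for nxt in local_coeffs[1:]:
        # average the overlapping boundary coefficients
        for k in range(shared):
            merged[-shared + k] = 0.5 * (merged[-shared + k] + nxt[k])
        # append the remaining interior coefficients of the next subdomain
        merged.extend(nxt[shared:])
    return np.array(merged)

# Example: two subdomain predictions disagree slightly at the interface
c1 = np.array([1.0, 1.2, 1.4])   # subdomain 1, boundary node last
c2 = np.array([1.6, 1.5, 1.3])   # subdomain 2, boundary node first
c_global = merge_subdomain_coefficients([c1, c2])
print(c_global)  # boundary coefficient becomes (1.4 + 1.6) / 2 = 1.5
```

Because the two subdomains assign the same (averaged) coefficient to the shared basis function, the resulting global piecewise-linear coefficient field is continuous across the interface by construction.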
Bibliography:
1. Tikhonov A.N. O nekorrektnykh zadachakh lineynoy algebry i ustoychivom metode ikh resheniya // Dokl. AN SSSR. – 1965. – T. 163, № 3. – S. 591–594.
2. An introduction to full waveform inversion / J. Virieux, A. Asnaashari, R. Brossier [et al.] // Encyclopedia of exploration geophysics. – 2017. – P. R1-1–R1-40. – DOI: 10.1190/1.9781560803027.entry6
3. Physics-informed machine learning / G.E. Karniadakis, I.G. Kevrekidis, Lu Lu [et al.] // Nature Reviews Physics. – 2021. – Vol. 3. – P. 422–440. – DOI: 10.1038/s42254-021-00314-5
4. Scientific Machine Learning Through Physics–Informed Neural Networks: Where we are and What's Next / S. Cuomo, V.S. Di Cola, F. Giampaolo [et al.] // J. of Scientific Computing. – 2022. – Vol. 92. – Article No. 88. – DOI: 10.1007/s10915-022-01939-z
5. InversionNet3D: Efficient and Scalable Learning for 3-D Full-Waveform Inversion / Zeng Qili, Feng Shihang, B. Wohlberg, Lin Youzuo // IEEE Transactions on Geoscience and Remote Sensing. – 2022. – Vol. 60. – P. 1–16. – DOI: 10.1109/TGRS.2021.3135354
6. Solving inverse-PDE problems with physics-aware neural networks / S. Pakravan, P.A. Mistani, M.A. Aragon-Calvo, F. Gibou // J. of Computational Physics. – 2021. – Vol. 440. – DOI: 10.1016/j.jcp.2021.110414
7. Lu P.Y., Kim S., Soljačić M. Extracting Interpretable Physical Parameters from Spatiotemporal Systems Using Unsupervised Learning // Physical Review X. – 2020. – Vol. 10, Issue 3. – P. 031056. – DOI: 10.1103/PhysRevX.10.031056
8. Moseley B., Markham A., Nissen-Meyer T. Finite basis physics-informed neural networks (FBPINNs): a scalable domain decomposition approach for solving differential equations // Advances in Computational Mathematics. – 2023. – Vol. 49. – Article No. 62. – DOI: 10.1007/s10444-023-10065-9
9. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators / Lu Lu, Jin Pengzhan, Pang Guofei [et al.] // Nature Machine Intelligence. – 2021. – Vol. 3. – P. 218–229. – DOI: 10.1038/s42256-021-00302-5
10. An Expert's Guide to Training Physics-informed Neural Networks / Wang Sifan, S. Sankaran, Wang Hanwen, P. Perdikaris. – 2023. – DOI: 10.48550/arXiv.2308.08468
11. Maurício J., Domingues I., Bernardino J. Comparing Vision Transformers and Convolutional Neural Networks for Image Classification: A Literature Review // Applied Sciences. – 2023. – Vol. 13, Issue 9. – P. 5521. – DOI: 10.3390/app13095521
12. Tan Mingxing, Le Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks // ICML. – 2019. – DOI: 10.48550/arXiv.1905.11946
13. Tan Mingxing, Le Q.V. EfficientNetV2: Smaller Models and Faster Training // ICML. – 2021. – DOI: 10.48550/arXiv.2104.00298
14. Dropout: a simple way to prevent neural networks from overfitting / N. Srivastava, G. Hinton, A. Krizhevsky [et al.] // J. of Machine Learning Research. – 2014. – Vol. 15, Issue 1. – P. 1929–1958. – DOI: 10.5555/2627435.2670313
15. Kingma D.P., Ba J. Adam: A Method for Stochastic Optimization // ICLR. – 2015. – DOI: 10.48550/arXiv.1412.6980
16. Arsen'ev-Obraztsov S.S., Plyushch G.O. Primenenie metodov glubokogo obucheniya v aktual'nykh zadachakh obrabotki mikroKT obraztsov kerna. Reshenie obratnoy zadachi, interpolyatsiya razrezhennykh sinogramm, fil'tratsiya izobrazheniy srezov // Avtomatizatsiya i informatizatsiya TEK. – 2023. – № 10(603). – S. 48–58. – DOI: 10.33285/2782-604X-2023-10(603)-48-58