[28] Rudd, K. A constrained integration (CINT) approach to solving partial differential equations using artificial neural networks / K. Rudd, S. Ferrari // Neurocomputing. — 2015. — Vol. 155. — Pp. 277–285.
[29] Shirvany, Y. Multilayer perceptron neural networks with novel unsupervised training method for numerical solution of the partial differential equations / Y. Shirvany, M. Hayati, R. Moradian // Applied Soft Computing. — 2009. — Vol. 9, no. 1. — Pp. 20–29.
[30] Sirignano, J. DGM: A deep learning algorithm for solving partial differential equations / J. Sirignano, K. Spiliopoulos // Journal of Computational Physics. — 2018. — Vol. 375. — Pp. 1339–1364.
[31] Tsoulos, I.G. Solving differential equations with constructed neural networks / I.G. Tsoulos, D. Gavrilis, E. Glavas // Neurocomputing. — 2009. — Vol. 72, no. 10-12. — Pp. 2385–2391.
[32] Горбаченко, В.И. Решение краевых задач математической физики с помощью сетей радиальных базисных функций / В.И. Горбаченко, М.В. Жуков // Журнал вычислительной математики и математической физики. — 2017. — Vol. 57, no. 1. — Pp. 133–143.
[33] Горбаченко, В.И. Подходы и методы обучения сетей радиальных базисных функций для решения задач математической физики / В.И. Горбаченко, М.В. Жуков // Нейрокомпьютеры: разработка, применение. — 2013. — no. 9. — Pp. 12–18.
[34] Васильев, А.Н. Нейросетевые подходы к решению краевых задач в многомерных составных областях / А.Н. Васильев, Д.А. Тархов // Известия Южного федерального университета. Технические науки. — 2004. — Vol. 44, no. 9.
[35] Васильев, А.Н. Эволюционные алгоритмы решения краевых задач в областях, допускающих декомпозицию / А.Н. Васильев, Д.А. Тархов // Математическое моделирование. — 2007. — Vol. 19, no. 12. — Pp. 52–62.
[36] Васильев, А.Н. Параметрические нейросетевые модели классических и неклассических задач для уравнения теплопроводности / А.Н. Васильев, Д.А. Тархов // Научно-технические ведомости Санкт-Петербургского государственного политехнического университета. Физико-математические науки. — 2012. — no. 3 (153).
[37] Нейросетевой подход к решению сложных задач для обыкновенных дифференциальных уравнений / А.Н. Васильев, Т.В. Лазовская, Д.А. Тархов, Т.А. Шемякина // XVIII международная научно-техническая конференция Нейроинформатика-2016. — 2016. — Pp. 52–60.
[38] Smith, G.D. Numerical solution of partial differential equations: finite difference methods / G.D. Smith. — Oxford University Press, 1985.
[39] Taflove, A. Computational electrodynamics: the finite-difference time-domain method / A. Taflove, S.C. Hagness. — Artech House, 2005.
[40] Eymard, R. The finite volume method for Richards equation / R. Eymard, M. Gutnic, D. Hilhorst // Computational Geosciences. — 1999. — Vol. 3, no. 3. — Pp. 259–294.
[41] Godunov, S.K. A difference method for numerical calculation of discontinuous solutions of the equations of hydrodynamics / S.K. Godunov // Matematicheskii Sbornik. — 1959. — Vol. 89, no. 3. — Pp. 271–306.
[42] Kolgan, V.P. Application of the minimum-derivative principle in the construction of finite-difference schemes for numerical analysis of discontinuous solutions in gas dynamics / V.P. Kolgan // Uchenye Zapiski TsAGI [Sci. Notes Central Inst. Aerodyn.]. — 1972. — Vol. 3, no. 6. — Pp. 68–77.
[43] Application of a meshless method in electromagnetics / S.L. Ho, S. Yang, J.M. Machado, H.C. Wong // IEEE Transactions on Magnetics. — 2001. — Vol. 37, no. 5. — Pp. 3198–3202.
[44] The meshless finite element method / S.R. Idelsohn, E. Onate, N. Calvo, F. Del Pin // International Journal for Numerical Methods in Engineering. — 2003. — Vol. 58, no. 6. — Pp. 893–912.
[45] Katz, A.J. Meshless methods for computational fluid dynamics / A.J. Katz. — Stanford University, Stanford, CA, 2009.
[46] Artificial neural network methods for the solution of second order boundary value problems / C. Anitescu, E. Atroshchenko, N. Alajlan, T. Rabczuk // Computers, Materials & Continua. — 2019. — Vol. 59, no. 1. — Pp. 345–359.
[47] Ponulak, F. Introduction to spiking neural networks: Information processing, learning and applications / F. Ponulak, A. Kasinski // Acta Neurobiologiae Experimentalis. — 2011. — Vol. 71, no. 4. — Pp. 409–433.
[48] Analog circuits for modeling biological neural networks: design and applications / S. Le Masson, A. Laflaquiere, T. Bal, G. Le Masson // IEEE Transactions on Biomedical Engineering. — 1999. — Vol. 46, no. 6. — Pp. 638–645.
[49] Amit, D.J. Modeling brain function: The world of attractor neural networks / D.J. Amit. — Cambridge University Press, 1992.
[50] Prostov, Y.S. Match-Mismatch Detection Neural Circuit Based on Multistable Neurons / Y.S. Prostov, Y.V. Tiumentsev // International Conference on Neuroinformatics / Springer. — 2018. — Pp. 84–90.
[51] Prostov, Y.S. Functional plasticity in a recurrent neurodynamic model: from gradual to trigger behavior / Y.S. Prostov, Y.V. Tiumentsev // Procedia Computer Science. — 2018. — Vol. 123. — Pp. 366–372.
[52] Dayan, P. Theoretical neuroscience: computational and mathematical modeling of neural systems / P. Dayan, L.F. Abbott. — 2001.
[53] Cowan, J.D. Discussion: McCulloch-Pitts and related neural nets from 1943 to 1989 / J.D. Cowan // Bulletin of Mathematical Biology. — 1990. — Vol. 52, no. 1-2. — Pp. 73–97.
[54] Maass, W. Networks of spiking neurons: the third generation of neural network models / W. Maass // Neural Networks. — 1997. — Vol. 10, no. 9. — Pp. 1659–1671.
[55] Kolmogorov, A.N. On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition / A.N. Kolmogorov // Doklady Akademii Nauk / Russian Academy of Sciences. — Vol. 114. — 1957. — Pp. 953–956.
[56] Hornik, K. Multilayer feedforward networks are universal approximators / K. Hornik, M. Stinchcombe, H. White // Neural Networks. — 1989. — Vol. 2, no. 5. — Pp. 359–366.
[57] Kurkova, V. Kolmogorov's theorem and multilayer neural networks / V. Kurkova // Neural Networks. — 1992. — Vol. 5, no. 3. — Pp. 501–506.
[58] Simonyan, K. Very deep convolutional networks for large-scale image recognition / K. Simonyan, A. Zisserman // arXiv preprint arXiv:1409.1556. — 2014.
[59] Inception-v4, Inception-ResNet and the impact of residual connections on learning / C. Szegedy, S. Ioffe, V. Vanhoucke, A. Alemi // Proceedings of the AAAI Conference on Artificial Intelligence. — Vol. 31. — 2017.
[60] Hornik, K. Approximation capabilities of multilayer feedforward networks / K. Hornik // Neural Networks. — 1991. — Vol. 4, no. 2. — Pp. 251–257.
[61] Rumelhart, D.E. Learning representations by back-propagating errors / D.E. Rumelhart, G.E. Hinton, R.J. Williams // Cognitive Modeling. — 1988. — Vol. 5, no. 3. — P. 1.
[62] Tesauro, G. Asymptotic convergence of backpropagation / G. Tesauro, Y. He, S. Ahmad // Neural Computation. — 1989. — Vol. 1, no. 3. — Pp. 382–391.
[63] Kolen, J.F. Back propagation is sensitive to initial conditions / J.F. Kolen, J.B. Pollack // Advances in Neural Information Processing Systems. — 1991. — Pp. 860–867.
[64] Glorot, X. Understanding the difficulty of training deep feedforward neural networks / X. Glorot, Y. Bengio // Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. — 2010. — Pp. 249–256.
[65] Mishkin, D. All you need is a good init / D. Mishkin, J. Matas // arXiv preprint arXiv:1511.06422. — 2015.
[66] Intriguing properties of neural networks / C. Szegedy, W. Zaremba, I. Sutskever et al. // arXiv preprint arXiv:1312.6199. — 2013.
[67] Nguyen, A. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images / A. Nguyen, J. Yosinski, J. Clune // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. — 2015. — Pp. 427–436.
[68] Møller, M.F. A scaled conjugate gradient algorithm for fast supervised learning / M.F. Møller. — Aarhus University, Computer Science Department, 1990.
[69] Riedmiller, M. A direct adaptive method for faster backpropagation learning: The RPROP algorithm / M. Riedmiller, H. Braun // IEEE International Conference on Neural Networks / IEEE. — 1993. — Pp. 586–591.
[70] Kingma, D.P. Adam: A method for stochastic optimization / D.P. Kingma, J. Ba // arXiv preprint arXiv:1412.6980. — 2014.
[71] Tieleman, T. Divide the gradient by a running average of its recent magnitude. Coursera: Neural networks for machine learning / T. Tieleman, G. Hinton // Technical Report. — 2017.
[72] Гайфуллин, А.М. Автомодельное нестационарное течение вязкой жидкости / А.М. Гайфуллин // Известия Российской академии наук. Механика жидкости и газа. — 2005. — no. 4. — Pp. 29–35.
[73] Gusev, V.N. An exact solution of the one-dimensional Navier-Stokes equations / V.N. Gusev, A.V. Zhbakova // Fluid Dynamics. — 1968. — Vol. 3, no. 3. — Pp. 75–77.
[74] Cai, R. Explicit analytical solutions of 2-D laminar natural convection / R. Cai, N. Zhang // International Journal of Heat and Mass Transfer. — 2003. — Vol. 46, no. 5. — Pp. 931–934.
[75] Li, X. Collisionless periodic orbits in the free-fall three-body problem / X. Li, S. Liao // New Astronomy. — 2019. — Vol. 70. — Pp. 22–26.
[76] Hudomal, A. New periodic solutions to the three-body problem and gravitational waves / A. Hudomal // Master of Science thesis at the Faculty of Physics, Belgrade University. — 2015.
[77] Kerr, R.P. Gravitational field of a spinning mass as an example of algebraically special metrics / R.P. Kerr // Physical Review Letters. — 1963. — Vol. 11, no. 5. — P. 237.
[78] Gaifullin, A.M. Flow past a plate with an upstream-moving surface / A.M. Gaifullin // Fluid Dynamics. — 2006. — Vol. 41, no. 3. — Pp. 375–380.
[79] Gaifullin, A.M. Diffusion of two vortices / A.M. Gaifullin, A.V. Zubtsov // Fluid Dynamics. — 2004. — Vol. 39, no. 1. — Pp. 112–127.
[80] Neiland, V.Ya. Asymptotic solutions of the Navier-Stokes equations in regions with large local perturbations / V.Ya. Neiland, V.V. Sychev // Fluid Dynamics. — 1966. — Vol. 1, no. 4. — Pp. 29–33.
[81] Neiland, V.Ya. Asymptotic problems in the theory of viscous supersonic flows / V.Ya. Neiland // Tr. TsAGI. — 1974. — Vol. 1529.
[82] Mikhailov, V.V. The theory of viscous hypersonic flow / V.V. Mikhailov, V.Ya. Neiland, V.V. Sychev // Annual Review of Fluid Mechanics. — 1971. — Vol. 3, no. 1. — Pp. 371–396.
[83] Kadochnikov, I.N. A modified model of mode approximation for nitrogen plasma based on the state-to-state approach / I.N. Kadochnikov, B.I. Loukhovitski, A.M. Starik // Plasma Sources Science and Technology. — 2015. — Vol. 24, no. 5. — P. 055008.
[84] Kadochnikov, I.N. Modelling of vibrational nonequilibrium effects on the H2–air mixture ignition under shock wave conditions in the state-to-state and mode approximations / I.N. Kadochnikov, I.V. Arsentiev // Shock Waves. — 2020. — Vol. 30, no. 5. — Pp. 491–504.
[85] Ryzhov, A.A. Numerical modeling of the receptivity of a supersonic boundary layer to entropy disturbances / A.A. Ryzhov, V.G. Sudakov // Fluid Dynamics. — 2012. — Vol. 47, no. 3. — Pp. 338–345.
[86] Griewank, A. Evaluating derivatives: principles and techniques of algorithmic differentiation / A. Griewank, A. Walther. — SIAM, 2008.
[87] Lagaris, I.E. Neural-network methods for boundary value problems with irregular boundaries / I.E. Lagaris, A.C. Likas, D.G. Papageorgiou // IEEE Transactions on Neural Networks. — 2000. — Vol. 11, no. 5. — Pp. 1041–1049.
[88] Быстрый метод аэродинамического расчета для задач проектирования / А.В. Бернштейн, В.В. Вышинский, А.П. Кулешов, Ю.Н. Свириденко // Труды ЦАГИ. — 2008. — no. 2678. — P. 35.
[89] Бернштейн, А.В. Построение адаптивных суррогатных моделей сложных объектов на основе анализа данных. — 2009.
[90] On solving some data analysis problems encountered in the construction of adaptive surrogate models of complex objects / A. Bernstein, E. Burnaev, S. Chernova et al. // Artificial Intelligence. — 2008. — Vol. 4. — Pp. 40–48.
[91] Применение искусственных нейронных сетей для обработки и анализа данных аэродинамического эксперимента / Е.А. Дорофеев, А.И. Дынников, А.В. Каргопольцев и др. // Ученые записки ЦАГИ. — 2007. — Vol. 38, no. 3-4.
[92] Свириденко, Ю.Н. Применение искусственных нейронных сетей в задачах прикладной аэродинамики / Ю.Н. Свириденко // Авиакосмическое приборостроение. — 2015. — no. 2. — Pp. 3–8.
[93] Vyshinsky, V.V. Fast aerodynamic design technologies / V.V. Vyshinsky, Y.A. Dorofeev, Yu.N. Sviridenko // 27th International Congress of the Aeronautical Sciences. — 2010.
[94] Дорофеев, Е.А. Применение нейросетевых технологий в задачах аэродинамического проектирования и определения характеристик летательных аппаратов / Е.А. Дорофеев, Ю.Н. Свириденко // Модели и методы аэродинамики. — 2002. — Pp. 86–87.
[95] Dissanayake, M.W.M.G. Neural-network-based approximations for solving partial differential equations / M.W.M.G. Dissanayake, N. Phan-Thien // Communications in Numerical Methods in Engineering. — 1994. — Vol. 10, no. 3. — Pp. 195–201.
[96] Meade, A.J. The numerical solution of linear ordinary differential equations by feedforward neural networks / A.J. Meade, A.A. Fernandez // Mathematical and Computer Modelling. — 1994. — Vol. 19, no. 12. — Pp. 1–25.
[97] Lagaris, I.E. Artificial neural network methods in quantum mechanics / I.E. Lagaris, A. Likas, D.I. Fotiadis // Computer Physics Communications. — 1997. — Vol. 104, no. 1-3. — Pp. 1–14.
[98] Flake, G.W. Differentiating Functions of the Jacobian with Respect to the Weights / G.W. Flake, B.A. Pearlmutter // Advances in Neural Information Processing Systems. — 2000. — Pp. 435–441.
[99] Siskind, J.M. Efficient implementation of a higher-order language with built-in AD / J.M. Siskind, B.A. Pearlmutter // arXiv preprint arXiv:1611.03416. — 2016.
[100] Avrutskiy, V.I. Backpropagation generalized for output derivatives / V.I. Avrutskiy // arXiv preprint arXiv:1712.04185. — 2017. — https://arxiv.org/abs/1712.04185.
[101] Avrutskiy, V.I. Enhancing Function Approximation Abilities of Neural Networks by Training Derivatives / V.I. Avrutskiy // IEEE Transactions on Neural Networks and Learning Systems. — 2020. — https://doi.org/10.1109/TNNLS.2020.2979706.
[102] Saint Raymond, X. Elementary introduction to the theory of pseudodifferential operators / X. Saint Raymond. — Routledge, 2018.
[103] Sjöberg, J. Overtraining, regularization and searching for a minimum, with application to neural networks / J. Sjöberg, L. Ljung // International Journal of Control. — 1995. — Vol. 62, no. 6. — Pp. 1391–1407.
[104] Practical training framework for fitting a function and its derivatives / A. Pukrittayakamee, M. Hagan, L. Raff et al. // IEEE Transactions on Neural Networks. — 2011. — Vol. 22, no. 6. — Pp. 936–947.
[105] Generative adversarial nets / I. Goodfellow, J. Pouget-Abadie, M. Mirza et al. // Advances in Neural Information Processing Systems. — 2014. — Pp. 2672–2680.
[106] Karatsiolis, S. Conditional Generative Denoising Autoencoder / S. Karatsiolis, C.N. Schizas // IEEE Transactions on Neural Networks and Learning Systems. — 2019.
[107] Variational autoencoder reconstruction of complex many-body physics / I.A. Luchnikov, A. Ryzhov, P.J. Stas et al. // Entropy. — 2019. — Vol. 21, no. 11. — P. 1091.
[108] Transformation invariance in pattern recognition – tangent distance and tangent propagation / P. Simard, Y. LeCun, J. Denker, B. Victorri // Neural Networks: Tricks of the Trade. — 1998. — Pp. 549–550.
[109] Drucker, H. Improving generalization performance using double backpropagation / H. Drucker, Y. Le Cun // IEEE Transactions on Neural Networks. — 1992. — Vol. 3, no. 6. — Pp. 991–997.
[110] Guzhev, D.S. Burgers equation is a test for numerical methods / D.S. Guzhev, N.N. Kalitkin // Matematicheskoe Modelirovanie. — 1995. — Vol. 7, no. 4. — Pp. 99–127.
[111] Adams, N.A. A high-resolution hybrid compact-ENO scheme for shock-turbulence interaction problems / N.A. Adams, K. Shariff // Journal of Computational Physics. — 1996. — Vol. 127, no. 1. — Pp. 27–51.
[112] Avrutskiy, V.I. Rational solutions of 1+1-dimensional Burgers equation and their asymptotic / V.I. Avrutskiy, V.P. Krainov // arXiv preprint arXiv:1910.05488. — 2019.
[113] Avrutskiy, V.I. Preventing Overfitting by Training Derivatives / V.I. Avrutskiy // Proceedings of the Future Technologies Conference / Springer. — 2019. — Pp. 144–163. — https://doi.org/10.1007/978-3-030-32520-6_12.
[114] Sarle, W.S. Stopped training and other remedies for overfitting / W.S. Sarle // Computing Science and Statistics. — 1996. — Pp. 352–360.
[115] LeCun, Y. Optimal brain damage / Y. LeCun, J.S. Denker, S.A. Solla // Advances in Neural Information Processing Systems. — 1990. — Pp. 598–605.
[116] Hassibi, B. Optimal brain surgeon and general network pruning / B. Hassibi, D.G. Stork, G.J. Wolff // IEEE International Conference on Neural Networks / IEEE. — 1993. — Pp. 293–299.
[117] Improving neural networks by preventing co-adaptation of feature detectors / G.E. Hinton, N. Srivastava, A. Krizhevsky et al. // arXiv preprint arXiv:1207.0580. — 2012.
[118] Dropout: A simple way to prevent neural networks from overfitting / N. Srivastava, G. Hinton, A. Krizhevsky et al. // The Journal of Machine Learning Research. — 2014. — Vol. 15, no. 1. — Pp. 1929–1958.
[119] Zaremba, W. Recurrent neural network regularization / W. Zaremba, I. Sutskever, O. Vinyals // arXiv preprint arXiv:1409.2329. — 2014.
[120] Liu, H. Accelerating preconditioned iterative linear solvers on GPU / H. Liu, Z. Chen, B. Yang // International Journal of Numerical Analysis and Modelling: Series B. — 2014. — Vol. 5, no. 1-2. — Pp. 136–146.
[121] Liu, H. Accelerating algebraic multigrid solvers on NVIDIA GPUs / H. Liu, B. Yang, Z. Chen // Computers & Mathematics with Applications. — 2015. — Vol. 70, no. 5. — Pp. 1162–1181.
[122] Stüben, K. A review of algebraic multigrid / K. Stüben // Numerical Analysis: Historical Developments in the 20th Century. — Elsevier, 2001. — Pp. 331–359.
[123] Ainsworth, M. A posteriori error estimation in finite element analysis / M. Ainsworth, J.T. Oden. — John Wiley & Sons, 2011. — Vol. 37.
[124] Han, J. Overcoming the curse of dimensionality: Solving high-dimensional partial differential equations using deep learning / J. Han, A. Jentzen, E. Weinan // arXiv preprint arXiv:1707.02568. — 2017.
[125] Hardy, M. Combinatorics of partial derivatives / M. Hardy // The Electronic Journal of Combinatorics. — 2006. — Vol. 13, no. 1. — P. 1.
[126] NVIDIA. CUDA Programming Guide. — 2010.
[127] Hochberg, R. Matrix Multiplication with CUDA – a basic introduction to the CUDA programming model / R. Hochberg. — 2012.
[128] CUDA. CUBLAS library / CUDA // NVIDIA Corporation, Santa Clara, California. — 2008. — Vol. 15, no. 27. — P. 31.
[129] Theano: Deep learning on GPUs with Python / J. Bergstra, O. Breuleux, P. Lamblin et al. — 2011.