Article Type: Full Research Paper

Authors

1 Ph.D. Student in Biomedical Engineering, Bioelectric Department, Biomedical Engineering Faculty, Amirkabir University of Technology, Tehran

2 Associate Professor, Bioelectric Department, Biomedical Engineering Faculty, Amirkabir University of Technology, Tehran

DOI: 10.22041/ijbme.2021.532847.1701

Abstract

Proposing new neuronal models for simulating cognitive phenomena in the brain has attracted researchers' attention in recent years. In this study, a new neuronal model is presented based on the chaotic behavior of the weights of artificial neural networks during learning with the error back-propagation algorithm. This model is the first discrete neuronal model with learning capability and is able to exhibit complex and chaotic behaviors. Its learning capability allows the model to simulate cognitive phenomena such as neuronal synchronization under near-realistic conditions. The model, which is derived from a three-layer feed-forward neural network, has several coexisting attractors, making learning possible within different basins of attraction. Examination of the model parameters shows that bifurcation occurs not only when the learning-rate parameter is changed; external stimulation can also act as a control parameter, changing the model's behavior and causing bifurcation. The model can therefore be used in designing and modeling therapeutic approaches for cognitive disorders.

Article Title [English]

A Novel Neuronal Model based on Chaotic Behavior of Artificial Neural Networks

Authors [English]

  • Hossein Banki-Koshki 1
  • Seyyed Ali Seyyedsalehi 2

1 Ph.D. Student, Bioelectric Department, Biomedical Engineering Faculty, Amirkabir University of Technology, Tehran, Iran

2 Associate Professor, Bioelectric Department, Biomedical Engineering Faculty, Amirkabir University of Technology, Tehran, Iran

Abstract [English]

The presentation of new neuronal models to simulate cognitive phenomena in the brain has attracted research interest in recent years. In this study, a new neuronal model based on the chaotic behavior of the weights of artificial neural networks during training with the back-propagation algorithm is presented. This model is the first discrete neuronal model with learning ability, and it can exhibit complex and chaotic behaviors. The learning ability of this model has enabled it to simulate cognitive phenomena such as neuronal synchronization in near-realistic conditions. The model, which is derived from a simple three-layer feed-forward neural network, has several coexisting attractors that make learning possible in various basins of attraction. A study of the model parameters shows that bifurcation occurs not only when the learning rate is changed; external stimulation can also act as a control parameter, changing the model's behavior and its bifurcation pattern. This property can be used in modeling and designing new therapies for cognitive disorders.
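The abstract describes treating the weight dynamics of a back-propagation-trained feed-forward network as a discrete dynamical system, with the learning rate and external stimulation acting as control parameters for bifurcation. The sketch below is only a minimal illustration of that general idea, not the authors' model: it iterates full-batch back-propagation on an assumed 2-2-1 network trained on XOR, tracks one weight as the observed state variable, and sweeps the learning rate and a constant external drive. The network size, the XOR task, the `ext` term, and all numeric values are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact model): full-batch back-propagation on a
# tiny 2-2-1 feed-forward network is iterated as a discrete map on the weights,
# and the learning rate (and a constant external drive) is swept as a
# bifurcation-style control parameter.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR task: a classic setting in which backprop weight dynamics can fail to
# settle and become oscillatory or irregular at large learning rates.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def iterate_weights(eta, ext=0.0, n_steps=3000, seed=0):
    """Iterate the full-batch backprop update w_{t+1} = F(w_t; eta, ext) and
    return the trajectory of one hidden-layer weight as the observed state."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-1.0, 1.0, size=(2, 2))   # input -> hidden weights
    b1 = np.zeros(2)
    W2 = rng.uniform(-1.0, 1.0, size=(2, 1))   # hidden -> output weights
    b2 = np.zeros(1)
    trace = np.empty(n_steps)
    for t in range(n_steps):
        # forward pass; 'ext' plays the role of a constant external stimulation (assumption)
        H = sigmoid(X @ W1 + b1 + ext)
        O = sigmoid(H @ W2 + b2)
        # backward pass for the mean-squared-error loss
        dO = (O - Y) * O * (1.0 - O)
        dH = (dO @ W2.T) * H * (1.0 - H)
        # full-batch gradient-descent step: the discrete map acting on the weights
        W2 -= eta * (H.T @ dO)
        b2 -= eta * dO.sum(axis=0)
        W1 -= eta * (X.T @ dH)
        b1 -= eta * dH.sum(axis=0)
        trace[t] = W1[0, 0]                    # track one weight over time
    return trace

# Sweep the learning rate: small eta tends toward a fixed point, while larger eta
# can give sustained oscillatory or irregular weight trajectories.
for eta in (0.5, 2.0, 8.0, 12.0):
    tail = iterate_weights(eta)[-4:]
    print(f"eta={eta:5.1f}  ext=0.0  last weight values: {np.round(tail, 4)}")

# The abstract's second control parameter: hold eta fixed and vary the external drive.
for ext in (0.0, 1.0, 2.0):
    tail = iterate_weights(8.0, ext=ext)[-4:]
    print(f"eta=  8.0  ext={ext:3.1f}  last weight values: {np.round(tail, 4)}")
```

Plotting the tail of each trajectory against the swept parameter, instead of printing it, would give the usual bifurcation-diagram view of the same experiment.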

Keywords [English]

  • Discrete Neuronal Model
  • Artificial Neural Network
  • Cognition
  • Chaos
  • Learning
  • Synchronization