A.R. Nurutdinov a,∗, R.Kh. Latypov b,∗∗
a Tattelecom Company, Kazan, 420061 Russia
b Kazan Federal University, Kazan, 420008 Russia
E-mail: ∗ayrat.nurutdinov@gmail.com, ∗∗roustam.latypov@kpfu.ru
Received May 16, 2022
REVIEW ARTICLE
DOI: 10.26907/2541-7746.2022.2-3.244-265
For citation: Nurutdinov A.R., Latypov R.Kh. Potentials of the bio-inspired approach in the development of artificial intelligence systems (trends review). Uchenye Zapiski Kazanskogo Universiteta. Seriya Fiziko-Matematicheskie Nauki, 2022, vol. 164, no. 2–3, pp. 244–265. doi: 10.26907/2541-7746.2022.2-3.244-265. (In Russian)
Abstract
Artificial intelligence (AI) efficiently builds predictive models in engineering, politics, economics, and science, and provides optimal strategies for solving a wide range of problems. However, modern AI systems often fall far short of the accuracy that was expected of them a few decades ago. As a result, a number of problems associated with the widespread use of AI limit the positive impact of the solutions it delivers. This article examines the difficulties and limitations that have arisen to date in the use of AI systems, as well as possible ways to overcome them.
Keywords: artificial intelligence, machine learning, bio-inspired approach, cerebellum model
References
- McCarthy J., Minsky M.L., Rochester N., Shannon C.E. A proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Mag., 2006, vol. 27, no. 4, pp. 12–14. doi: 10.1609/aimag.v27i4.1904.
- van Lent M., Fisher W., Mancuso M. An explainable artificial intelligence system for small-unit tactical behavior. Proc. 16th Conf. on Innovative Applications of Artificial Intelligence. AAAI Press, 2004, pp. 900–907.
- Dellermann D., Calma A., Lipusch N., Weber Th., Weigel S., Ebel P. The future of human-AI collaboration: A taxonomy of design knowledge for hybrid intelligence systems. Proc. 52nd Hawaii Int. Conf. on System Sciences, 2019, pp. 274–283.
- Kirsh D. Foundations of AI: The big issues. Artif. Intell., 1991, vol. 47, nos. 1–3, pp. 3–30. doi: 10.1016/0004-3702(91)90048-O.
- Monett D., Lewis C.W.P. Getting clarity by defining artificial intelligence – a survey. In: Müller V. (Ed.) Philosophy and Theory of Artificial Intelligence 2017. PT-AI 2017. Studies in Applied Philosophy, Epistemology and Rational Ethics, 2018, vol. 44, pp. 212–214. doi: 10.1007/978-3-319-96448-5_21.
- Turing A.M. Computing machinery and intelligence. Mind. New Ser., 1950, vol. 59, no. 236, pp. 433–460.
- Hayes P., Ford K. Turing test considered harmful. IJCAI’95: Proc. 14th Int. Joint Conf. on Artificial Intelligence, 1995, vol. 1, pp. 972–977.
- Marcus G., Rossi F., Veloso M. Beyond the Turing test. AI Mag., 2016, vol. 37, no. 1, pp. 3–4. doi: 10.1609/aimag.v37i1.2650.
- WIPO Technology Trends 2019 – Artificial Intelligence. Geneva, Switzerland, WIPO, 2019. 154 p. Available at: https://www.wipo.int/edocs/pubdocs/en/wipo_pub_1055.pdf.
- Batarseh F.A., Freeman L., Huang Ch.-H. A survey on artificial intelligence assurance. J. Big Data, 2021, vol. 8, art. 60, pp. 1–30. doi: 10.1186/s40537-021-00445-7.
- Blagec K., Barbosa-Silva A., Ott S., Samwald M. A curated, ontology-based, large-scale knowledge graph of artificial intelligence tasks and benchmarks. Sci. Data, 2022, vol. 9, no. 1, art. 322, pp. 1–10. doi: 10.1038/s41597-022-01435-x.
- Pearson K. LIII. On lines and planes of closest fit to systems of points in space. London, Edinburgh, Dublin Philos. Mag. J. Sci. Ser. 6., 1901, vol. 2, no. 11, pp. 559–572. doi: 10.1080/14786440109462720.
- LeCun Y., Bengio Y., Hinton G. Deep learning. Nature, 2015, vol. 521, no. 7553, pp. 436–444. doi: 10.1038/nature14539.
- Russell S.J., Norvig P. Artificial Intelligence: A Modern Approach. Prentice Hall, 2010. xviii, 1132 p.
- Shannon C. XXII. Programming a computer for playing chess. Philos. Mag., Ser. 7, 1950, vol. 41, no. 314, pp. 1–18.
- Brown T.B., Mann B., Ryder N., Subbiah M., Kaplan J., Dhariwal P., Neelakantan A., Shyam P., Sastry G., Askell A., Agarwal S., Herbert-Voss A., Krueger G., Henighan T., Child R., Ramesh A., Ziegler D.M., Wu J., Winter C., Hesse Ch., Chen M., Sigler E., Litwin M., Gray S., Chess B., Clark J., Berner Ch., McCandlish S., Radford A., Sutskever I., Amodei D. Language models are few-shot learners. arXiv:2005.14165v4, 2020. doi: 10.48550/arXiv.2005.14165.
- Littman M.L., Ajunwa I., Berger G., Boutilier C., Currie M., Doshi-Velez F., Hadfield G., Horowitz M.C., Isbell Ch., Kitano H., Levy K., Lyons T., Mitchell M., Shah J., Sloman St., Vallor Sh., Walsh T. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report. Stanford, CA, Stanford Univ., 2021. 82 p. Available at: https://ai100.stanford.edu/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence-ai100-2021-study/.
- Sarker I.H. Machine learning: Algorithms, real-world applications and research directions. SN Comput. Sci., 2021, vol. 2, art. 160, pp. 1–21. doi: 10.1007/s42979-021-00592-x.
- Pearl J., Mackenzie D. The Book of Why: The New Science of Cause and Effect. New York, Basic Books, 2018. 432 p.
- Nilsson N. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge Univ. Press, 2010. 562 p. doi: 10.1017/CBO9780511819346.
- Levesque H.J., Davis E., Morgenstern L. The Winograd schema challenge. Proc. 13th Int. Conf. on the Principles of Knowledge Representation and Reasoning. Inst. Electr. Electron. Eng. Inc., 2012, pp. 552–561.
- LeCun Y. Kak uchitsya mashina: Revolyutsiya v oblasti neironnykh setei i glubokogo obucheniya [How the Machine Learns: The Revolution in Neural Networks and Deep Learning]. Moscow, Al’pina PRO, 2021. 335 p. (In Russian)
- Barricelli N. Esempi numerici di processi di evoluzione [Numerical examples of evolutionary processes]. Methodos, 1954, vol. 6, pp. 45–68. (In Italian)
- Rajkumar R., Ganapathy V. Bio-inspiring learning style Chatbot inventory using brain computing interface to increase the efficiency of E-learning. IEEE Access, 2020, vol. 8, pp. 67377–67395. doi: 10.1109/ACCESS.2020.2984591.
- Ashby W.R. An Introduction to Cybernetics. London, Chapman & Hall, 1956. ix, 295 p.
- Boisot M., McKelvey B. Complexity and organization-environment relations: Revisiting Ashby’s law of requisite variety. In: Allen P., Maguire St., McKelvey B. (Eds.) The Sage Handbook of Complexity and Management. London, Sage Publ., 2011, pp. 279–298. doi: 10.4135/9781446201084.
- Schelling T.C. Dynamic models of segregation. J. Math. Sociol., 1971, vol. 1, no. 2, pp. 143–186. doi: 10.1080/0022250X.1971.9989794.
- Rosenblatt F. Printsipy neirodinamiki. Pertseptrony i teoriya mekhanizmov mozga [Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms]. Moscow, Mir, 1965. 480 p. (In Russian)
- Middleton F.A., Strick P.L. The cerebellum: An overview. Trends Neurosci., 1998, vol. 21, no. 9, pp. 367–369. doi: 10.1016/s0166-2236(98)01330-7.
- Herculano-Houzel S. Coordinated scaling of cortical and cerebellar numbers of neurons. Front. Neuroanat., 2010, vol. 4, art. 12, pp. 1–8. doi: 10.3389/fnana.2010.00012.
- Herculano-Houzel S., Avelino-de-Souza K., Neves K., Porfírio J., Messeder D., Mattos Feijó L., Maldonado J., Manger P.R. The elephant brain in numbers. Front. Neuroanat., 2014, vol. 8, art. 46, pp. 1–8. doi: 10.3389/fnana.2014.00046.
- Kawato M. Internal models for motor control and trajectory planning. Curr. Opin. Neurobiol., 1999, vol. 9, no. 6, pp. 718–727. doi: 10.1016/s0959-4388(99)00028-8.
- Broucke M.E. Adaptive internal models in neuroscience. Found. Trends® Syst. Control, 2022, vol. 9, no. 4, pp. 365–550. doi: 10.1561/2600000027.
- Welniarz Q., Worbe Y., Gallea C. The forward model: A unifying theory for the role of the cerebellum in motor control and sense of agency. Front. Syst. Neurosci., 2021, vol. 15, art. 644059, pp. 1–14. doi: 10.3389/fnsys.2021.644059.
- Wolpert D.M., Miall R.C. Forward models for physiological motor control. Neural Networks, 1996, vol. 9, no. 8, pp. 1265–1279. doi: 10.1016/s0893-6080(96)00035-4.
- Green A.M., Hirata Y., Galiana H.L., Highstein S.M. Localizing sites for plasticity in the vestibular system. In: Highstein S.M., Fay R.R., Popper A.N. (Eds.) The Vestibular System. Springer Handbook of Auditory Research. Vol. 19. New York, Springer, 2004, pp. 423–495. doi: 10.1007/0-387-21567-0_10.
- Kawato M., Gomi H. A computational model of four regions of the cerebellum based on feedback-error learning. Biol. Cybern., 1992, vol. 68, pp. 95–103. doi: 10.1007/BF00201431.
- Wolpert D.M., Ghahramani Z., Jordan M.I. An internal model for sensorimotor integration. Science, 1995, vol. 269, no. 5232, pp. 1880–1882. doi: 10.1126/science.7569931.
- Albus J. A new approach to manipulator control: The cerebellar model articulation controller (CMAC). J. Dyn. Syst., Meas., Control, 1975, vol. 97, no. 3, pp. 220–227. doi: 10.1115/1.3426922.
- Rosenblatt F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, DC, Spartan Books, 1962. 616 p.
- Albus J.S. A theory of cerebellar function. Math. Biosci., 1971, vol. 10, nos. 1–2, pp. 25–61. doi: 10.1016/0025-5564(71)90051-4.
- Minsky M., Papert S. Perseptrony [Perceptrons]. Moscow, Mir, 1971. 264 p. (In Russian)
- Gonzalez-Serrano F.J., Figueiras-Vidal A.R., Artes-Rodriguez A. Generalizing CMAC architecture and training. IEEE Trans. Neural Networks, 1998, vol. 9, no. 6, pp. 1509–1514. doi: 10.1109/72.728400.
- Tsao Y., Chu H.-C., Fang S.-H., Lee J., Lin C.-M. Adaptive noise cancellation using deep cerebellar model articulation controller. IEEE Access, 2018, vol. 6, pp. 37395–37402. doi: 10.1109/ACCESS.2018.2827699.
- Huynh T.-T., Lin Ch.-M., Le T.-L., Cho H.-Y., Pham Th.-Th.T., Le N.-Q.-K., Chao F. A new self-organizing fuzzy cerebellar model articulation controller for uncertain nonlinear systems using overlapped Gaussian membership functions. IEEE Trans. Ind. Electron., 2020, vol. 67, no. 11, pp. 9671–9682. doi: 10.1109/TIE.2019.2952790.
- Fan R., Li Y. An adaptive fuzzy trajectory tracking control via improved cerebellar model articulation controller for electro-hydraulic shovel. IEEE/ASME Trans. Mechatron., 2021, vol. 26, no. 6, pp. 2870–2880. doi: 10.1109/TMECH.2021.3094284.
- Ji D., Shin D., Park J. An error compensation technique for low-voltage DNN accelerators. IEEE Trans. Very Large Scale Integr. (VLSI) Syst., 2021, vol. 29, no. 2, pp. 397–408. doi: 10.1109/TVLSI.2020.3041517.
- Agrawal K. To study the phenomenon of the Moravec’s paradox. arXiv:1012.3148, 2010. doi: 10.48550/arXiv.1012.3148.
- Moravec H. Mind Children: The Future of Robot and Human Intelligence. Cambridge, Mass., Harvard Univ. Press, 1988. 214 p.
- Pinker S. The Language Instinct: How the Mind Creates Language. New York, William Morrow, 1994. 494 p.
The content is available under the Creative Commons Attribution 4.0 License.