The Moral Turing Test within the Frameworks for Normalizing Attitudes towards AI in Socially Significant Future Technologies
https://doi.org/10.24833/2541-8831-2025-4-36-8-24
Abstract
The proliferation of new technologies raises the problem of applying and adapting the Turing Test to evaluate the moral decisions made by artificial intelligence (AI) systems in the context of bioethics. For the philosophy of culture, the problem matters because the prospects for the harmonious coexistence of humans and artificial systems must be analyzed in light of the dominant cultural normative systems, morality among them. The aim of this research is to refine approaches to solving the ethical problems associated with AI against the backdrop of its integration into the latest social technologies. The research objectives were as follows: 1) to identify and describe the problems associated with the spread of AI in the social sphere; 2) to clarify the specifics of the ethical questions arising from the implementation of AI in this area; 3) to systematize knowledge about existing deontological frameworks that aim to address the problem of the social normalization of AI use. The research materials comprise information on recent developments in social engineering, namely technologies that apply AI to social tasks (in medicine and elderly care), as well as scholarly literature on the use of AI in the social engineering of the future. The study rests on a culture-oriented approach; its methods are case analysis and SWOT analysis. Drawing on the scholarly literature, several modifications of the Moral Turing Test are presented: the Comparative Moral Turing Test (cMTT), the Ethical Competence Test, the Ethical Machine Safety Test, and the Turing Triage Test. The research shows that the Moral Turing Test is a functional tool for demonstrating the ethical safety of artificial systems but cannot serve as proof that they possess moral agency in the human sense, a point of particular relevance for the sensitive sphere of bioethics. The study reaches three conclusions. First, the development of the aforementioned modifications exposes the methodological difficulties and fundamental limitations of these approaches: the problem of imitation, the 'absence of understanding' in AI, the risk of software errors, and the fundamental difference between thinking and the capacity to be a moral agent. Second, the practical significance of developing criteria for the ethical verification of AI is demonstrated, and the specific bioethical problems arising from its use are clarified (responsibility, patient autonomy, stigmatization, and equality of access). Third, philosophical approaches to the question of whether 'genuinely' moral AI can be created are systematized, and objections to this thesis are highlighted, drawing on biological naturalism (J. Searle), phenomenology (H. Dreyfus), and the concept of the erosion of human moral skills.
About the Author
A. V. Antipov (Russian Federation)
Aleksei V. Antipov — PhD in Philosophy, Senior Research Fellow, Department of Humanitarian Expertise and Bioethics
12/1 Goncharnaya Str., Moscow, Russia, 109240
References
1. Aharoni, E. et al. (2024) ‘Attributions toward artificial agents in a modified Moral Turing Test’, Scientific Reports, 14(1), 8458. https://doi.org/10.1038/s41598-024-58087-7
2. Alekseev, A. Yu. (2013) Kompleksnyj test T’yuringa: filosofsko-metodologicheskie i sociokul’turnye aspekty [Comprehensive Turing Test: philosophical, methodological and socio-cultural aspects]. Moscow: IInteLL Publ. (In Russian).
3. Allen, C., Varner, G. and Zinser, J. (2000) ‘Prolegomena to any future artificial moral agent’, Journal of Experimental & Theoretical Artificial Intelligence, 12(3), pp. 251–261. https://doi.org/10.1080/09528130050111428
4. Arnold, T. and Scheutz, M. (2016) ‘Against the moral Turing test: accountable design and the moral reasoning of autonomous systems’, Ethics and Information Technology, 18(2), pp. 103–115. https://doi.org/10.1007/s10676-016-9389-x
5. Astakhov, S. (2020) ‘Phenomenology vs Symbolic AI: Hubert Dreyfus’s Philosophy of Skill Acquisition’, Philosophical Literary Journal Logos, 30(2), pp. 157–193. (In Russian). https://doi.org/10.22394/0869-5377-2020-2-157-190
6. Bohn, E. D. (2024) ‘The Moral Turing Test: a defense’, Philosophy & Technology, 37(3), 111. https://doi.org/10.1007/s13347-024-00793-1
7. Bohn, E. D. (2025) ‘In Defense of the Moral Turing Test: A Reply’, Philosophy & Technology, 38(2), 40. https://doi.org/10.1007/s13347-025-00869-6
8. Broadbent, E. et al. (2024) ‘ElliQ, an AI-Driven Social Robot to Alleviate Loneliness: Progress and Lessons Learned’, The Journal of Aging Research & Lifestyle, 13, pp. 22–28. https://doi.org/10.14283/jarlife.2024.2
9. Calo, C. et al. (2011) ‘Ethical Implications of Using the Paro Robot, with a Focus on Dementia Patient Care’, in Human-Robot Interaction in Elder Care. San Francisco: Association for the Advancement of Artificial Intelligence, pp. 20–24.
10. Dela Cruz, N. L. (2025) ‘Save the Digients! On the Moral Status of AI’, in The Philosophy of Ted Chiang. Cham: Springer Nature Switzerland, pp. 195–202. https://doi.org/10.1007/978-3-031-81662-8_21
11. Dreyfus, H. L. (1992) What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge: MIT Press.
12. Gerdes, A. and Øhrstrøm, P. (2015) ‘Issues in robot ethics seen through the lens of a moral Turing test’, Journal of Information, Communication and Ethics in Society, 13(2), pp. 98–109. https://doi.org/10.1108/JICES-09-2014-0038
13. Hazlitt, H. (1972) The foundations of morality. Los Angeles: Nash Publ. (Russ. ed.: (2019) Osnovaniya morali. Moscow: Mysl Publ.; Chelyabinsk: Sotsium Publ.).
14. Hung, L. et al. (2019) ‘The benefits of and barriers to using a social robot PARO in care settings: a scoping review’, BMC Geriatrics, 19(1), 232. https://doi.org/10.1186/s12877-019-1244-6
15. Kolomiytsev, S. Yu. (2015) ‘The Turing Test and Artificial Intelligence in the Early 21st Century’, The human being, (4), pp. 59–68. (In Russian).
16. Krzanowski, R. M. and Trombik, K. (2021) ‘Ethical Machine Safety Test’, in Transhumanism: The Proper Guide to a Posthuman Condition or a Dangerous Idea? Cognitive Technologies. Cham: Springer, pp. 141–154. https://doi.org/10.1007/978-3-030-56546-6_10
17. Martynenko, N. P. (2025) ‘Cognitive Mechanisms of Large Language Models: Interaction with GigaChat’, Concept: philosophy, religion, culture, 9(2), pp. 30–50. (In Russian). https://doi.org/10.24833/2541-8831-2025-2-34-30-50
18. Merleau-Ponty, M. (1945) Phénoménologie de la perception. Paris: Éditions Gallimard. (Russ. ed.: (1999) Fenomenologiya vospriyatiya. Saint Petersburg: Yuventa; Nauka Publ.).
19. Milgram, S. (1974) Obedience to Authority: An Experimental View. New York: Harper & Row. (Russ. ed.: (2023) Podchinenie avtoritetu: Nauchnyj vzglyad na vlast’ i moral’. Moscow: Alpina Non-fiction Publ.).
20. Moor, J. H. (2020) ‘The nature, importance, and difficulty of machine ethics’, in Machine Ethics and Robot Ethics. London: Routledge, pp. 233–236. https://doi.org/10.4324/9781003074991
21. Oleynikov, Yu. V. (2021) ‘Singularity of Post-Industrial Society’, Knowledge, understanding, skill, (2), pp. 85–95. (In Russian).
22. Proudfoot, D. (2024) ‘Turing’s Test vs the Moral Turing Test’, Philosophy & Technology, 37(4), 134. https://doi.org/10.1007/s13347-024-00825-w
23. Razin, A. V. (2017) ‘Morality and Mind: the Ideal and the Rational’, The human being, (2), pp. 33–46. (In Russian).
24. Rezaev, A. V. and Tregubova, N. D. (2019) ‘Artificial Intelligence, On-line Culture, Artificial Sociality: Definition of the Terms’, Monitoring of Public Opinion: Economic and Social Changes, (6), pp. 35–47. (In Russian). https://doi.org/10.14515/monitoring.2019.6.03
25. Shibata, T. and Wada, K. (2011) ‘Robot Therapy: A New Approach for Mental Healthcare of the Elderly — A Mini-Review’, Gerontology, 57(4), pp. 378–386. https://doi.org/10.1159/000319015
26. Singer, P. (1975) Animal Liberation: A New Ethics for our Treatment of Animals. New York: Random House. (Russ. ed.: (2009) Osvobozhdenie zhivotnyh. Moscow: Sindbad Publ.).
27. Singer, P. (2011) The expanding circle: Ethics, evolution, and moral progress. Princeton: Princeton University Press.
28. Sparrow, R. (2004) ‘The Turing Triage Test’, Ethics and Information Technology, 6(4), pp. 203–213. https://doi.org/10.1007/s10676-004-6491-2
29. Turing, A. (1950) ‘Computing machinery and intelligence’, Mind, 59(236), pp. 433–460. (Russ. ed.: (1960) Mogut li mashiny myslit’? Moscow: Fizmatgiz Publ.).
30. Turkle, S. (2015) Reclaiming conversation: The power of talk in a digital age. New York: Penguin Books.
31. Wallach, W. and Allen, C. (2014) ‘Hard problems: Framing the Chinese room in which a robot takes a moral Turing test’, in 38th Annual Convention of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB 2012). Part 12. New York: Curran Associates, pp. 1–6.
For citations:
Antipov A.V. The Moral Turing Test within the Frameworks for Normalizing Attitudes towards AI in Socially Significant Future Technologies. Concept: philosophy, religion, culture. 2025;9(4):8-24. (In Russ.) https://doi.org/10.24833/2541-8831-2025-4-36-8-24