
Concept: philosophy, religion, culture


On the Role of the Ethical Theory in the Structure of Artificial Moral Agents in the Cultural Field of the Information Society

https://doi.org/10.24833/2541-8831-2024-2-30-8-21

Abstract

This study addresses the ethical and philosophical aspects of creating artificial intelligent systems and artificial moral agents. Its relevance is justified by the need to comprehend the formation of digital ethics, which occupies an increasingly dominant position in the space of modern culture; at the same time, the ambiguous nature of digital ethics and the inchoate state of its subject of analysis are shown. Ethical characteristics are part of the general cultural process of embedding intelligent systems into the human world and of reflecting on this process. The aim of the research is to analyze the place of ethical theory in the structure of artificial moral agents. To this end, the following tasks are carried out. First, various strategies of ethical regulation are considered from the point of view of their formalization for use in intelligent systems. Special attention is paid to the negative consequences of creating artificial moral agents, and the arguments against their appearance are analyzed. Among the latter are both well-known ones (the problem of malicious use and the existential anxieties of humankind as a species) and ones more specific to philosophy and ethics (such as the manipulation of behavior through the emulation of emotions, and the problem of remote access and use). Second, issues related to the ethics of intelligent systems are raised and the controversies surrounding their implementation are presented. Third, deontology and utilitarianism are analyzed as theories suitable for formalization and use in the structure and architecture of artificial moral agents. The methodology of ethical and humanitarian expertise and case analysis is used to carry out these steps. The main material for the research consists of theoretical models of artificial moral agents and of the embedding of ethical theories such as deontology and utilitarianism into them.
Further, based on a case study of a social robot, the differences between deontology and utilitarianism are examined in terms of how the case is resolved. The study concludes that utilitarianism, understood as moral arithmetic, is better suited to formalization and to use in the architecture of artificial moral agents, since each action and its consequences can be represented by a quantitative parameter. Deontology, however, allows the construction of a theory of permitted and prohibited actions that better reflects the actual process of performing an act. The main difficulty for the formalization of deontology is the category of permissible actions: it is hard to model as a separate case, since a permissible action is neither forbidden nor obligatory. On this basis, it is concluded that it is not enough simply to formalize an ethical theory; artificial agents must be enabled to construct an ethical model on their own.
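The contrast the abstract draws can be sketched in code. The following toy example is not from the article: the action names, duty set, and utility values are hypothetical, chosen only to show why utilitarian "moral arithmetic" formalizes naturally as maximization over a quantitative parameter, while a deontological rule check partitions actions into forbidden and permitted and leaves no separate slot for the merely permissible as opposed to the obligatory.

```python
# Illustrative sketch (hypothetical names and values): a utilitarian
# "moral arithmetic" versus a deontological rule check for a toy
# social-robot case.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    utility: float  # summed benefit/harm of the action's consequences
    violates: set = field(default_factory=set)  # duties the action breaks

# Hypothetical duty set for the robot.
DUTIES = {"do_not_deceive", "respect_privacy"}

def utilitarian_choice(actions):
    # Moral arithmetic: choose the action with the highest total utility.
    return max(actions, key=lambda a: a.utility)

def deontological_choice(actions):
    # Permitted = not forbidden. Note there is no separate category for
    # "merely permissible" as opposed to "obligatory" -- the gap the
    # abstract identifies as the main difficulty of formalization.
    permitted = [a for a in actions if not (a.violates & DUTIES)]
    if not permitted:
        raise ValueError("moral dilemma: every action violates a duty")
    return max(permitted, key=lambda a: a.utility)  # tie-break among permitted

case = [
    Action("white_lie_to_comfort_user", utility=8.0,
           violates={"do_not_deceive"}),
    Action("tell_hard_truth", utility=5.0),
]

print(utilitarian_choice(case).name)    # the lie maximizes utility
print(deontological_choice(case).name)  # deception is forbidden outright
```

On these numbers the two theories diverge: the utilitarian agent selects the comforting lie, while the deontological agent rules it out before any comparison of outcomes takes place.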

About the Author

A. V. Antipov
Institute of Philosophy, Russian Academy of Sciences
Russian Federation

Aleksei V. Antipov — PhD (Philosophy), Researcher, Department of Humanitarian Expertise and Bioethics

12/1 Goncharnaya Str., Moscow, 109240, Russia



References

1. Allen, C., Smit, I. and Wallach, W. (2005) ‘Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches’, Ethics and Information Technology, 7(3), pp. 149–155. https://doi.org/10.1007/s10676-006-0004-4

2. Anderson, S. L. (2011) ‘Philosophical Concerns with Machine Ethics’, in Machine Ethics. Cambridge: Cambridge University Press, pp. 162–167. https://doi.org/10.1017/CBO9780511978036.014

3. Antipov, A. V. (2023a) ‘Avtonomiya iskusstvennykh moral’nykh agentov [Autonomy of artificial moral agents]’, in Chelovek, intellekt, poznaniye [Man, intelligence, cognition]. Novosibirsk: Novosibirskiy issledovatel’skiy natsional’nyy gosudarstvennyy universitet Publ., pp. 235–237. (In Russian).

4. Antipov, A. V. (2023b) ‘Artificial moral agents: an analysis of the argument against them’, in Digital technologies and law. Kazan: Izdatel’stvo ‘ZnaniyePoznaniye’ Publ., pp. 15–20. (In Russian). https://doi.org/10.21202/978-5-8399-0819-2_476

5. Cervantes, J.-A. et al. (2020) ‘Artificial Moral Agents: A Survey of the Current Status’, Science and Engineering Ethics, 26(2), pp. 501–532. https://doi.org/10.1007/s11948-019-00151-x

6. Chakraborty, A. and Bhuyan, N. (2024) ‘Can artificial intelligence be a Kantian moral agent? On moral autonomy of AI system’, AI and Ethics, 4(2), pp. 325–331. https://doi.org/10.1007/s43681-023-00269-6

7. Cristani, M. and Burato, E. (2009) ‘Approximate solutions of moral dilemmas in multiple agent system’, Knowledge and Information Systems, 18(2), pp. 157–181. https://doi.org/10.1007/s10115-008-0172-0

8. Formosa, P. and Ryan, M. (2021) ‘Making moral machines: why we need artificial moral agents’, AI & SOCIETY, 36(3), pp. 839–851. https://doi.org/10.1007/s00146-020-01089-6

9. Franck, G. (2019) ‘The economy of attention’, Journal of Sociology, 55(1), pp. 8–19. https://doi.org/10.1177/1440783318811778

10. Hanna, R. and Kazim, E. (2021) ‘Philosophical foundations for digital ethics and AI Ethics: a dignitarian approach’, AI and Ethics, 1(4), pp. 405–423. https://doi.org/10.1007/s43681-021-00040-9

11. Luke, A. (2018) ‘Digital Ethics Now’, Language and Literacy, 20(3), pp. 185–198. https://doi.org/10.20360/langandlit29416

12. Moor, J. H. (2006) ‘The Nature, Importance, and Difficulty of Machine Ethics’, IEEE Intelligent Systems, 21(4), pp. 18–21. https://doi.org/10.1109/MIS.2006.80

13. Pereira, L. M. and Lopes, A. B. (2020) ‘Artificial Intelligence, Machine Autonomy and Emerging Needs’, in Machine Ethics. Studies in Applied Philosophy, Epistemology and Rational Ethics. Cham: Springer, pp. 19–24. https://doi.org/10.1007/978-3-030-39630-5_2

14. Powers, T. M. (2006) ‘Prospects for a Kantian Machine’, IEEE Intelligent Systems, 21(4), pp. 46–51. https://doi.org/10.1109/MIS.2006.77

15. Strasser, A. (2022) ‘Distributed responsibility in human–machine interactions’, AI and Ethics, 2(3), pp. 523–532. https://doi.org/10.1007/s43681-021-00109-5

16. Turing, A. M. (1950) ‘Computing machinery and intelligence’, Mind, LIX(236), pp. 433–460. https://doi.org/10.1093/mind/LIX.236.433

17. Ulanova, A. E. (2020) ‘The image of the opponent of technological innovation in Galley Slave by A. Asimov: modern interpretation’, Concept: philosophy, religion, culture, 4(2), pp. 135–143. (In Russian). https://doi.org/10.24833/2541-8831-2020-2-14-135-143

18. Voigt, P. and von dem Bussche, A. (2017) The EU General Data Protection Regulation (GDPR). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-57959-7

19. Whiting, R. and Pritchard, K. (2018) ‘Digital ethics’, in The SAGE Handbook of Qualitative Business and Management Research Methods: History and Traditions. London: SAGE Publications Ltd, pp. 562–577. https://doi.org/10.4135/9781526430212

20. Zuboff, S. (2019) ‘Surveillance Capitalism and the Challenge of Collective Action’, New Labor Forum, 28(1), pp. 10–29. https://doi.org/10.1177/1095796018819461



For citations:


Antipov A.V. On the Role of the Ethical Theory in the Structure of Artificial Moral Agents in the Cultural Field of the Information Society. Concept: philosophy, religion, culture. 2024;8(2):8-21. (In Russ.) https://doi.org/10.24833/2541-8831-2024-2-30-8-21



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2541-8831 (Print)
ISSN 2619-0540 (Online)