Transparency of Artificial Intelligence Algorithms
Abstract
In the modern era of active development of artificial intelligence (AI), lawyers face the question of how to solve the “black box” problem: the incomprehensibility and unpredictability of decisions made by artificial intelligence. Developing rules that ensure the transparency and explainability of AI algorithms allows artificial intelligence to be integrated into classical legal relations, eliminating the threat to the institution of legal liability. In private law, protecting consumers vis-à-vis large online platforms brings algorithmic transparency to the forefront, transforming the very obligation to provide information to the consumer, now described by the formula: know + understand. Similarly, in public law, states are unable to properly protect citizens from harm caused by dependence on algorithmic applications in the provision of public services; this harm can be countered only by knowledge and understanding of how algorithms function. Fundamentally new regulation is required to bring the use of artificial intelligence into a legal framework, and that framework should formulate requirements for the transparency of algorithms. Researchers are actively discussing the creation of a regulatory framework establishing a system of observation, monitoring and prior authorization for the use of AI technologies. The paper analyzes “algorithmic accountability policies,” the “Transparency by Design” framework (which addresses transparency throughout the entire AI development process), and the implementation of explainable AI systems. Overall, the proposed approaches to AI regulation and transparency are quite similar, as are the predictions about the mitigating role of algorithmic transparency in matters of trust in AI. The paper also analyzes the concept of “algorithmic sovereignty,” understood as the ability of a democratic state to govern the development, deployment and impact of AI systems in accordance with its own legal, cultural and ethical norms.
This model is designed for the harmonious coexistence of different states, which in turn fosters an equally harmonious coexistence between humanity and AI. At the same time, ensuring the transparency of AI algorithms is one direction of general AI governance policy, the most important part of which is AI ethics. Despite its apparent universality, AI ethics does not always take into account the diversity of ethical constructs in different parts of the world, as the African example and fears of algorithmic colonization demonstrate.
References
Adams R. (2021) Can artificial intelligence be decolonized? Interdisciplinary Science Reviews, vol. 46, pp. 176-197.
Badawy W. (2025) Algorithmic sovereignty and democratic resilience: rethinking AI governance in the age of generative AI. AI and Ethics, vol. 5, pp. 4402-4410.
Batool A. et al. (2025) AI governance: a systematic literature review. AI and Ethics, vol. 5, pp. 3265-3279.
Birhane A. (2023) Algorithmic colonization of Africa. In: S. Cave, K. Dihal (eds.). Imagining AI: How the world sees intelligent machines. Oxford: Oxford University Press, pp. 247-260.
Buriaga V.O., Djuzhoma V.V., Artemenko E.A. (2025) Shaping an artificial intelligence regulatory model: international and domestic experience. Legal Issues in the Digital Age, vol. 6, no. 2, pp. 50-68.
Goldstein S. (2025) Will AI and humanity go to war? AI & Society.
Han S. (2025) The question of AI and democracy: four categories of AI governance. Philosophy & Technology, vol. 38, pp. 1-26.
Kabytov P.P., Nazarov N.A. (2025) Transparency in public administration in the digital age: legal and institutional mechanisms. Legal Issues in the Digital Age, vol. 6, no. 2, pp. 161-182.
Mohamed S. et al. (2020) Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, vol. 33, pp. 659-684.
Nihei M. (2022) Epistemic injustice as a philosophical conception for considering fairness and diversity in human-centered AI principles. Interdisciplinary Information Sciences, vol. 28, pp. 25-43.
O’Neil C. (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. New York: Crown Publishers, 272 p.
Park K., Yoon Ho Y. (2025) AI algorithm transparency, pipelines for trust not prisms: mitigating general negative attitudes and enhancing trust toward AI. Humanities and Social Sciences Communications, no. 12.
Rachovitsa A., Johann N. (2022) The human rights implications of the use of AI in the digital welfare state: lessons learned from the Dutch SyRI case. Human Rights Law Review, no. 22, pp. 1-15.
Saleem M. et al. (2025) Responsible AI in fintech: addressing challenges and strategic solutions. In: S. Dutta et al. (eds.). Generative AI in FinTech: Revolutionizing Finance through Intelligent Algorithms. Cham: Springer, pp. 61-72.
Spina Alì G., Yu R. (2021) Artificial Intelligence between transparency and secrecy: from the EC Whitepaper to the AIA and beyond. European Journal of Law and Technology, vol. 12, no. 3, pp. 1-25.
Sposini L. (2024) The governance of algorithms: profiling and personalization of online content in the context of European Consumer Law. Nordic Journal of European Law, vol. 7, no. 1, pp. 1-22.
Thaler R. (2018) New behavioral economics. Why people break the rules of traditional economics and how to make money on it. Moscow: Eksmo, 367 p. (in Russ.)
Visave J. (2025) Transparency in AI for emergency management: building trust and accountability. AI and Ethics, vol. 5, pp. 3967-3980.
Yilma K. (2025) Ethics of AI in Africa: interrogating the role of Ubuntu and AI governance initiatives. Ethics and Information Technology, vol. 27, pp. 1-14.
Copyright (c) 2025 Talapina E.V.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.