Transparency of Public Administration in the Context of Automated Decision-Making
Abstract
In the context of the active introduction of automated decision-making systems and artificial intelligence systems into the activities of public authorities, the problem of maintaining an adequate level of transparency in public administration is becoming increasingly relevant. The issue is critical for upholding the principles of the rule of law and protecting the fundamental rights of citizens. The work aims to provide a comprehensive systematization and critical analysis of current approaches to this problem in Russian and foreign law, as well as in legal theory. The methodological basis of the research includes general research methods (analysis, synthesis, the systematic approach) and specific legal methods (comparative legal and formal legal). The article consistently examines the conceptual foundations and practical challenges of implementing transparency and explainability requirements for automated decision-making systems and artificial intelligence systems, including their role in increasing trust, maintaining accountability, preventing discrimination, and strengthening the legitimacy of public administration. The main attention is paid to a detailed and critical analysis of a wide range of transparency mechanisms (classified, in particular, by whether they address the system as a whole or a specific decision, and by the timing of information provision, ex ante or ex post): disclosure of the procedure or logic of decision-making, the "right to explanation", counterfactual explanations, disclosure of data and program code/models, audit and public control, information about the application of such systems, and the use of explainable/interpretable models and other technical solutions. For each mechanism, the advantages, disadvantages, and difficulties of practical implementation are identified, such as conflicts with intellectual property protection, the technical complexity of implementation and interpretation, and the fundamental "black box" problem of artificial intelligence systems. The conclusion substantiates the insufficiency of individual tools applied in isolation and the necessity of developing a flexible, risk-oriented, and context-dependent comprehensive approach.
References
Bathaee Y. (2017) The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology, no. 2, pp. 889–938.
Diver L., Schafer B. (2017) Opening the Black Box: Petri Nets and Privacy by Design. International Review of Law, Computers & Technology, no. 1, pp. 1–39.
Edwards L., Veale M. (2018) Enslaving the Algorithm: From a Right to an Explanation to a Right to Better Decisions? IEEE Security & Privacy, no. 3, pp. 46–54.
Felzmann H., Fosch-Villaronga E. et al. (2019) Transparency You Can Trust: Transparency Requirements for Artificial Intelligence between Legal Norms and Contextual Concerns. Big Data & Society, no. 1, pp. 1–14.
Ferrario A., Loi M. (2022) The Robustness of Counterfactual Explanations over Time. IEEE Access, vol. 10, pp. 82736–82750.
Goodman B., Flaxman S. (2017) European Union Regulations on Algorithmic Decision Making and a Right to Explanation. AI Magazine, no. 3, pp. 50–57.
Guidotti R., Monreale A. et al. (2019) A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, no. 5, pp. 1–42.
Guidotti R. (2022) Counterfactual Explanations and How to Find Them: Literature Review and Benchmarking. Data Mining and Knowledge Discovery, vol. 38, pp. 2770–2824.
Kuner C., Bygrave L.A., Docksey C. (2019) The EU General Data Protection Regulation (GDPR): A Commentary. Oxford: Oxford University Press, 1393 p.
Mittelstadt B. (2016) Auditing for Transparency in Content Personalization Systems. International Journal of Communication, vol. 10, pp. 4991–5002.
Pilipenko A.N. (2019) France: towards Digital Democracy. Pravo. Journal Vysshey shkoly ekonomiki=Law. Journal of the Higher School of Economics, vol. 12, no. 4, pp. 185–207. (in Russ.)
Santosuosso A., Pinotti G. (2020) Bottleneck or Crossroad? Problems of Legal Sources Annotation and some Theoretical Thoughts. Stats, vol. 3, no. 3, pp. 376–395.
Selbst A.D., Powles J. (2017) Meaningful Information and the Right to Explanation. International Data Privacy Law, vol. 7, no. 4, pp. 233–242.
Talapina E.V. (2024) Principle of Transparency in the use of Artificial Intelligence. Gosudarstvennaya vlast i mestnoe samoupravlenie=State Power and Local Self-Government, no. 7, pp. 36–39. (in Russ.)
Troisi E. (2022) Automated Decision Making and Right to Explanation. The Right of Access as ex post Information. European Journal of Privacy Law & Technologies, no. 1, pp. 182–202.
Tutt A. (2017) An FDA for Algorithms. Administrative Law Review, no. 1, pp. 83–123.
Vengerov A.B. (1979) Legal Bases of Management Automation in the National Economy of the USSR. Moscow: Vysshaya shkola, 245 p. (in Russ.)
Wachter S., Mittelstadt B., Floridi L. (2017) Why a Right to Explanation of Automated Decision-Making does not Exist in the General Data Protection Regulation. International Data Privacy Law, vol. 7, no. 2, pp. 76–99.
Wulf A.J., Seizov O. (2024) Please Understand We Cannot Provide Further Information: Evaluating Content and Transparency of GDPR-Mandated AI Disclosures. AI & SOCIETY, vol. 39, no. 1, pp. 235–256.
Copyright (c) 2025 Kabytov P.P., Nazarov N.A.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.