Challenge

How to achieve user trust in digital government services that use data and artificial intelligence algorithms, while ensuring their responsible, safe and fair use. 

Problem and current context

Artificial intelligence (AI) has become an integral part of our daily lives. Governments use it, for example, to improve the accessibility and experience of public services, to optimize processes, or to manage public space.

Algorithms are the foundation of AI systems, and they often act as “black boxes” for the people who use them and are affected by them. To prevent this, it is essential that users can understand the algorithms: how a certain decision has been reached and why it was made. This is even more important for government, which can only function when there is trust between it and the public. A good administration must allow all citizens to understand the reasons for its decisions and give them the possibility of challenging them.

In democratic governments, trust is guaranteed through numerous legal safeguards. Regarding the obligation to make transparent the AI algorithms used in digital government services, the law establishes the right of individuals to obtain an explanation of automated decisions (GDPR) and provides for the creation of “mechanisms” so that the algorithms involved in decision-making take transparency and explainability criteria into account (Comprehensive Law 15/2022 for equal treatment and non-discrimination). However, it does not establish any specific mechanism or legal instrument for doing so; it only gives recommendations and leaves it up to organizations to find the most suitable tool to:

  • provide meaningful information, in clear and understandable language, about the logic of the algorithms used and their expected consequences,
  • allow people to understand why an algorithm has produced a certain result, and indicate the body before which they can appeal particular decisions or contest its general operation,
  • open the algorithms to democratic scrutiny.

Given the lack of concrete mechanisms, techniques or methods for achieving algorithmic transparency, the AOC has carried out a study to identify standardized forms of transparency and to prioritize the understandability of the algorithms we apply to our digital government services.

This is where the algorithmic transparency initiative we present comes from.

Proposed solution 

Transparency in the development and use of AI is a crucial issue to ensure that this technology is used responsibly and fairly towards society.

As a means to achieve algorithmic transparency, the AOC proposes publishing, on the Transparency Portal, an understandable report for each algorithm applied to digital government services.

This initiative aims to be a tool to:

  • help people understand how the algorithms used in local government work and what their purpose is,
  • provide meaningful transparency in a standardized way and allow comparison between different algorithms,
  • make it easy for everyone to give their opinion and participate in the creation of human-centered algorithms.

Furthermore, although the initiative focuses mainly on artificial intelligence systems, it also includes transparency reports for deterministic algorithms in sensitive use cases, such as complex automation systems for wide-ranging social assistance, on the grounds that they too can have a significant impact on people.

Content of the report

Each algorithmic transparency report contains:

  • the problem to be solved with the algorithm in question,
  • the explanation of the characteristics of the algorithm as well as its operation,
  • the groups that may be affected by the algorithm,
  • the risk analysis or data protection impact assessment that has been carried out (as applicable) to determine the possible discriminatory biases of the algorithm,
  • the security risk analysis carried out to identify possible availability and security contingencies of the system containing the algorithm,
  • the risk management carried out and the measures applied to guarantee respect for fundamental rights,
  • the body before which particular decisions of the algorithm can be appealed, or its general operation challenged, where the law requires it,
  • the contact details of the person responsible for the algorithm and of its supplier.

→ See the AOC's model algorithmic transparency report (PDF)
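To make the structure listed above concrete, the sketch below models the report's contents as a simple data record in Python. It is a minimal illustration only: the class and field names, and the example values, are assumptions made for this sketch and do not reflect the AOC's actual data model or publication format.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of an algorithmic transparency report;
# names are illustrative assumptions, not the AOC's actual schema.
@dataclass
class TransparencyReport:
    problem_addressed: str          # problem the algorithm is meant to solve
    algorithm_description: str      # characteristics and operation of the algorithm
    affected_groups: list[str]      # groups that may be affected by the algorithm
    bias_assessment: Optional[str]  # risk analysis or DPIA on discriminatory bias, if applicable
    security_assessment: str        # availability and security contingency analysis
    risk_measures: list[str]        # measures applied to safeguard fundamental rights
    appeal_body: Optional[str]      # body for appeals, where the law requires it
    owner_contact: str              # person responsible for the algorithm
    supplier_contact: str           # supplier of the algorithm

# Hypothetical example of a report ready for publication.
report = TransparencyReport(
    problem_addressed="Prioritize citizen support requests",
    algorithm_description="Supervised text classifier; see full report",
    affected_groups=["service users"],
    bias_assessment="DPIA completed",
    security_assessment="Availability and security contingencies reviewed",
    risk_measures=["human review of low-confidence cases"],
    appeal_body="Municipal appeals board",
    owner_contact="data-officer@example.org",
    supplier_contact="vendor@example.org",
)
```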

How the content of the report was determined

To determine the content of the algorithmic transparency reports, the Innovation and Data team, with the support of the AOC's legal services, carried out a study that includes:

  1. An analysis of existing regulations addressing the transparency of automated decisions made by AI algorithms, paying particular attention to:
  • the European General Data Protection Regulation (GDPR)
  • the European Union Regulation on artificial intelligence (AI Act)
  • Law 40/2015 on the Legal Regime of the Public Sector (LRJSP) and the Regulations on the performance and operation of the Public Sector by electronic means (RD 203/2021)
  • Comprehensive Law 15/2022 for equal treatment and non-discrimination
  2. Documents providing guidelines and guidance for achieving safe and trustworthy AI, notably:
  • Ethics Guidelines for Trustworthy AI, from the European Commission's High-Level Expert Group on AI (April 2019)
  • Report “Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights” by the Commissioner for Human Rights of the Council of Europe (May 2019)
  • Principles for responsible stewardship of trustworthy AI, from the OECD Council Recommendation on Artificial Intelligence (May 2019)
  • APDCAT “Automated Decisions in Catalonia” report (January 2020)
  • European Declaration on Digital Rights and Principles for the Digital Decade (January 2023)
  • NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) (January 2023)
  3. Algorithmic transparency initiatives of other administrations, at varying stages of development, were also explored, notably:
  • Register of Algorithms of the city of Amsterdam – implemented
  • Register of Artificial Intelligence of the city of Helsinki – implemented
  • Eurocities algorithmic transparency standard, based on the UK standard – implemented
  • Register of Municipal Algorithms of Barcelona City Council 
  • Register of Algorithms of the ICT Social Health Foundation 
  • Radar of AI algorithms of the Table of entities of the Third Social Sector of Catalonia
  • Proposal for an ethical work framework for the algorithms of the Generalitat de Catalunya

Conclusions of the study

Through the European Declaration on Digital Rights and Principles for the Digital Decade, the member states of the European Union undertake to ensure an adequate level of transparency in the use of algorithms and artificial intelligence, and to ensure that people are informed and trained to use them when they interact with them.

To comply with the transparency requirement for AI systems of public administrations, current regulations specify the information that must be provided in each case [1], but they do not define a concrete mechanism, standardized format or legal instrument for doing so, although:

  • the AI Regulation establishes the requirement to keep a record of all relevant information on high-risk AI systems (art. 17);
  • Comprehensive Law 15/2022 for equal treatment and non-discrimination speaks of favoring the implementation of “mechanisms” so that the algorithms involved in decision-making take into account criteria of bias minimization, transparency and accountability (art. 23);
  • the Regulation on the performance and operation of the Public Sector by electronic means establishes the obligation to publish on the administration's website the list of automated administrative actions (AAA), which may or may not involve the use of AI algorithms, and to accompany each AAA with a description of its design and operation (art. 11).

We explored different initiatives being promoted by European local and regional governments in order to find a standardized way to comply with the transparency requirement for the AI algorithms we apply to our digital government services.

Among all the initiatives explored, and in accordance with the legal obligations in force, we have concluded that the publication of algorithmic transparency reports on the Transparency Portal is a quick and simple solution that facilitates compliance with the transparency requirement established in the regulations and helps gain users' trust in the AI systems of public administrations.

The study also allowed us to determine what relevant information needs to be provided about AI-based public services so that users can understand how algorithms make decisions and how those decisions are verified. In addition, the study led us to develop an agile and practical methodology to identify and analyze the most likely risks that the use of AI algorithms poses to fundamental rights, and to connect them with the measures that need to be applied in each case to guarantee respect for those rights.

Risk management methodology focused on the protection of fundamental rights

To guarantee reliable, ethical and safe AI in public services, the AOC has developed its own risk management methodology. This methodology, focused on the protection of fundamental rights, addresses the possible risks that may affect citizens and details how they can be mitigated or minimized. The methodology consists of three main steps, each with a solid foundation in international regulations and guidelines.

  • Step 1: Identification of fundamental principles and rights to be protected
    To ensure an AI that respects democratic values, we have relied on the Charter of Fundamental Rights of the European Union, which helps us define the main families of rights and principles that may be affected by AI. These are:

    • A) Equality and non-discrimination
    • B) Data protection and privacy
    • C) Safety and robustness
    • D) Transparency and explainability
    • E) Accountability and auditability
    • F) Sustainable development and solidarity
  • Step 2: Identification of associated risks and their relationship with the principles and rights to be protected
    To identify the most common risks in the use of AI, the AOC has adopted the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0), which relates potential risks to the aforementioned principles and rights. This approach makes it possible to determine which risks can endanger each family of fundamental rights, thus helping to anticipate and manage problems such as discrimination, privacy violations or security failures.

  • Step 3: Determination of risk management measures
    To identify the mechanisms and safeguards that may be most appropriate and effective in each case to prevent the violation of fundamental principles and rights, we have relied on Chapter II of the European Commission's Ethics Guidelines for Trustworthy AI and on the recently approved AI Regulation (mandatory application from 2/8/2026), so that the mechanisms are determined according to the risk level of the AI system and for certain AI systems or models. For example:
    • High risk: subject to specific security and transparency requirements, and to the obligation to carry out a fundamental rights impact assessment before deploying the system (art. 27).
    • Limited risk: subject to minimum transparency obligations, so that users can make informed decisions and are aware when they are interacting with an AI.

This methodology helps us maintain an AI system that is transparent, secure and committed to protecting fundamental rights.

→ In the presentation (PPT, 7/11/2024) you can see a summary table of the three steps of the AOC's risk management methodology.
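As a rough illustration of how the three steps connect in practice, the following Python sketch maps each family of rights to example risks and candidate safeguards, with the safeguards selected by the system's AI Act risk level. The specific risk names and measures are illustrative assumptions for this sketch, not the AOC's actual mapping nor the NIST AI RMF taxonomy.

```python
# Step 1 -> Step 2: rights families mapped to example risks.
# Entries are assumptions for illustration, not the AOC's actual mapping.
RIGHTS_TO_RISKS = {
    "A) Equality and non-discrimination": ["biased training data", "proxy discrimination"],
    "B) Data protection and privacy": ["excessive data collection", "re-identification"],
    "C) Safety and robustness": ["model drift", "adversarial inputs"],
    "D) Transparency and explainability": ["opaque decision logic"],
    "E) Accountability and auditability": ["missing decision logs"],
    "F) Sustainable development and solidarity": ["unequal access to the service"],
}

# Step 3: safeguards chosen according to the AI Act risk level of the system.
MEASURES_BY_RISK_LEVEL = {
    "high": [
        "fundamental rights impact assessment before deployment (AI Act, art. 27)",
        "specific security and transparency requirements",
    ],
    "limited": [
        "minimum transparency obligations (inform users they are interacting with an AI)",
    ],
}

def plan_safeguards(rights_family: str, risk_level: str) -> dict:
    """Return the identified risks and applicable measures for one rights family."""
    return {
        "rights_family": rights_family,
        "risks": RIGHTS_TO_RISKS.get(rights_family, []),
        "measures": MEASURES_BY_RISK_LEVEL.get(risk_level, []),
    }

# Example: a high-risk system assessed against equality rights.
print(plan_safeguards("A) Equality and non-discrimination", "high"))
```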

Status of the project

In production. Four algorithmic transparency reports have been published.

More information

Practical cases implemented:

Cases in development phase:

Bibliographic references:

Legal obligations:

European framework

  • Regulation (EU) 2016/679 of the European Parliament and of the Council, of 27 April 2016, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (“GDPR”).
  • Guidelines on Automated Individual Decision-Making and Profiling for the purposes of Regulation 2016/679. Adopted on 3 October 2017, last revised and adopted on 6 February 2018. Article 29 Data Protection Working Party.
  • Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (AI Act) and amending certain Union legislative acts. 12/7/2024.

State framework

  • Organic Law 3/2018, of 5 December, on the protection of personal data and the guarantee of digital rights (“LOPDGDD”).
  • GDPR compliance of processing operations that incorporate Artificial Intelligence: an introduction. February 2020. AEPD.
  • Law 15/2022, of 12 July, comprehensive law for equal treatment and non-discrimination.

Catalan framework

  • Law 19/2014, of 29 December, on transparency, access to public information and good governance.
  • Decree 76/2020, of 4 August, on Digital Administration.

[1] For example, the AI Regulation specifies the information that must be given in relation to certain AI systems (art. 50); the GDPR determines the information that must be provided to the data subject when automated decisions are made (art. 13); and the LRJSP establishes the obligation to indicate, at the electronic headquarters, the body responsible for an automated administrative action, for the purposes of appeal (art. 41).