The EU postpones part of the obligations of the AI Regulation: what does it mean for administrations?

On May 7, 2026, the European Union reached a provisional agreement to postpone the application of some obligations of the European Regulation on artificial intelligence (AI), especially those related to high-risk AI systems.

Specifically, the requirements applicable to AI systems used in areas such as biometrics, critical infrastructure, education, employment, law enforcement or border management will apply from December 2, 2027.

On the other hand, AI systems used as safety components and covered by European sectoral legislation will have to adapt from August 2, 2028.

Some obligations linked to the marking and identification of AI-generated content, such as artificially created images, videos or audio, are also delayed until December 2, 2026.

This postponement is part of the European package “Digital Omnibus”, aimed at simplifying obligations and reducing administrative burdens to facilitate the deployment of AI in Europe.

However, this change of calendar does not mean that the obligations, or the need to prepare for them, disappear. The European debate on this postponement has itself shown that many Member States still lack the structures, authorities or mechanisms needed to fully implement the Regulation, and this additional margin seeks to avoid legal uncertainty and facilitate a more realistic implementation.

For Catalan public administrations, this postponement means more time to prepare, but it does not eliminate the need to make progress on aspects such as transparency, human oversight, data quality, traceability and risk assessment of AI systems.

In parallel, the EU has also agreed to ban AI applications intended to generate non-consensual sexualized images (“deepnudes” or “nudifiers”) and child sexual abuse content, strengthening the protection of fundamental rights and the dignity of people.

It should be noted, however, that this agreement still needs to be formally approved by the European Parliament and the Council before it can definitively enter into force.

At the AOC we continue to work to help Catalan administrations prepare for this new regulatory framework, promoting methodologies and tools for responsible AI governance.

In this context, algorithmic transparency reports (TRAL) are being consolidated as a strategic tool to explain, in a clear and understandable way, how AI systems work, what data they use, what risks they may pose, and what safeguards apply to ensure their responsible and transparent use, aligned with citizens' rights.
