
Implementation of the AI Act: Numerous Tensions with Existing Regulations

The European Union and its member states face the challenge of implementing the AI Act in a timely and user-friendly manner. A new analysis commissioned by the Bertelsmann Foundation highlights the importance of better aligning existing European laws with the new requirements. This involves more than legal precision, as the AI Act’s implications affect key areas of the economy and society. 

Contact person

Asena Soydaş
Project Manager

With the adoption of the Artificial Intelligence Regulation (AI Act) in 2024, Europe took a decisive step towards regulating AI systems to ensure their safe use in alignment with European values. As an additional piece of the puzzle, the AI Act complements previous legislative efforts to make the European Union and its internal market fit for the digital age. 

By August 2026, the task is to implement the AI Act step by step and concretely define the practical implications of its provisions. However, as with any complex regulatory framework, it is already apparent that certain elements do not yet fit seamlessly together. Inconsistencies, overlaps, and ambiguities could hinder smooth implementation and lead to legal uncertainties that should ideally be avoided. 

Identifying Conflicts Between Digital and Sectoral Regulations 

Commissioned by the reframe[Tech] – Algorithms for the Common Good project of the Bertelsmann Foundation, Prof. Dr. Philipp Hacker’s study "The AI Act between Digital and Sectoral Regulations" examines key tensions and synergies between the AI Act and existing laws. Many AI applications subject to the horizontal provisions of the AI Act are simultaneously governed by other digital and sector-specific requirements. Using examples from selected digital laws—such as the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA)—as well as sectoral legislation in the finance, healthcare, and automotive industries, the study explores current challenges related to the AI Act. 

Key findings include: 

  • The risk analysis obligations of the DSA and the AI Act may overlap, particularly for platforms integrating generative AI technologies. Coordinating platform-specific and AI-related risks is a major challenge. 

  • There are not yet clear rules for reusing personal data to train generative AI. This makes it difficult to comply with the provisions of the GDPR and the requirements of the AI Act at the same time. 

  • In the financial sector, differing requirements under data protection law and the AI Act could create overlaps that complicate AI-powered risk analyses. 

  • In the automotive industry, the integration of driver assistance systems into existing product safety and liability regulations poses a dual regulatory challenge. 

  • In the healthcare sector, contradictory requirements, combined with already limited capacities in approval mechanisms, could slow the dissemination of AI-based medical applications. These include, for example, AI systems for cancer diagnosis or for drafting medical reports. 

These examples illustrate that the need for adjustment varies by sector. However, common structural measures across all digital and sectoral regulatory frameworks can be identified: 

  • Better alignment of existing regulatory frameworks to avoid duplication and improve efficiency. The financial sector offers a working example: existing rules on internal organization are sufficient to meet the AI Act's quality management requirements, provided EU guidelines are followed. Similar integrations could be achieved through implementing regulations from the European Commission or guidance from national supervisory authorities on applying the AI Act in specific sectoral contexts. 

  • Coordinated national and European strategies to harmonize AI regulation with other legal acts and sustainably resolve regulatory contradictions. Regular reviews of the regulatory framework are also recommended to ensure that technological and societal developments are adequately reflected. 

Avoiding Fragmentation and Regulatory Arbitrage

Ultimately, this is not just a matter of legal precision. The AI Act’s impact spans critical economic and societal domains. Regulatory contradictions and uncertainties jeopardize both companies’ innovation capacity and the efficiency of regulation. They could lead to fragmented responsibilities and regulatory arbitrage, where businesses may seek to bypass stricter requirements. 

To align the diverse demands of AI regulation consistently, a dialogue involving all relevant stakeholders in Europe and its member states—including legislators, regulatory authorities, companies, and civil society—is essential. Only through an intelligently coordinated approach can the AI Act reach its full potential.