General provisions and prohibited AI practices: the AI Act rules already in force and “unacceptable” risks

30 April 2025

Regulation (EU) 2024/1689 of 13 June 2024 on artificial intelligence (the ‘Artificial Intelligence Act’) has had its first set of mandatory provisions applying since 2 February 2025, specifically Chapters I and II, concerning, respectively, general provisions and prohibited AI practices, i.e. practices linked to risks deemed unacceptable for the safety and fundamental rights of European citizens.

As articulated in Recital (4) of the Regulation, AI technologies hold the potential to enhance forecasting capabilities, optimize resource management and tailor digital services to individual needs, thereby promoting economic productivity, environmental responsibility and societal development.

Nevertheless, as recognized in Recitals (5) and (6), the deployment of AI systems may entail risks, potentially harming fundamental rights, entrenching systemic biases and inflicting both material and immaterial harm, whether physical, psychological, financial or social. These scenarios call for a robust and ethically grounded legal framework that promotes a human-centric approach to the development and use of AI, so that these tools serve to increase human well-being, in line with the core values of Article 2 of the Treaty on European Union (TEU) and the Charter of Fundamental Rights of the European Union.

As anticipated, Chapters I and II of the Regulation became applicable last February. Chapter I contains the general provisions of the Regulation, which define its subject matter and scope of application and set out the definitions of the terminology to be used from now on, such as ‘deployer’, ‘deep fake’ and ‘emotion recognition system’. These concepts will gain further relevance as the Regulation becomes applicable in its entirety.

Furthermore, Chapter II of the Regulation, concerning ‘Prohibited AI practices’, has also become applicable. It imposes a total ban on the deployment within the European Union of AI systems posing risks considered ‘unacceptable’ (i.e., capable of producing serious adverse effects on fundamental rights, individual freedoms, security and privacy).

By way of example, and insofar as relevant for the purposes of this article, the ban applies to AI systems that rely on the following practices:

  • Subliminal or deliberately manipulative techniques with the objective, or the effect, of materially distorting the behaviour of a person or a group of persons, impairing their decision-making autonomy and causing them to take a decision that they would not otherwise have taken, thereby causing (or being reasonably likely to cause) significant harm.
  • Exploitation of vulnerabilities linked to age, disability or socio-economic conditions with the objective, or the effect, of materially distorting the behaviour of that person or of a person belonging to that group, thereby causing (or being reasonably likely to cause) that person or another person significant harm.
  • Evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or on known, inferred or predicted personal or personality characteristics, with exploitation of the resulting ‘social score’.
  • Predictive assessment of the risk that a natural person will commit a criminal offence, based solely on profiling or on the assessment of his or her personality traits and characteristics.
  • ‘Real-time’ remote biometric identification in publicly accessible spaces for law enforcement purposes.
  • Creation or expansion of facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage, as well as systems that infer the emotions of natural persons in the workplace and in educational institutions.

With reference to this last provision, which directly concerns the workplace through the prohibition on systems that monitor employees’ emotions, it will be interesting to see how it will fit into the current Italian regulatory framework, given that these tools, although generally prohibited, remain permitted where used for medical or safety reasons.

Finally, pursuant to the general principles already applicable (e.g., Articles 4, 13 and 52) and to national legislation already in force on the subject, it may be useful for companies to start adopting periodic verification procedures for AI-supported decision-making mechanisms, to ensure that they comply with the principles of fairness and non-discrimination, as well as procedures to inform employees in advance about the use of AI in decision-making processes.

2025 - Morri Rossetti
