The European Union's Artificial Intelligence Act (Regulation (EU) 2024/1689), which entered into force on 1 August 2024, is part of the vision “A Europe fit for the digital age” (also known as the EU digital strategy). It is a pivotal development in the regulation of AI, providing for the first time a comprehensive regulatory framework that aims to strike a balance between reaping the greatest benefits from AI systems and safeguarding fundamental rights so as to enable democratic control.
The EU AI Act is a lengthy piece of legislation comprising 13 Chapters, each codifying a separate aspect, ranging from transparency obligations, governance for ensuring compliance and codes of conduct to enforcement and penalties.
Its enforcement is taking place in phases: the first began in February 2025, the second takes effect in August 2025 and full enforcement follows by August 2026, with further evaluations and impact assessments scheduled from 2027 through 2030.
This article is part of a series of publications on the ambit and targets of the Act, with a deeper dive into its Chapters. It focuses on the parts already in force and those coming into force in a few weeks, specifically on 2 August 2025.
Scope
The EU AI Act contains a relatively extensive preamble of over 40 pages, setting out the rationales behind, and the EU’s intentions for, this AI codification. A noteworthy example is the explicit clarification that the identification or inference of emotions of natural persons, such as happiness or sadness, as an AI practice refers solely to inferences based on their biometric data.
The EU AI Act applies to a broad spectrum of stakeholders across the AI value chain, including operators, providers, deployers, importers/distributors, product manufacturers and authorized representatives. Territorially, its scope extends beyond EEA borders: compliance is mandatory for AI systems placed on the EU market or affecting EU individuals, irrespective of their geographic origin, rendering the Act a global benchmark for AI governance.
Prohibited AI Practices
The first part of the Act already in force is Chapter II, which prohibits the following AI practices:
- Manipulative Practices, whereby AI systems manipulate human behaviour in a manner that could cause physical or psychological harm, are strictly forbidden. This includes systems that exploit the vulnerabilities of specific groups, such as children or individuals with disabilities.
- Social Scoring, such as evaluating individuals based on their social behaviour or compliance with certain norms, is prohibited, as it can lead to discrimination and violation of privacy rights.
- Real-time Remote Biometric Identification in public spaces is banned, except in specific cases, for instance law enforcement in relation to serious crimes such as abduction or threats to life or safety.
- Deepfakes and Misinformation: AI-generated content that can deceive or mislead individuals is prohibited, as is the untargeted scraping of facial images from the internet or CCTV footage.
- Biometric categorisation: systems for profiling, i.e. to deduce, for example, race, political opinions, religious beliefs or sexual orientation, are prohibited, as they promote discrimination and inequality. An exception to this prohibition is the use of such systems for law enforcement in relation to the serious crimes referred to above.
- Emotion-inference applications: the use of AI systems to infer the emotions of a natural person in the workplace or in education institutions is not allowed, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.
General-purpose AI models
The further implementation of the EU AI Act as of 2 August 2025 imposes certain obligations on providers of general-purpose AI models.
A ‘general-purpose AI model’ (GPAI) is defined as a model trained on large amounts of data using self-supervision, capable of performing a wide range of distinct tasks and of being integrated into a variety of downstream systems or applications, whether used in Europe or worldwide. GPAI models are classified into two categories depending on their impact capabilities, namely GPAI and GPAI with systemic risk.
Providers of GPAI models are required to:
- maintain comprehensive technical documentation for their models and make it readily available to the AI Office, national authorities and downstream providers upon request.
- implement policies to comply with Union copyright laws and publicly disclose detailed summaries of the training content used for their models.
Additionally, GPAI models with systemic risk are subject to enhanced obligations, such as performing model evaluations to identify and mitigate systemic risks and keeping track of all relevant information on serious incidents and the corrective measures taken to address them.
The AI Office, in collaboration with AI providers, national authorities and other experts, is expected to finalize the GPAI Code of Practice, which will set out how all GPAI providers can consistently comply with the above obligations.
Enforcement
The enforcement of the AI Act is phased and varies depending on the type and risk level of the AI model. Some compliance obligations are already in effect, while others phase in over a three-year period.
By 2 August 2025, Member States are required to designate their national competent authorities and to align their national policies, legal frameworks and digital infrastructure with the AI Act, recognizing both its regulatory imperative and the strategic opportunity it presents to foster innovation and economic growth.
Cyprus has designated three authorities to oversee compliance with the EU AI Act, namely:
- (i) the Commissioner for Personal Data Protection,
- (ii) the Human Rights Ombudsman and
- (iii) the Attorney General.
Penalties
Engaging in any of the prohibited AI practices listed in the Act can result in substantial financial penalties of up to EUR 35 million or 7% of a company’s worldwide annual turnover, whichever is higher. Similarly, the EU Commission may impose fines for the intentional or negligent violation of the reporting obligations for GPAI models under the Act of up to 3% of a provider’s global annual turnover or EUR 15 million, whichever is higher.
These penalties are designed to be effective, proportionate, and dissuasive, reflecting the EU’s commitment to responsible AI use.
Next steps
The obligation for organizations to maintain safe and responsible AI systems is not new, and given the high stakes, from financial exposure to reputational risk, there is an unquestionable need for businesses to meticulously examine their systems and practices against the requirements of the EU AI Act.
For additional information, please contact our team:
Andria Koukounis, Partner, Andria.Koukounis@cylaw.ey.com
Nicholas Yiasemis, Manager, Nicholas.Yiasemis@cylaw.ey.com
Elina Iosifidou, Associate Lawyer, Elina.Iosifidou@cylaw.ey.com