
The Law of the Republic of Kazakhstan “On Artificial Intelligence” has been adopted

On 17 November 2025, the President signed Law No. 230-VIII of the Republic of Kazakhstan “On Artificial Intelligence” (hereinafter referred to as the “Law”).

The Law creates a legal framework for the development, use and regulation of artificial intelligence (AI) systems in Kazakhstan, establishing principles of security, transparency and accountability.

The Law will come into force on 18 January 2026.

What you need to know about the new law:

Classification of AI systems and risk-based approach

The law introduces a classification of AI systems based on their degree of impact on the safety and rights of citizens. The classification of a system into a specific category is carried out by the owner or proprietor.

  • Low-risk systems: a disruption has minimal impact on users.
  • Medium-risk systems: failures may result in material damage or moral harm, or in reduced operational efficiency.
  • High-risk systems: a disruption may lead to emergencies or to threats to defence, security or the lives of citizens.

Nota bene (important):

High-risk systems applying for inclusion in the list of “trusted” systems must undergo an audit by special private auditors, conducted in accordance with the rules for auditing information systems approved by Order No. 263 of the Minister of Information and Communications of the Republic of Kazakhstan dated 13 June 2018.

Prohibited practices in the use of AI

The law establishes a direct ban on the creation and operation of AI systems with the following functions:

  • The use of manipulative techniques that influence the subconscious or distort behaviour.
  • The exploitation of vulnerabilities (such as age or disability) to cause harm.
  • Social scoring: the assessment and classification of people based on their social behaviour or personal characteristics.
  • Determining a person's emotions without their consent (except in cases provided for by law).
  • Classification of individuals based on biometric data for discriminatory purposes (race, political views, etc.).

Business obligations: Transparency and labelling

To ensure transparency and trust, the Law imposes a number of obligations on owners and proprietors of AI systems:

Intellectual property

The law clarifies copyright issues in relation to generative neural networks:

User rights

Individuals are granted the following rights when interacting with AI:

  • The right to request information from the owner (or proprietor) of an AI system about any decision made by the AI, including confirmation that particular content was generated by a neural network.
  • The right to protection from automated discrimination and the right to demand that AI decisions be corrected by their owners (proprietors).
  • The right to request information about the data on which neural network decisions (responses) are based.

Governmental regulation

  1. The National Artificial Intelligence Platform, and the operator that will manage it, will oversee the development, training and trial operation of software products and AI models on the platform.
  2. The work of the National AI Platform Operator will be regulated by an authorised body (the Ministry of Artificial Intelligence of the Republic of Kazakhstan), which will determine the priority sectors of the economy for providing access to computing resources.
  3. “Data libraries” (collections of information for training neural networks) may be created by private individuals or by an authorised body (with data libraries stored by the national AI platform operator).

Practical steps for businesses (Compliance Checklist)

To ensure compliance with the new Law, companies are advised to:

Liability for non-compliance with the new AI Law

In addition to the new Law, amendments were also made to the Code of Administrative Offences of the Republic of Kazakhstan, providing for administrative liability for violations of the requirements of the Law on AI, namely:

  • Failure by owners or proprietors of AI systems to inform users that output produced by an AI system is synthetic, where such output may mislead them;
  • Failure by owners or proprietors of AI systems to manage the risks of high-risk AI systems, where this results in harm to human health or well-being, the creation or dissemination of prohibited or false information, discrimination or human rights violations, or other harm, provided that such action (or inaction) does not constitute a criminal offence.

These violations are punishable by a fine*:

For the initial violation:
  • Individuals: USD 127.5
  • Small businesses and non-profit organisations: USD 170
  • Medium-sized enterprises: USD 255
  • Large enterprises: USD 850

For a repeat offence within a year:
  • Individuals: USD 255
  • Small businesses and non-profit organisations: USD 425
  • Medium-sized enterprises: USD 595
  • Large enterprises: USD 1,700

*Fine amounts are set in the Monthly Calculation Index (MCI); the USD figures are approximate conversions.

The MCI for 2026 is 4,325 tenge (around USD 8.5).
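For readers who track the statutory amounts in MCI rather than in US dollars, the short sketch below back-calculates the implied MCI multiples from the USD figures above, assuming the 2026 MCI of 4,325 tenge and the implied exchange rate of roughly 509 tenge per dollar. The MCI multiples (15/20/30/100 for an initial violation and 30/50/70/200 for a repeat offence) are inferred from this alert, not quoted from the Code of Administrative Offences, and should be verified against the Code itself.

```python
# Illustrative sanity check only: converts assumed MCI fine amounts into tenge
# and approximate USD, using the 2026 MCI value quoted above. The MCI multiples
# are back-calculated from the USD figures in this alert (an assumption), not
# taken from the Code of Administrative Offences.

MCI_KZT = 4_325                  # 2026 MCI in tenge (stated above)
KZT_PER_USD = MCI_KZT / 8.5      # implied by "around USD 8.5" per MCI (~509 KZT/USD)

# (entity category, assumed MCI for an initial violation, assumed MCI for a repeat offence)
ASSUMED_FINES_MCI = [
    ("Individuals",                       15,  30),
    ("Small businesses and non-profits",  20,  50),
    ("Medium-sized enterprises",          30,  70),
    ("Large enterprises",                100, 200),
]

for category, first_mci, repeat_mci in ASSUMED_FINES_MCI:
    first_usd = first_mci * MCI_KZT / KZT_PER_USD
    repeat_usd = repeat_mci * MCI_KZT / KZT_PER_USD
    print(f"{category}: initial {first_mci} MCI ({first_mci * MCI_KZT:,} KZT, ~USD {first_usd:,.1f}); "
          f"repeat {repeat_mci} MCI ({repeat_mci * MCI_KZT:,} KZT, ~USD {repeat_usd:,.1f})")
```

Running this reproduces the USD figures shown above (for example, 15 MCI is 64,875 tenge, or roughly USD 127.5 at the assumed rate).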