EY helps clients create long-term value for all stakeholders. Enabled by data and technology, our services and solutions provide trust through assurance and help clients transform, grow and operate.
At EY, our purpose is building a better working world. The insights and services we provide help to create long-term value for clients, people and society, and to build trust in the capital markets.
On 17 November 2025, the President signed Law No. 230-VIII of the Republic of Kazakhstan “On Artificial Intelligence” (hereinafter referred to as the ‘Law’).
The Law creates a legal framework for the development, use and regulation of artificial intelligence (AI) systems in Kazakhstan, establishing principles of security, transparency and accountability.
The Law will come into force on 18 January 2026.
What you need to know about the new law:
Classification of AI systems and risk-based approach
The Law introduces a risk-based classification of AI systems according to their degree of impact on the safety and rights of citizens. Assignment of a system to a specific category is carried out by its owner or proprietor.
Minimum risk degree: a disruption has minimal impact on users.
Medium risk degree: failures may result in material damage or moral harm, or reduced operational efficiency.
High risk degree: a disruption of operations may lead to emergencies or to threats to defence, security or the lives of citizens.
Nota bene (important):
For high-risk systems seeking inclusion in the list of “trusted” systems, an audit by special private auditors is required. The audit must be conducted in accordance with the rules for auditing information systems approved by Order of the Minister of Information and Communications of the Republic of Kazakhstan No. 263 dated 13 June 2018.
Prohibited practices in the use of AI
The Law establishes a direct ban on the creation and operation of AI systems with the following functions:
The use of manipulative techniques that influence the subconscious or distort behavior.
Exploiting vulnerabilities (age, disability) to cause harm.
Social scoring — the assessment and classification of people based on their social behavior or personal characteristics.
Determining a person's emotions without their consent (except in cases provided for by law).
Classification based on biometrics for discrimination (race, political views, etc.).
Business obligations: Transparency and labelling
To ensure transparency and trust, the Law imposes a number of obligations on owners and operators of AI systems:
The distribution of synthetic results (deepfakes, generated content) is only permitted if they are labelled in a machine-readable form and the user is visually warned.
Users must be notified that they are interacting with AI or receiving services created with its help.
Owners are required to implement a continuous risk management process throughout the entire lifecycle of the AI system.
Maintaining technical documentation for the AI system depending on the degree of risk.
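The Law requires that synthetic results be labelled in machine-readable form, but it does not prescribe a specific file format or field layout. The sketch below shows one possible approach, writing a JSON sidecar file next to the generated content; the field names and the `.ai-label.json` suffix are illustrative assumptions, not requirements of the Law.

```python
import json

def label_synthetic_content(content_path: str, generator: str) -> dict:
    """Attach a machine-readable 'synthetic content' label as a JSON sidecar.

    The field layout below is illustrative only; the Law mandates
    machine-readable labelling but does not fix a format.
    """
    label = {
        "synthetic": True,       # flags the content as AI-generated
        "generator": generator,  # which AI system produced the content
        "label_version": "1.0",
    }
    # Write the label next to the content file so downstream tools can find it.
    with open(content_path + ".ai-label.json", "w", encoding="utf-8") as f:
        json.dump(label, f)
    return label
```

In practice, an established provenance standard such as C2PA content credentials may be a more robust choice than an ad-hoc sidecar, since it embeds the label in the media file itself; the Law's visual-warning requirement would still need to be met separately in the user interface.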
Intellectual property
The law clarifies copyright issues in relation to generative neural networks:
Works created with the help of AI are protected by copyright only if there is creative human input.
Textual requests (prompts) that are the result of creative activity are recognised as objects of copyright.
The use of works for training AI is permitted if the copyright holder has not established a direct prohibition in machine-readable form (opt-out).
Such use is not considered a violation of exclusive rights.
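The opt-out model above can be sketched in code: training use is permitted by default and blocked only where the copyright holder has set an explicit machine-readable prohibition. The Law does not specify the signal's format, so the metadata key below is a hypothetical placeholder (web practice suggests candidates such as robots.txt directives or `noai` meta tags).

```python
def may_use_for_training(work_metadata: dict) -> bool:
    """Decide whether a work may be used for AI training under an opt-out model.

    The key 'ai_training_prohibited' is a hypothetical stand-in for whatever
    machine-readable prohibition signal the copyright holder publishes; the
    Law does not prescribe a concrete format.
    """
    # Absence of an explicit prohibition means use is permitted (opt-out).
    return not work_metadata.get("ai_training_prohibited", False)
```

Usage: `may_use_for_training({})` returns `True` (no prohibition set), while `may_use_for_training({"ai_training_prohibited": True})` returns `False`.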
User rights
Individuals are granted the following rights when interacting with AI:
The right to request information from the owner (or proprietor) of an AI system about any decision it has made, including confirmation that content was generated by a neural network.
The right to protection from automated discrimination and the right to demand that AI decisions be corrected by their owners (proprietors).
The right to request information about the data on which neural network decisions (responses) are based.
Governmental regulation
The National Artificial Intelligence Platform, managed by a dedicated platform operator, will support the development, training and trial operation of software products and AI models.
The work of the National AI Platform Operator will be regulated by an authorised body (the Ministry of Artificial Intelligence of the Republic of Kazakhstan), which will determine the priority sectors of the economy for providing access to computing resources.
“Data libraries” (collections of information for training neural networks) may be created by private individuals or by an authorised body (with data libraries stored by the national AI platform operator).
Practical steps for businesses (Compliance Checklist)
To ensure compliance with the new Law, companies are advised to:
Liability for non-compliance with the new AI Law
In addition to the new Law, amendments were also made to the Code of Administrative Offences of the Republic of Kazakhstan, providing for administrative liability for violations of the requirements of the Law on AI, namely:
Failure by owners or proprietors of AI systems to inform users about synthetic results of an AI system's operation that may mislead them;
Failure by owners or proprietors of AI systems to manage the risks of high-risk AI systems, where this results in adverse effects on human health or well-being, the creation or dissemination of prohibited or false information, discrimination or human rights violations, or other harm, provided such action (or inaction) does not constitute a criminal offence.
Punishable by a fine*:
For the initial violation: USD 127.5 for individuals; USD 170 for small businesses and non-profit organisations; USD 255 for medium-sized enterprises; USD 850 for large enterprises.
For a repeat offence within a year: USD 255 for individuals; USD 425 for small businesses and non-profit organisations; USD 595 for medium-sized enterprises; USD 1,700 for large enterprises.
*Fine amounts are set in Monthly Calculation Index (MCI) units; the USD figures are approximate equivalents.
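Because fines are denominated in MCI units rather than fixed currency amounts, their monetary value changes whenever the MCI is reset by the annual national budget law. A minimal sketch of the conversion, with the MCI value passed in as a parameter (the figures in the example call are hypothetical, not taken from the Law):

```python
def fine_in_kzt(mci_count: int, mci_value_kzt: int) -> int:
    """Convert an administrative fine expressed in MCI units into tenge.

    The MCI value in tenge is set each year by the national budget law,
    so it is a parameter here rather than a hard-coded constant.
    """
    return mci_count * mci_value_kzt

# Hypothetical example: a 20 MCI fine at an assumed MCI of 4,000 KZT.
example = fine_in_kzt(20, 4_000)  # 80,000 KZT
```

This is why the USD amounts in the table above are approximate: they depend on both the current MCI value and the exchange rate at the time of the violation.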