The Complete Guide to Using AI in the Government Industry in Kazakhstan in 2025
Last Updated: September 10th 2025

Too Long; Didn't Read:
Kazakhstan's 2025 government AI push pairs GovTech wins (eGov 26M+ services H1 2025; 14.8M users; 1,340 DGSC processes reengineered) with fast governance (Draft AI Law: 28 articles, first read 14 May 2025), a local LLM (148B tokens) and cyber risks (≈16M leaked records, ~30,000 incidents).
Kazakhstan's 2025 push to embed AI across the public sector is no abstract pilot - it's a coordinated, high‑stakes effort to speed services, cut costs and make government “one‑click” convenient for millions.
The Digital Government Support Center showcased practical projects at a Demo Day that reengineer workflows and shorten wait times, while the national eGov mobile channel handled more than 26 million services in H1 2025, proving scale and user demand are already here.
At the same time, lawmakers are fast‑tracking an AI law while the state stands up a national supercomputer and the QazTech sovereign platform, which means technical rollout must be matched by rules, audits and trained people to guard against data leaks and cyber risk. That reality turns training into strategic infrastructure: job‑focused courses like Nucamp's AI Essentials for Work bootcamp can help public teams move from policy to reliable deployment.
Metric | Value |
---|---|
eGov services accessed (H1 2025) | 26 million+ |
Registered eGov users | 14.8 million+ |
DGSC business processes reengineered (since 2021) | 1,340 |
Minor traffic accident app: insurance payout time | 40 → 5 days |
“Our projects bring real reductions in timelines, eliminate unnecessary procedures, and create convenient services.”
Table of Contents
- Regulatory environment & the Draft AI Law in Kazakhstan (2025 update)
- Data protection and privacy rules for AI in Kazakhstan
- Intellectual property, liability and evidentiary standards in Kazakhstan
- National AI infrastructure & models available to Kazakhstan government
- GovTech modernization: digital services and AI pilots in Kazakhstan
- Sectoral AI deployments and case studies in Kazakhstan (courts, finance, telecom, agri, logistics)
- Cybersecurity, trusted software and fraud mitigation for Kazakhstan public bodies
- Talent, education, visas and the AI ecosystem in Kazakhstan
- Conclusion & practical checklist for deploying AI in Kazakhstan government (2025)
- Frequently Asked Questions
Check out next:
Join a welcoming group of future-ready professionals at Nucamp's Kazakhstan bootcamp.
Regulatory environment & the Draft AI Law in Kazakhstan (2025 update)
Regulatory momentum in 2025 is turning Kazakhstan's AI ambitions into binding rules: the Mazhilis approved the Draft Law “On Artificial Intelligence” in its first reading on 14 May 2025, launching a relatively compact, 28‑article framework that explicitly borrows the EU's risk‑based approach while stressing a human‑centred legal philosophy; the draft builds in tiered oversight for high‑risk public‑sector systems, transparency and explainability obligations, restrictions on autonomous systems, and sharper liability for large‑scale or automated personal data processing (see the GRATA analysis of the Draft Law and local coverage in The Astana Times for context).
That lean structure - 28 articles versus the EU Act's 113 - is useful for speed, but it also means agencies must map each AI project to the proposed risk tiers, plan for audits and explainability, and push training and governance in parallel so practical safeguards aren't an afterthought; think of the draft as a pocket toolkit that agencies will need to supplement with specialist tools from larger international rulebooks as standards and guidance settle.
For government teams this translates into three immediate tasks: classify services against the draft's risk schema, document datasets and decision pathways for transparency, and budget for compliance‑grade engineering and legal review before wide rollout - steps that protect citizens and unlock the efficiency gains policymakers are pursuing.
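As an illustration of the first task, here is a minimal sketch in Python of how an agency might keep a risk‑register entry per AI service - the tier names and control lists are placeholders, since the draft law's final schema and obligations are still being settled:

```python
from dataclasses import dataclass, field

# Hypothetical risk tiers loosely mirroring the draft law's risk-based approach;
# the official tier names and obligations will come from the final text.
CONTROLS_BY_TIER = {
    "minimal": ["basic logging"],
    "limited": ["basic logging", "transparency notice to users"],
    "high": [
        "audit-grade logging",
        "explainability dossier",
        "named human overseer",
        "independent security review",
    ],
}

@dataclass
class AIServiceRecord:
    """One row in an agency's AI risk register."""
    name: str
    purpose: str
    risk_tier: str                      # "minimal" | "limited" | "high"
    datasets: list = field(default_factory=list)
    decision_pathway: str = ""          # short description for transparency docs

    def required_controls(self) -> list:
        return CONTROLS_BY_TIER[self.risk_tier]

# Example: classifying a benefits-eligibility assistant as high risk
service = AIServiceRecord(
    name="benefits-eligibility-assistant",
    purpose="Pre-screens citizen applications for social benefits",
    risk_tier="high",
    datasets=["application_forms_2024", "eligibility_rules_v3"],
    decision_pathway="Model scores the application; a human officer makes the final decision",
)
print(service.required_controls())
```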
Item | Detail |
---|---|
First reading | 14 May 2025 (Mazhilis) |
Draft size | 28 articles |
Core principles | legality, fairness, transparency, explainability, accountability, human‑centred |
“The bill reflects major global trends in AI regulation. Many countries have adopted systematic approaches to AI governance. The EU's AI Act serves as a model.”
Data protection and privacy rules for AI in Kazakhstan
As Kazakhstan moves to regulate AI, data protection sits at the centre of any safe public‑sector rollout: existing Law No. 94‑V (On Personal Data and Its Protection) and implementing guidance already require written consent for personal data, impose purpose‑limitation and storage rules, and oblige database owners to appoint responsible officers and keep records of consents and cross‑border transfers - practical constraints that shape how AI systems are trained and audited.
In particular, biometric information is treated as personal data and demands explicit consent before use, so anything from voice or facial scores to ID numbers needs a legal footing before being fed into models (see Rödl & Partner's analysis and Nemko's regulatory brief).
Data localisation and tight transfer rules mean government projects must plan where datasets live and how third‑party processors are approved; security incidents must be reported to the authorised state body within one business day, and enforcement can include administrative fines and even criminal liability for serious breaches.
That combination - consent, localisation, fast breach notification and officer accountability - turns privacy from a checkbox into an operational design constraint for every AI pilot and production system in Kazakhstan (see DLA Piper's Kazakhstan data protection guide for implementation details).
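Because the one‑business‑day breach window leaves no slack, it helps to compute the notification deadline mechanically; the sketch below is a simplified illustration that assumes a Monday–Friday working week and ignores Kazakhstan's public holidays, which a production version would need to account for:

```python
from datetime import datetime, timedelta

def breach_notification_deadline(detected_at: datetime) -> datetime:
    """Return the end of the next business day after detection.

    Simplified sketch: business days are Mon-Fri and public holidays
    are ignored; a production version should use an official holiday calendar.
    """
    deadline = detected_at + timedelta(days=1)
    while deadline.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        deadline += timedelta(days=1)
    # Notify the authorised state body by the end of that business day
    return deadline.replace(hour=23, minute=59, second=59, microsecond=0)

# A breach detected on a Friday evening must be reported by end of Monday
print(breach_notification_deadline(datetime(2025, 6, 13, 18, 30)))
```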
Rule | Detail (source) |
---|---|
Main law | Law No. 94‑V, On Personal Data and Its Protection (2013) - DLA Piper / Technology Law Dispatch |
Regulatory authority | Ministry of Digital Development (policy, procedures, security rules) - DLA Piper |
Consent | Written/recorded consent required for collection and processing; detailed consent content required - DLA Piper |
Biometric data | Classified as personal data; explicit consent mandatory for AI use - Rödl & Nemko |
Data localisation & transfers | Personal data of Kazakh nationals generally stored in Kazakhstan; cross‑border transfers allowed only with safeguards - Ius Laboris / DLA Piper |
Breach notification | Notify authorised state body within one business day of detection - DLA Piper |
Sanctions | Administrative fines (MCIs) and potential criminal liability for serious violations - Technology Law Dispatch / Ius Laboris |
Intellectual property, liability and evidentiary standards in Kazakhstan
Intellectual property, liability and evidentiary standards in Kazakhstan hinge on a clear legal intuition: only humans are authors, so AI-produced works sit in a legal grey zone until the new law clarifies ownership and inventorship - a position reflected in the Rödl & Partner Kazakhstan AI regulation analysis and echoed in the Nemko Kazakhstan AI regulatory brief, which both note that AI cannot be listed as a co‑author under current rules.
At the same time the country has no fully settled AI liability regime: civil claims will turn on causation, fault and the chain of control (developer, operator, user), while the draft AI law and related Digital Code debates propose tiered responsibility, restrictions on fully autonomous systems and even criminal sanctions for large‑scale misuse as discussed in the Euractiv coverage of Kazakhstan's draft AI law.
Practically this means procurement teams, system designers and courts must maintain forensic logs, document training datasets and name human decision‑makers - a single AI‑generated draft in a civil case can force a judge to explain every divergence from the algorithm's reasoning, narrowing room for error or corruption and making transparency an operational necessity.
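To make “forensic logs and named human decision‑makers” concrete, here is a minimal sketch of a tamper‑evident decision log; hash‑chaining entries is one common engineering technique, not something the draft law prescribes, and the field names and example values are purely illustrative:

```python
import hashlib
import json
from datetime import datetime, timezone

class DecisionLog:
    """Append-only log of AI-assisted decisions; each entry is hash-chained
    to the previous one so any later edit breaks the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, case_id: str, model_version: str,
               training_dataset: str, human_overseer: str, outcome: str):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "case_id": case_id,
            "model_version": model_version,
            "training_dataset": training_dataset,
            "human_overseer": human_overseer,   # named decision-maker
            "outcome": outcome,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain and confirm no entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if expected != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record("case-2024-0142", "draft-assistant-v1.3",
           "civil_rulings_2019_2023", "Judge A. Example", "draft accepted with edits")
print(log.verify())  # True while the log is untouched
```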
Issue | Short summary (sources) |
---|---|
Intellectual property | Author = person; AI not recognized as author; human authorship tied to IP protection (Rödl & Partner Kazakhstan AI regulation analysis, Nemko Kazakhstan AI regulatory brief) |
Liability | No bespoke regime yet; draft law seeks clearer fault/causation rules, operator responsibility, bans on full autonomy (Rödl & Partner Kazakhstan AI regulation analysis, Euractiv coverage of Kazakhstan's draft AI law) |
Evidentiary standards | Need for data provenance, audit logs and named human overseers; courts already receiving AI drafts in civil cases (Astana Times) |
“AI is a reality. Naturally, questions arise not only about responsibility for potential harm caused by AI, but also about understanding the phenomenon from a legal standpoint. Ensuring human rights in the use of AI is especially important,”
National AI infrastructure & models available to Kazakhstan government
Kazakhstan's national AI backbone is moving from concept to kit‑bag: a government National AI Platform and the QazTech sovereign environment now give agencies centralized access to datasets, computing resources and ready‑made models, so teams can prototype faster without buying their own GPU farms, and the country already hosts a Kazakh‑language model trained on 148 billion tokens and open to developers - concrete building blocks that cut time‑to‑impact for public services. The state has also launched the alem.cloud supercomputer cluster and brought QazTech into industrial operation, and nearly 2,000 civil servants have received AI training to operate these tools, so the technical muscle is arriving alongside human capacity.
That said, national rollout carries governance tradeoffs: fair rules for supercomputer allocation, tighter cybersecurity and clear data‑localization on the sovereign platform are now central planning items as Kazakhstan scales pilots into production (see the Ministry's roadmap and reporting in Astana Times and the Prime Minister's office on the digital headquarters and QazTech).
For KZ public teams, the upshot is practical - shared compute and a local LLM remove a major barrier, but they also raise the bar for procurement, audit logs and cross‑agency coordination if benefits are to be sustained.
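The National AI Platform's real interfaces are defined by the platform operators, so the following is only a shape sketch with a hypothetical endpoint and a stub transport: it illustrates how an agency gateway might log dataset provenance and an accountable officer for every call to shared models before any request is sent:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm-gateway")

# Hypothetical endpoint name; the real National AI Platform / QazTech
# interfaces will be defined by the platform operators.
PLATFORM_ENDPOINT = "https://example-platform.invalid/v1/generate"

def call_shared_llm(prompt: str, dataset_provenance: str,
                    requesting_officer: str, transport=None) -> str:
    """Send a prompt to the shared Kazakh-language model via a logged gateway.

    `transport` is injected so agencies can plug in the platform's real client;
    the default below is a stand-in that simply echoes the prompt.
    """
    request = {
        "endpoint": PLATFORM_ENDPOINT,
        "prompt": prompt,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_provenance": dataset_provenance,   # where grounding data came from
        "requesting_officer": requesting_officer,   # accountable human
    }
    # Audit trail: record who asked what, against which data, before any call
    logger.info("llm_request %s", json.dumps(request, ensure_ascii=False))

    if transport is None:
        transport = lambda req: f"[stub response to: {req['prompt'][:40]}]"
    response = transport(request)

    logger.info("llm_response officer=%s chars=%d", requesting_officer, len(response))
    return response

print(call_shared_llm(
    prompt="Summarise the citizen's eGov service request in Kazakh.",
    dataset_provenance="egov_requests_sample_2025 (synthetic test data)",
    requesting_officer="duty_officer_01",
))
```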
Item | Detail |
---|---|
National AI Platform | Centralized access to data, compute resources and pre‑trained models (government initiative) |
Kazakh‑language model | Launched Dec 2024; trained on 148 billion tokens (publicly available) |
Civil‑service training | Nearly 2,000 civil servants and quasi‑public employees trained |
Supercomputer | alem.cloud cluster launched (first regional supercomputer availability) |
QazTech | Sovereign platform in industrial operation (July 2025); moratorium on new systems outside QazTech from Jan 2026 |
“The next direction is the National AI Platform. This platform has been put into operation, providing access to data, computing resources, and ...”
GovTech modernization: digital services and AI pilots in Kazakhstan
Kazakhstan's GovTech modernization is shifting from ambitious roadmaps to everyday wins: the Digital Government Support Center's Demo Day showcased reengineered workflows that cut paperwork and processing time across sectors, while the national eGov platform served more than 26 million transactions in H1 2025 with roughly 45% handled on smartphones - proof that citizens want mobile‑first, AI‑augmented services rather than paper queues (see the DGSC Demo Day coverage in The Astana Times and H1 usage figures in BiometricUpdate).
Practical pilots already reduce real pain for users - a mobile accident reporting app slashed insurance payouts from 40 to 5 days - and central initiatives (a new Digital Headquarters, the national AI supercluster and open access to Kazakh LLMs) are lowering the technical bar for agencies to run pilots and scale them responsibly.
That combination of platform thinking, biometric and push‑notification upgrades in eGov Mobile, and plans to roll out dozens of intelligent virtual assistants by year‑end means procurement, cybersecurity and user‑experience teams must sync up: design services around clear user journeys, bake in privacy and audit logs, and measure impact in saved days and tenge so every pilot becomes a replicable public good (read more on Kazakhstan's broader digital push in Global CIO).
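Measuring impact “in saved days and tenge” can start as a shared formula applied to every pilot; in the sketch below only the 40 → 5 day accident‑app timeline comes from the reporting above, while case volumes and per‑day costs are placeholder figures:

```python
def pilot_impact(baseline_days: float, new_days: float,
                 cases_per_year: int, cost_per_case_day_tenge: float) -> dict:
    """Estimate annual impact of a digitised service in saved days and tenge."""
    days_saved_per_case = baseline_days - new_days
    total_days_saved = days_saved_per_case * cases_per_year
    tenge_saved = total_days_saved * cost_per_case_day_tenge
    return {
        "days_saved_per_case": days_saved_per_case,
        "total_days_saved_per_year": total_days_saved,
        "estimated_tenge_saved_per_year": tenge_saved,
    }

# Accident-app timeline (40 -> 5 days) is from the reporting above;
# case volume and per-day cost are illustrative placeholders.
print(pilot_impact(baseline_days=40, new_days=5,
                   cases_per_year=10_000, cost_per_case_day_tenge=1_500))
```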
Metric | Value / Source |
---|---|
eGov transactions (H1 2025) | 26 million+ (BiometricUpdate) |
Share via mobile | ~45% / ~12 million (BiometricUpdate) |
eGov registered users | 14.8 million+ (BiometricUpdate) |
DGSC business processes reengineered | 1,340; >20 digital systems developed (Astana Times) |
Insurance payout time (mobile app) | 40 → 5 days (Astana Times) |
“Our projects bring real reductions in timelines, eliminate unnecessary procedures, and create convenient services.”
Sectoral AI deployments and case studies in Kazakhstan (courts, finance, telecom, agri, logistics)
Real-world pilots show how AI is changing public services across sectors in Kazakhstan: in courts, an Almaty resident used ChatGPT to prepare filings and even get real‑time help during trial, resulting in a judge's decision in about ten minutes (AI-assisted court victory in Almaty using ChatGPT); in enforcement, the Taraz “Robot” pilot now automatically starts enforcement proceedings - a hands‑off approach that the project owners say will spare citizens commission fees and save roughly 2 billion tenge as it scales nationwide (Taraz "Robot" enforcement pilot launches automated enforcement); and across finance, telecom, agriculture and logistics the pattern repeats: automation and smarter prompts drive speed and measurable savings, contributing to public‑sector efficiencies such as the 51.3B tenge in measurable savings reported by Nucamp.
The lesson for government teams is practical: design pilots around mobile‑first conversational flows that reach millions, keep airtight audit trails and explainability for courts and oversight, and treat small wins (a minutes‑fast ruling or a billion‑tenge saving) as the proof points that justify broader rollout.
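Mobile‑first conversational flows usually begin as a small, auditable state machine before any language model is attached; the sketch below invents states and prompts for an accident‑report dialogue purely for illustration:

```python
# Minimal dialogue state machine: each state maps a user reply to the next
# state, and every transition is appended to a trail that auditors can review.
FLOW = {
    "start":    {"prompt": "Report a minor traffic accident? (yes/no)",
                 "next": {"yes": "location", "no": "done"}},
    "location": {"prompt": "Share the accident location.",
                 "next": {"*": "photos"}},
    "photos":   {"prompt": "Upload photos of the vehicles.",
                 "next": {"*": "confirm"}},
    "confirm":  {"prompt": "Submit the claim to your insurer? (yes/no)",
                 "next": {"yes": "done", "no": "start"}},
    "done":     {"prompt": "Thank you, your report is registered.",
                 "next": {}},
}

def step(state: str, user_reply: str, trail: list) -> str:
    """Advance the flow one turn and record the transition for the audit trail."""
    options = FLOW[state]["next"]
    nxt = options.get(user_reply.lower(), options.get("*", state))
    trail.append({"state": state, "reply": user_reply, "next": nxt})
    return nxt

trail = []
state = "start"
for reply in ["yes", "Abay Ave 10, Almaty", "photo_001.jpg", "yes"]:
    print(FLOW[state]["prompt"])
    state = step(state, reply, trail)
print(FLOW[state]["prompt"])
print(f"{len(trail)} transitions recorded for the audit trail")
```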
Sector / Case | Detail |
---|---|
Courts | Almaty AI‑assisted case (Dec 2023) - ChatGPT helped prepare filings; decision issued in ~10 minutes (Zamin: AI-assisted court victory in Almaty) |
Enforcement | Taraz “Robot” pilot (launched Jan; reported Jul 2024) - auto‑starts enforcement, ~2 billion tenge savings; planned nationwide scale (TimesCA: Taraz "Robot" enforcement pilot) |
Government savings | Reported 51.3B tenge in measurable savings from AI in government companies (Nucamp Complete Software Engineering Bootcamp Path syllabus) |
Service design | eGov Mobile prompts and conversational flows essential to reach ~14.7M mobile users (Nucamp Front End Web + Mobile Development syllabus) |
“In January, the city of Taraz launched a pilot project, ‘Robot', which automatically starts enforcement proceedings without the participation of a bailiff. This will free citizens from paying commissions and save about 2 billion tenge. In August the project is planned to be scaled nationwide.”
Cybersecurity, trusted software and fraud mitigation for Kazakhstan public bodies
Cybersecurity is the unglamorous guardian of Kazakhstan's AI ambitions: a June leak that circulated a database with personal records for nearly 16 million citizens made the risk painfully concrete and coincided with a spike in attacks - about 30,000 information‑security incidents were recorded in early 2025 and botnet activity alone jumped to roughly 17,600 - so public bodies can't treat protection as an IT add‑on (read the Astana Times investigation on the leak and the Times of Central Asia coverage of 2025 incident trends).
Policy steps - higher administrative fines, mandatory breach notifications, legalisation of white‑hat participation and promises of forensic audits - are welcome, but operational controls win most battles: enforce real‑time access logging and audit trails for every dataset, mandate independent penetration tests and supply‑chain code audits for procured AI systems, connect agencies to sectoral cybersecurity centres for threat sharing, and run coordinated bug‑bounty programs alongside user education (NomadGuard checks and the CitizenSec training can reduce social‑engineering exposure).
Finally, treat AI as both a defensive tool and an emerging attack vector - invest in model‑aware monitoring, strict provenance for training data, and clear incident playbooks so a single breach doesn't undo months of service gains (see Arctic Wolf's 2025 trends for how AI reshapes security priorities).
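As one way to turn real‑time access logging into fraud mitigation, the sketch below flags accounts that read an unusually large number of records in an hour; the threshold and log fields are illustrative rather than drawn from any Kazakh standard:

```python
from collections import Counter

def flag_bulk_readers(access_log: list, max_reads_per_hour: int = 500) -> list:
    """Return accounts whose hourly read volume exceeds a threshold.

    `access_log` entries are dicts like
    {"account": ..., "hour": "2025-06-01T14", "action": "read"}.
    The threshold is a placeholder; real baselines should come from
    each system's observed traffic.
    """
    reads = Counter(
        (e["account"], e["hour"]) for e in access_log if e["action"] == "read"
    )
    return [
        {"account": acct, "hour": hour, "reads": count}
        for (acct, hour), count in reads.items()
        if count > max_reads_per_hour
    ]

# Synthetic example: one service account suddenly reads 100x its usual volume
log = (
    [{"account": "clerk_17", "hour": "2025-06-01T14", "action": "read"}] * 40
    + [{"account": "svc_export", "hour": "2025-06-01T14", "action": "read"}] * 4000
)
print(flag_bulk_readers(log))
```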
Metric | Value |
---|---|
Leaked personal records | ≈16 million (Astana Times) |
Information‑security incidents (Jan–May 2025) | ~30,000 (TimesCA) |
Botnet‑related activity (early 2025) | ~17,600 (TimesCA) |
Public awareness of cybersecurity (2024) | 80.4% (Astana Times) |
“Any system that processes personal data will ‘always be of interest to cybercriminals,'”
Talent, education, visas and the AI ecosystem in Kazakhstan
Kazakhstan's AI ambitions are being matched by an aggressive talent push that mixes free mass training, targeted up‑skilling for civil servants and youth‑facing creative hubs: the Ministry and Astana Hub now offer wide‑reach, no‑cost courses (see the Astana Times coverage of free AI training) that include Tech Orda vouchers for hundreds of subsidised IT courses, a peer‑driven Tomorrow School, and the new TUMO centre at Alem.ai that plans to train 5,000 students a year in generative AI, game dev and robotics.
Tech Orda alone funds dozens of accredited courses (program counts and grants detailed on the Tech Orda page), with six‑month tracks in programming, big data and cybersecurity and an outstanding 88% placement rate among graduates; parallel initiatives - IT‑Áiel and Tomorrow School - are closing gender and regional gaps (Tomorrow School reports 30% women and free housing for regional students).
Public‑sector readiness is rising too: AI Qyzmet has trained over 16,000 civil servants and aims to scale further, while nationwide hackathons and startup accelerators (Decentrathon, AI Preneurs) keep pipelines active.
The practical risk is capacity: independent analysts warn a shortage of experienced ML and infra specialists could slow deployment unless training converts quickly into paid roles and retention - so the human capital agenda is as strategic as any supercomputer in the national AI stack.
Program / Metric | Value |
---|---|
Tech Orda courses / schools | 159 courses; 79 schools (AstanaHub) |
Tech Orda placement rate | 88% job placement among graduates (Astana Times) |
TUMO centre capacity | 5,000 students annually (Astana Times) |
AI Qyzmet civil‑servant training | 16,000+ trained; target ~30,000/year (Astana Times / Caspian Post) |
Decentrathon participation | ~2,500 participants (Astana Hub) |
“I have already spoken about accelerating the creation of a unified national digital ecosystem,”
Conclusion & practical checklist for deploying AI in Kazakhstan government (2025)
Wrap up any Kazakhstan AI project in 2025 by treating regulation, data and people as the deployment pillars:
- Map each service to the draft law's risk tiers and the Mazhilis first‑reading framework (approved 14 May 2025) so high‑risk systems get audit‑grade controls.
- Document data flows, secure explicit consent for biometric and personal data, and plan for local storage and authorised cross‑border transfers.
- Lean on the National AI Platform and QazTech for compute and the local Kazakh LLM (trained on 148 billion tokens), but require provenance, forensic logs and named human overseers in procurement.
- Mandate independent security testing and model‑aware monitoring before going live.
- Measure impact in saved days and tenge, and budget for compliance‑grade engineering and legal review rather than assuming policy will follow technology.
Practical entry points: run a scoped pilot on the National AI Platform, produce an explainability dossier for judges and auditors, and train operation teams now - nearly 2,000 civil servants have already completed AI training - so human oversight scales with compute.
For a quick staff up‑skilling route, consider role‑focused courses such as the AI Essentials for Work bootcamp to convert policy into reliable workflows, and keep the Nemko regulatory brief and Astana Times coverage close at hand when you align governance, ethics and procurement.
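The checklist above can also live next to the code as a release gate; in the sketch below each item must be explicitly signed off before a system leaves pilot, and the item names simply paraphrase the table that follows:

```python
READINESS_CHECKLIST = {
    "risk_tier_classified": False,
    "data_and_consent_documented": False,
    "provenance_and_audit_logs_in_place": False,
    "human_overseers_named_and_trained": False,
    "independent_security_test_passed": False,
    "legal_review_completed": False,
}

def ready_for_production(checklist: dict) -> bool:
    """A system leaves pilot only when every checklist item is signed off."""
    missing = [item for item, done in checklist.items() if not done]
    if missing:
        print("Blocked - outstanding items:", ", ".join(missing))
        return False
    print("All checks signed off - cleared for rollout.")
    return True

# Example: security test and legal review are still outstanding
status = dict(READINESS_CHECKLIST,
              risk_tier_classified=True,
              data_and_consent_documented=True,
              provenance_and_audit_logs_in_place=True,
              human_overseers_named_and_trained=True)
ready_for_production(status)
```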
Checklist item | Why it matters / source |
---|---|
Classify by risk tier | Draft 2025 law requires stricter oversight for high‑risk public systems (Nemko / Astana Times) |
Document data & consent | Personal and biometric data need explicit consent and careful handling (Nemko) |
Use national infra with controls | National AI Platform, QazTech and Kazakh LLM available but require provenance & audits (Nemko) |
Train and name human overseers | Operational readiness: ~2,000 civil servants trained; human oversight emphasized in draft law (Nemko) |
Independent security & audits | Pen tests, model monitoring and legal review protect citizens and project continuity (Astana Times) |
“The bill reflects major global trends in AI regulation. Many countries have adopted systematic approaches to AI governance. The EU's AI Act serves as a model.”
Frequently Asked Questions
What is Kazakhstan's 2025 AI push in government and what real-world scale and impact has been achieved so far?
Kazakhstan's 2025 programme is a coordinated effort to embed AI across public services to speed delivery, cut costs and improve convenience. Key 2025 metrics: the national eGov mobile channel handled more than 26 million services in H1 2025 (≈45% via smartphones), there are over 14.8 million registered eGov users, the Digital Government Support Center has reengineered about 1,340 business processes since 2021, and pilots have shown tangible wins (for example, a mobile minor‑accident app reduced insurance payout time from 40 to 5 days).
What is the regulatory environment for AI in Kazakhstan and what must government teams do to comply?
Regulatory momentum in 2025 centers on the Draft Law “On Artificial Intelligence” (first reading approved by the Mazhilis on 14 May 2025). The draft is a compact, 28‑article, risk‑based framework that emphasizes human‑centred principles, tiered oversight for high‑risk public systems, transparency/explainability obligations and sharper liability for large‑scale or automated personal data processing. Government teams must: map each AI service to the draft's risk tiers, prepare audit‑grade documentation (datasets, decision pathways and explainability dossiers), budget for compliance‑grade engineering and legal review before wide rollout, and build governance/training in parallel so safeguards are operational rather than afterthoughts.
How do Kazakhstan's data protection and privacy rules affect AI projects?
Data protection remains central. Existing Law No. 94‑V (On Personal Data and Its Protection) requires written/recorded consent for personal data, purpose limitation, storage rules and appointment of responsible officers. Biometric data is explicitly treated as personal data and needs explicit consent before AI use. Data localisation and restricted cross‑border transfers mean projects must plan where data is stored and how processors are approved. Security incidents must be reported to the authorised state body within one business day, and enforcement can include administrative fines and, for serious breaches, criminal liability. Practically, consent, provenance records and storage planning are operational constraints for any AI system.
What national infrastructure, models and training are available to government agencies?
Kazakhstan has launched shared national AI assets to lower barriers: a National AI Platform and the QazTech sovereign environment (QazTech in industrial operation as of July 2025), the alem.cloud supercomputer cluster, and a Kazakh‑language LLM (launched Dec 2024) trained on about 148 billion tokens and available to developers. The state is also training staff - nearly 2,000 civil servants were trained to operate these tools and other programmes (e.g., AI Qyzmet) have trained many more. Agencies can run scoped pilots on these resources but must require dataset provenance, audit logs, named human overseers in procurement and clear allocation and security rules for shared compute.
What cybersecurity and operational safeguards should be in place before scaling AI in the public sector?
Cyber risk is material: a June leak exposed roughly 16 million personal records and early‑2025 saw ~30,000 information‑security incidents and elevated botnet activity. Before scaling, agencies should enforce real‑time access logging and immutable audit trails, mandate independent penetration tests and supply‑chain code audits, implement model‑aware monitoring and strict data provenance, run coordinated bug‑bounty/white‑hat programmes, connect to sectoral threat‑sharing centres, and publish incident playbooks with one‑business‑day breach notification procedures. Operationalising these controls plus naming human overseers, independent audits and legal review helps protect citizens and preserves service gains.
You may be interested in the following topics as well:
Don't miss the national impact - 51.3B tenge in measurable savings shows how AI adds up for the public balance sheet.
Find out how Alem.cloud deployment prompts and monitoring checklists help ministries run scalable, compliant AI services with rollback plans.
Policy measures such as policy actions to audit state IT and protect data sovereignty are critical to ensure AI rollouts are safe and that local workers are included in the transition.
Ludo Fourrage
Founder and CEO
Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft where he led innovation in the learning space. As the Senior Director of Digital Learning at this same company, Ludo led the development of the first of its kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful Corporate MOOC programs in partnership with top business schools and consulting organizations, e.g. INSEAD, Wharton, London Business School, and Accenture, to name a few. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.