Top 5 Jobs in Government That Are Most at Risk from AI in Belgium - And How to Adapt
Last Updated: September 5, 2025

Too Long; Didn't Read:
AI threatens Belgian government jobs - administrative clerks, HR officers, tax assessors, inspectors and contact‑centre agents. Studies predict that 65% of Belgian jobs will be greatly impacted, 70.9% of Belgians have used AI, and 74% fear job loss; generative AI could add €50 billion to the economy, so role‑focused reskilling and governance are essential.
Belgian public servants should pay attention because multiple local studies show AI is already reshaping work. ING estimates that 65% of Belgian jobs will be greatly impacted by AI; EY reports that 70.9% of Belgians have used AI, that 74% fear AI will reduce jobs, and that many also say training is inadequate; and PwC finds that 40% of workers don't interact with AI at all - a dangerous skills gap for public services, where trust and ethics matter.
At the same time, generative AI could add up to €50 billion to Belgium's economy and augment roughly 64% of jobs, so the choice is not between alarm and denial but between reskilling and missed opportunity; practical upskilling (for example through programs like the AI Essentials for Work bootcamp) and clear workplace policies can turn automation into better citizen services and less repetitive paperwork (see the EY Belgium survey and the €50 billion Gen AI opportunity for Belgium).
| Bootcamp | AI Essentials for Work - key details |
|---|---|
| Length | 15 Weeks |
| Courses included | AI at Work: Foundations; Writing AI Prompts; Job Based Practical AI Skills |
| Cost | $3,582 early bird; $3,942 regular - paid in 18 monthly payments (first due at registration) |
| Syllabus / Register | AI Essentials for Work syllabus (Nucamp); AI Essentials for Work registration (Nucamp) |
“Although AI is becoming more prevalent in workplaces, a significant portion of our workforce has yet to embrace these technologies. Implementing AI tools and fostering an AI-driven culture are essential steps to harness the full potential of AI. Meanwhile, as the technology evolves rapidly, the disparity between proficient AI users and non-users continues to widen.” - Xavier Verhaeghe, PwC Belgium
Table of Contents
- Methodology: How we ranked roles and gathered Belgian evidence
- Administrative Clerks (Permits, Registrations & Back-Office Processing)
- HR Officers (Recruitment, Workforce Management & Employee Relations)
- Tax Assessors & Social Benefits Caseworkers
- Inspectors (Compliance & Permit Officers for Routine Inspections)
- Contact-Centre Agents (Municipal Customer Service & Citizen Support)
- Conclusion: Cross-cutting steps and a practical checklist for Belgian public employers
- Frequently Asked Questions
Methodology: How we ranked roles and gathered Belgian evidence
(Up)This ranking blended task-level vulnerability (how routine, rule‑based and high‑volume a role is) with Belgian evidence on adoption, skills and sector impact: roles were scored for automation risk, contact‑centre volume, and how easily tasks can be reskilled into higher‑value work, then cross‑checked against country data from the EY European AI Barometer 2025 report and EY's Belgium workforce snapshot.
That data shows the public sector lags behind industry on measurable gains (government/public sector reports ~35% positive financial impact) even while Belgium overall leads some regional measures (about 60% report positive effects and ~52% note productivity increases).
Worker sentiment and readiness came from Belgium‑specific research: 70.9% of respondents have used AI, but only 12% say it already changes their daily work, and 74% fear job loss - a gap reinforced by training shortfalls and by the vivid finding that three in four young consultants now use ChatGPT to draft application letters.
Finally, practical signals from EY's Mobility Reimagined work (GenAI plus targeted upskilling) guided the “how to adapt” axis: which roles can be practically retrained, and where employers must invest in governance, measurement and live training to close the perception gap.
“Consumer concerns about AI responsibility impact brand trust, placing CEOs at the forefront of these discussions. Executives must address these issues by developing responsible strategies to mitigate AI risks and being transparent about their organization's use and protection of AI.” - Raj Sharma, EY Global Managing Partner - Growth and Innovation
Administrative Clerks (Permits, Registrations & Back-Office Processing)
(Up)Administrative clerks who process permits, registrations and back‑office forms sit squarely in AI's crosshairs because their tasks are routine, rule‑based and data‑dense - exactly the patterns where document summarisation, intelligent routing and virtual agents can quickly shave hours from case queues and improve consistency (see EY's analysis of how copilot‑style tools speed document work).
In Belgium this shift comes with a governance caveat: beyond the EU AI Act, there's no standalone national AI statute, but federal advisory committees and Belgian Data Protection Authority guidelines are already shaping how public services must manage profiling, automated decisions and enforcement - see the Belgium AI regulatory tracker and regulatory guidance.
That combination of clear productivity upside and non‑trivial regulatory, continuity and people risks means employers should adopt controlled pilots, invest in data‑safe deployments and fast, role‑focused reskilling - examples include personalised reskilling plans for administrative clerks developed with local partners to move clerks into supervision, exception‑handling and audit roles.
Without deliberate governance and training, simple efficiency gains can turn into public trust problems rather than better citizen service.
“This study proposes concrete fixes to serious problems. The AI Office, up and running as of this week, must ensure that AI Act implementation respects its intended risk‑based approach – including guidelines, secondary legislation, standards, and codes of practice.” - Boniface de Champris, CCIA Europe
HR Officers (Recruitment, Workforce Management & Employee Relations)
(Up)HR officers in Belgian public services face a familiar trade‑off: AI can speed vacancy screening, candidate matching and routine workforce analytics, but it also brings bias, privacy and collective‑consultation risks that demand practical controls.
Belgian employers must treat recruitment and people‑management systems as high‑risk under the EU AI Act and respect national safeguards - including works‑council consultation under Belgium's collective bargaining frameworks - while following GDPR and data‑minimisation rules; Osborne Clarke's guidance on AI and employment law in Belgium is a useful primer on these obligations.
Real gains are possible - recruitment specialists highlight faster shortlists and better candidate matching - but unchecked models can reject qualified applicants for irrelevant cues (think an AI penalising a candidate because of a picture on their bookshelf), so regular audits, human final decisions, diverse training data and clear AI policies are non‑negotiable (see guidance on AI bias in hiring: mitigation strategies and GDPR compliance).
Belgian HR should also pilot tools with union partners, combine AI with recruiter judgement, and invest in upskilling so automation frees time for higher‑value employee relations work rather than eroding trust (see how AI could shape Belgium's recruitment market).
| Category | Total | % of total | Successful | % of successful |
|---|---|---|---|---|
| Gender - female | 320 | 40% | 4 | 20% |
| Ethnicity - Black / Black British / Caribbean / African | 80 | 10% | 3 | 15% |
| Age - over 50 | 1 | <1% | 0 | 0% |
“When dealing with thousands of applications, for example assembly line workers in factories and delivery drivers, AI is definitely more preferable there because usually the job description is very standard and they just check whether applicants meet certain standards and then hire them,” she explained. - Nikki Sun, Oxford Martin AI Governance Initiative (quoted in Re:locate)
Tax Assessors & Social Benefits Caseworkers
(Up)Tax assessors and social‑benefits caseworkers in Belgium are already being nudged away from line‑by‑line form checking and toward exception‑handling, audit work and rights‑sensitive oversight as back‑office automation, risk scoring and profiling scale up: the Belgian Ministry of Finance long ago deployed IRIS' document capture and workflow system that once processed up to a million pages a day across scanning centres in Ghent and Namur, moving routine VAT and personal‑tax form extraction into automated pipelines (IRIS automated tax form processing case study for the Belgian Ministry of Finance).
At the same time, law and policy sharpen the stakes - legislation since 2022 (and DAC7 joint‑audit rules) expanded audit windows and cross‑border cooperation, so automated flags can trigger investigations with longer limitation periods (BDO analysis: Belgium investigative powers expanded under 2022 tax legislation).
That combination creates a clear “so what”: faster selection means fewer routine tasks but higher consequences when models err - profiling and wholly automated decisions thus collide with GDPR safeguards and Article 22 risks, especially where social‑benefits outcomes affect people's finances or access to services.
Practical response: preserve meaningful human‑in‑the‑loop checks, publish governance and appeal routes, and pair automation pilots with clear data‑protection safeguards and regular accuracy audits (Policy Review analysis on tax compliance, profiling and GDPR automated decision‑making).
Inspectors (Compliance & Permit Officers for Routine Inspections)
(Up)Inspectors and permit officers who carry out routine compliance visits are being nudged toward higher‑value oversight as drones, AI image recognition and agentic systems start to shoulder the repetitive work: Belgium's Elia BVLOS drone project for power line inspections showed autonomous craft (one test drone weighed 87 kg and flew 120 km in under two hours) can replace costly helicopter surveys, spot rust, thermographic hotspots and vegetation risks, and cut inspection costs while letting experts focus on complex decisions.
At the municipal and building level, AI agents now automate compliance reporting and tenant‑facing workflows for syndici and VMEs, reducing admin churn for permit‑related dossiers, as explored in AI agents transforming Belgium property management operations.
That efficiency gain is real, but so are the governance and accuracy stakes: more frequent, automated flags increase the consequences of false positives for citizens and businesses, so human‑in‑the‑loop checks, documented model performance and alignment with Belgium's risk‑based AI rules are essential - consistent with the national approach set out in the Belgium AI strategy and EU AI Act readiness guidance.
The practical win is clear: free inspectors from ladders and logbooks, but keep them as the accountable, rights‑sensitive arbiters when automation touches permits, safety or livelihoods.
Contact-Centre Agents (Municipal Customer Service & Citizen Support)
(Up)Municipal contact‑centre agents are frontline testbeds for AI in Belgian public services: tools can speed triage, draft replies and surface case history so agents spend less time on repetitive requests and more on complex, empathetic resolution, but the payoff depends on sound governance and trusted vendors.
Industry analyses show the technology can boost CX while often under‑delivering on promised ROI - see TelXL's look at “The Disruptive Power of AI in Contact Centres”.
Belgian employee research warns that public‑sector staff are among the most sceptical and under‑trained - only 12% say AI already shapes their daily work and over 80% report insufficient company training (see EY Belgium's employee snapshot).
Legal and ethical guardrails matter in Belgium: transparency duties under the EU AI Act mean citizens must know when they're dealing with a bot, GPAI rules add disclosure duties, and experts stress vendors must not repurpose customer data to train models - practical non‑negotiables highlighted in Enghouse's guidance on ethics and privacy in contact centres.
Practical steps for municipalities are straightforward: select partners who lock down training data and provide explainable outputs, keep human‑in‑the‑loop checks for sensitive cases, and pair small pilots with live upskilling so AI reduces agent churn without turning a simple complaint into a public trust problem.
Conclusion: Cross-cutting steps and a practical checklist for Belgian public employers
(Up)Belgian public employers can turn risk into leverage with a short, practical checklist: start by embedding clear governance (designate competent authorities and align with the EU AI Act while following the Federal Government's data‑and‑AI roadmap), pilot low‑risk generative AI use cases to capture early wins and the EUR 4 billion potential for eGovernment transformation, and pair every deployment with human‑in‑the‑loop checks, explainability audits and GDPR‑compliant appeals processes so automated flags don't become costly false positives.
Invest in scalable, sovereign infrastructure and a “cloud‑first” data strategy, negotiate vendor commitments that forbid reuse of citizen data for model training, and formalise social‑partner engagement and transparency obligations before rolling out recruitment or benefits tools.
Finally, make reskilling non‑optional: fast, role‑focused training and modular courses will let clerks, inspectors and contact‑centre agents move from routine processing to oversight and complex casework - practical steps echoed in both the Implement Consulting Group's analysis of Belgium's eGovernment opportunity and the new federal coalition's AI priorities.
| Action | Practical note | Source |
|---|---|---|
| Governance & oversight | Designate authorities; publish AI governance & appeal routes | EU AI Act national implementation plans |
| Pilot low‑risk use cases | Prove value quickly and scale responsibly | Implement Consulting Group - The AI opportunity for eGovernment in Belgium |
| Skills & reskilling | Role‑based, short courses to shift staff into oversight roles | AI Essentials for Work syllabus - Nucamp |
For more on the national policy context and practical recommendations see the Implement Consulting Group report: The AI opportunity for eGovernment in Belgium and the Belgium Federal Government Agreement 2025–2029.
Frequently Asked Questions
(Up)Which government jobs in Belgium are most at risk from AI?
The article identifies five high‑risk roles: 1) Administrative clerks (permits, registrations and back‑office processing) - highly routine, data‑dense tasks susceptible to document capture, summarisation and routing; 2) HR officers (recruitment, workforce management and employee relations) - screening and matching can be automated but raise bias and privacy risks; 3) Tax assessors & social benefits caseworkers - form extraction and risk scoring move routine checks to automation, increasing the need for exception handling; 4) Inspectors (compliance and permit officers) - drones, image recognition and agents can replace repetitive inspection work; 5) Contact‑centre agents (municipal customer service) - AI can triage and draft replies, reducing repetitive workloads but requiring human oversight for sensitive cases.
How large is AI's impact in Belgium and what evidence supports these findings?
Belgian studies show both risk and opportunity: ING estimates 65% of Belgian jobs will be greatly impacted by AI; EY reports 70.9% of Belgians have used AI, 74% fear job loss, but only 12% say AI already changes daily work and over 80% report insufficient training; PwC finds about 40% of workers don't interact with AI. At the same time, generative AI could add up to €50 billion to Belgium's economy and augment roughly 64% of jobs. The article's ranking blended task‑level vulnerability (routine, rule‑based, volume) with Belgian adoption, skills and sector impact data and cross‑checked against national surveys and industry deployments.
What legal, ethical and governance safeguards must Belgian public services follow when deploying AI?
Deployments must align with the EU AI Act, GDPR and Belgian guidance from the Data Protection Authority and federal advisory committees. Recruitment and people‑management systems are likely high‑risk and require works‑council consultation under Belgian collective frameworks. Key safeguards include human‑in‑the‑loop decisions, transparency (citizens must know when they interact with a bot), explainability audits, documented model performance, data‑minimisation, published governance and appeals routes, and vendor commitments that forbid reusing citizen data for model training.
How can public servants adapt and reskill to stay relevant, and what practical training options exist?
Adaptation should combine governance with fast, role‑focused reskilling so staff move from routine processing to oversight, exception handling and audit roles. Practical steps: run controlled, low‑risk pilots; embed human‑in‑the‑loop checks; perform explainability and accuracy audits; negotiate vendor non‑reuse clauses; and formalise social‑partner engagement. An example training pathway is the AI Essentials for Work bootcamp: 15 weeks, courses include AI at Work: Foundations; Writing AI Prompts; Job‑Based Practical AI Skills. Cost is $3,582 early bird or $3,942 regular, payable in 18 monthly payments with the first due at registration. Making modular, short courses mandatory for affected roles is recommended.
What immediate actions can municipalities and agencies take to capture AI benefits while limiting risks?
A short practical checklist: 1) designate competent authorities and publish AI governance and appeal routes; 2) pilot low‑risk generative AI use cases (triage, draft replies, document extraction) to prove value; 3) require vendors to lock down training data and provide explainable outputs; 4) keep human‑in‑the‑loop checks for sensitive decisions and maintain audit trails; 5) invest in sovereign, cloud‑first infrastructure and negotiate non‑reuse of citizen data; 6) pair pilots with live, role‑focused upskilling and social‑partner consultation. These steps help capture eGovernment potential (an estimated EUR 4 billion transformation opportunity) while protecting citizens and public trust.
Ludo Fourrage
Founder and CEO
Ludovic (Ludo) Fourrage is an education industry veteran, named a Learning Technology Leader by Training Magazine in 2017. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space; as Senior Director of Digital Learning there, he led the development of a first‑of‑its‑kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful corporate MOOC programs in partnership with top business schools and consulting organizations, including INSEAD, Wharton, London Business School, and Accenture. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.