The Complete Guide to Using AI in the Government Industry in Switzerland in 2025

By Ludo Fourrage

Last Updated: September 6th 2025

Government official using AI dashboard in Switzerland, 2025

Too Long; Didn't Read:

Switzerland's 2025 AI approach balances innovation and safeguards: Federal Council ratified the Council of Europe AI Convention (Feb–Mar 2025), relies on the revised FADP, permits autonomous vehicles from 1 Mar 2025, targets a draft bill by end‑2026, and faces strong AI talent demand.

Switzerland's AI story in 2025 is a careful, innovation-first pivot: the Federal Council signalled a plural, sector‑specific approach - ratifying the Council of Europe's AI Convention and aiming to produce a regulatory proposal this year - while keeping most AI oversight anchored in existing laws like the revised Data Protection Act and sector rules, as covered in White & Case's AI Watch on Switzerland (White & Case AI Watch global regulatory tracker: Switzerland).

Practical moves include permitting autonomous vehicles on designated routes and funding specialised “Swiss ChatGPT” models for healthcare and science, showing Switzerland prefers targeted, tested deployments over blanket bans (see SwissInfo's roundup of 2025 changes: SwissInfo 2025 AI updates in Switzerland).

For public servants and policy teams navigating transparency, explainability and data‑protection duties, short, practical training is essential - see Nucamp's AI Essentials for Work syllabus for real‑world skills and prompt training (Nucamp AI Essentials for Work syllabus), because in Switzerland the rulebook is evolving fast but daily operations still demand reliable human oversight and clear governance.

Bootcamp | Length | Early bird cost | Register
AI Essentials for Work | 15 Weeks | $3,582 | Register for Nucamp AI Essentials for Work (15 Weeks)

“Not regulating AI would be like allowing pharmaceutical companies to invent new drugs and treatments and release them to the market without testing their safety.” - Michael Wade

Table of Contents

  • What is the AI strategy in Switzerland? (2025)
  • Does Switzerland use AI? Practical government examples (2025)
  • Is AI in demand in Switzerland? Market and skills outlook (2025)
  • General legal & regulatory framework in Switzerland (2025)
  • Data protection & automated decision‑making in Switzerland (FADP) (2025)
  • Generative AI, IP and trade secrets in Switzerland (2025)
  • Sectoral expectations & high‑risk areas in Switzerland (2025)
  • Governance, procurement and practical steps for Swiss public bodies (2025)
  • Conclusion & next steps for government AI in Switzerland (2025)
  • Frequently Asked Questions


What is the AI strategy in Switzerland? (2025)


Switzerland's 2025 AI strategy aims to protect rights without killing innovation: the Federal Council has chosen a plural, sector‑specific path - deciding on 12 February 2025 to ratify the Council of Europe's AI Convention (signed 27 March 2025) and favouring targeted legislative tweaks and voluntary measures over a single, sweeping “Swiss AI Act” (see White & Case AI Watch - AI regulation in Switzerland overview for an overview of the approach).

In practice this means keeping core protections (transparency, data protection, non‑discrimination and supervision) anchored in existing laws such as the revised FADP, product‑liability and civil rules while drafting sectoral amendments where needed; federal departments are tasked with preparing a draft bill for consultation and plans for non‑legislative measures by the end of 2026, and the Federal Chancellery is preparing concrete implementation steps for the administration by the end of 2025.

The result is a pragmatic toolbox: expect targeted rules for high‑risk domains (health, finance, transport) and guidance from bodies like the CNAI and FINMA rather than one‑size‑fits‑all regulation - think autonomous vehicles on designated routes and stronger governance for medical AI, not a blanket ban on innovation (see the Swiss Federal Chancellery artificial intelligence policy page).


Does Switzerland use AI? Practical government examples (2025)


Does Switzerland use AI? Yes - but mostly in cautious, practical ways that help public servants work smarter without handing decisions to machines: federal and cantonal teams deploy AI for semantic search across laws, automatic summarisation of guidance, routing incoming queries to the right office and public‑facing chatbots, while targeted projects include a “Swiss ChatGPT” for healthcare and science and pilot autonomous driving under the new Ordinance on Automated Driving (permitting driverless operation on designated routes and automated parking from 1 March 2025) (see SwissInfo's 2025 roundup).

The federal administration even published a clear fact sheet on generative AI (18 Jan 2024) that encourages responsible experimentation - summaries, code suggestions and images for presentations - while warning employees never to paste confidential, secret or personal data into chatbots and to verify any AI output before use; similar practical fact sheets exist at cantonal and municipal levels (see Chambers' Artificial Intelligence 2025 guide for Switzerland).

For a living inventory of where AI is actually deployed in public bodies, civil society's AlgorithmWatch CH “Atlas of Automation Switzerland” remains the most comprehensive reference, proving the point: Switzerland prefers stepwise, supervised pilots over headline-grabbing rollouts.


Is AI in demand in Switzerland? Market and skills outlook (2025)


Demand for AI talent in Switzerland is real and rising - but it is shaped more by sectoral need than by hype: strong research hubs (ETH Zurich, EPFL and institutes like IDSIA/IDIAP) and a dense startup scene mean Switzerland punches above its weight (it ranks top in AI patents per capita and has the most AI companies per citizen in Europe), so employers from healthcare to finance and transport are hunting for data engineers, NLP specialists and governance‑minded model validators rather than just prompt tinkerers (Switzerland AI legal framework - Global Legal Insights).

Public‑sector demand mirrors this pattern: ministries and cantons look for staff who can run Retrieval‑Augmented Generation pipelines, validate outputs, and embed explainability into services while preserving privacy - exactly the lifelong‑learning and reskilling goals the Federal report recommends to keep the labour force competitive (Switzerland AI strategy report - AI Watch).

Adoption signals are mixed but promising - GenAI and RAG architectures are driving rapid, cross‑sector pilots (healthcare, social security, financial compliance), so the practical skillset that sells is hybrid: technical model work plus governance, data stewardship and human‑in‑the‑loop review - in short, Swiss employers want people who can turn experimental models into trustworthy public services without handing decision‑making to a black box.

One memorable test: when a GenAI summary affects a patient pathway, the ability to trace and correct that single line can be the difference between trust and public backlash.
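That traceability requirement can be made concrete in code. The sketch below is a minimal, illustrative Retrieval‑Augmented Generation loop (the document store, keyword‑overlap retriever and answer format are hypothetical stand‑ins, not any Swiss system's actual pipeline): every answer keeps the IDs of the documents it was built from, so a reviewer can trace a claim back to its source.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    sources: list  # document IDs the answer was built from

# Hypothetical guidance snippets standing in for a real document store.
DOCS = {
    "guid-001": "DPIA required for high-risk processing of personal data",
    "guid-002": "Automated decisions with legal effect allow manual review",
    "guid-003": "Autonomous vehicles permitted on designated routes",
}

def retrieve(query: str, k: int = 2) -> list:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def answer(query: str) -> Answer:
    """Build an answer from retrieved passages, keeping provenance."""
    source_ids = retrieve(query)
    summary = " / ".join(DOCS[i] for i in source_ids)
    return Answer(text=summary, sources=source_ids)

result = answer("When is a DPIA required for personal data processing?")
# Every answer carries the IDs needed to trace each claim to a source.
print(result.sources)
```

A real deployment would swap the keyword retriever for a vector index and the string join for an LLM call, but the design point survives the swap: provenance travels with the answer.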


General legal & regulatory framework in Switzerland (2025)


Switzerland's 2025 legal landscape for government AI is deliberately pragmatic: there is no single “Swiss AI Act,” instead the Federal Council has chosen a technology‑neutral, sector‑specific route - signing the Council of Europe's AI Convention and directing federal departments to draft implementing amendments and non‑binding measures while relying on existing laws for immediate oversight (see White & Case's AI Watch on Switzerland for the regulatory roadmap).

In practice that means the revised Federal Act on Data Protection (FADP) already governs most AI processing of personal data, FINMA's guidance frames expectations for financial services, and product‑liability, civil and criminal rules remain the fallback tools for harms; regulators and coordinating bodies (CNAI, FDPIC, FINMA) focus on transparency, traceability and governance rather than prescriptive, one‑size‑fits‑all rules.

The upside is flexibility for innovation and federal sandboxes; the downside is potential fragmentation across sectors and cross‑border compliance work for suppliers.

For public bodies this translates into concrete duties: treat AI as an application of existing rules, run impact assessments for high‑risk uses, and remember enforcement can bite - data protection breaches carry serious consequences (including FADP‑linked sanctions).

For a concise regulatory snapshot and next steps, read White & Case's tracker and the FDPIC clarification that the FADP already applies to AI.

“A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.” - Definition of an “AI system”, Council of Europe AI Convention

Data protection & automated decision‑making in Switzerland (FADP) (2025)


Switzerland's revised Federal Act on Data Protection (FADP) makes data protection a practical checkpoint for any government AI rollout in 2025: public bodies must be transparent about processing, keep records of activities, embed privacy‑by‑design/default, and carry out a Data Protection Impact Assessment (DPIA) where new technologies or large‑scale profiling could risk individuals' rights; critically, data subjects can object to fully automated decisions that produce legal effects and request a manual review, so algorithms can't quietly replace human oversight.

The FADP also reaches activities with effects in Switzerland, tightens rules for sensitive data (health, biometric, genetic), requires timely breach notification to the FDPIC and affected people, and limits transfers abroad to adequate destinations or explicit safeguards - making traceability, human‑in‑the‑loop checks and contractual transfer clauses core procurement requirements for GenAI or RAG systems (see the New Federal Act on Data Protection (FADP) analysis and practical notes from SecurePrivacy: Switzerland New Federal Act on Data Protection (FADP) analysis and the clear explainer at AdNovum: Swiss Federal Act on Data Protection 2023 explainer).

Topic | Key point (FADP)
Automated individual decisions | Right to object and request a manual check for fully automated decisions
Data Protection Impact Assessment (DPIA) | Mandatory where processing poses a high risk to personality or fundamental rights
Breach notification | Notify the FDPIC and affected data subjects promptly when high risk exists
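The automated‑decision right above can be mirrored directly in application logic. The sketch below is illustrative only (the field names, statuses and workflow are assumptions, not FADP prescriptions): fully automated decisions with legal effect stay provisional so the data subject can request a manual check before the decision binds, and a human review lifts the "fully automated" flag.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    fully_automated: bool
    legal_effect: bool
    status: str = "final"

def issue(decision: Decision) -> Decision:
    """Fully automated decisions with legal effect are held provisional,
    so the data subject can request a manual check before they bind."""
    if decision.fully_automated and decision.legal_effect:
        decision.status = "provisional_pending_review"
    return decision

def object_and_review(decision: Decision, reviewer: str) -> Decision:
    """A human reviewer confirms or overturns the outcome; the decision
    is no longer fully automated once a person has checked it."""
    decision.fully_automated = False
    decision.status = f"reviewed_by:{reviewer}"
    return decision

d = issue(Decision("subj-42", "benefit_denied",
                   fully_automated=True, legal_effect=True))
print(d.status)  # provisional_pending_review
```

The design choice worth copying is structural: the review path exists in the data model from day one, rather than being bolted on after a complaint.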

Data protection should not be seen as an obstacle that slows down the company's growth. The opposite is true: data protection creates trust and security on the path of the company's digital transformation. - Yasin Kücükkaya


Generative AI, IP and trade secrets in Switzerland (2025)


Generative AI in Switzerland sits at a legal crossroads: training datasets that include copyrighted works can amount to unauthorised copying and trigger civil (and in some cases criminal) liability, while purely AI‑generated output generally lacks copyright protection unless a human author's creative input is clear - so exclusivity on a chatbot answer is not guaranteed (see the detailed discussion in the Artificial Intelligence 2025 practice guide).

That legal reality makes trade‑secret and contract tools essential: curated training sets, model weights and source code can be protected as business/manufacturing secrets, and NDAs or licence terms are common procurement safeguards, though injunctions cannot undo a disclosed secret and damages are often hard to prove.

For public bodies the rule is blunt and practical: do not paste confidential, secret or personal data into LLMs - guidance and fact sheets for federal staff (including LLM and generative AI factsheets) are published by CNAI and the Federal Chancellery to support safe experimentation.
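That blunt rule can be partially enforced before a prompt ever leaves the building. The sketch below is a minimal pre‑submission screen (the patterns are illustrative assumptions only; a real deployment would rely on proper data‑loss‑prevention and classification tooling, not a handful of regexes): prompts that look like they contain personal identifiers or classification markings are blocked from public chatbots.

```python
import re

# Illustrative patterns only - a real deployment would use dedicated
# DLP / data-classification tooling, not a short regex blocklist.
BLOCKLIST = [
    re.compile(r"\b\d{3}\.\d{4}\.\d{4}\.\d{2}\b"),            # AHV-number-like ID
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),               # email address
    re.compile(r"(?i)\b(confidential|classified|secret)\b"),  # markings
]

def safe_for_public_llm(prompt: str) -> bool:
    """Return False if the prompt appears to contain personal or
    classified data that must not be sent to a public chatbot."""
    return not any(p.search(prompt) for p in BLOCKLIST)

print(safe_for_public_llm("Summarise the public consultation schedule"))  # True
print(safe_for_public_llm("Draft a reply to anna.muster@example.ch"))     # False
```

A screen like this catches careless paste errors, not determined misuse; it complements the fact sheets' training message rather than replacing it.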

The net effect for Swiss government deployers and suppliers is straightforward - combine contractual clauses and trade‑secret controls with robust review and plagiarism checks, and treat provider terms carefully to avoid unexpected IP or unfair‑competition exposure.

Topic | Key point (Switzerland, 2025)
Training data | Using copyrighted material to train models can infringe copyright; commercial exceptions are narrow
AI output | Purely AI‑generated works lack copyright unless sufficient human creative input exists
Trade secrets & contracts | Models and curated datasets can be protected as secrets; NDAs and licence terms are vital but remedies are limited
Public‑sector guidance | Federal fact sheets warn: never submit confidential or personal data to public LLMs (CNAI generative AI instruction sheets for federal staff, Federal Chancellery artificial intelligence guidance)

Sectoral expectations & high‑risk areas in Switzerland (2025)


Sectoral expectations in Switzerland for 2025 centre squarely on financial services as a high‑risk, high‑priority area: FINMA expects supervised institutions to be proactive, forward‑looking and to manage operational resilience, concentration and outsourcing risk rather than rely on one‑size‑fits‑all rules, so banks and insurers should expect intensified, risk‑based scrutiny (including more on‑site reviews) and clearer rules on group‑level exposure under the new Circular 2025/4 on consolidated supervision; at the same time, digitalisation and SupTech are explicit strategic priorities, meaning regulators will reward robust governance, traceability and contractual safeguards when critical services are outsourced to cloud providers (see FINMA guidance for supervised institutions and the FINMA compliance checklist for cloud outsourcing with Microsoft).

In practice this translates into tight expectations for outsourcing contracts, documented risk assessments for single‑vendor concentration, and concrete proofs of resilience for services that could materially affect clients - a single cloud failure or contract gap can suddenly turn a neat AI pilot into a systemic headache, so procurement and legal teams must embed FINMA‑aligned clauses from day one.

Sector / Topic | Key expectation (Switzerland, 2025)
Financial services | Proactive, risk‑based supervision; stronger governance and resilience
Consolidated supervision | Clarified scope under Circular 2025/4 to cover group risks and special‑purpose vehicles
Outsourcing & cloud | Mandatory contractual safeguards, due diligence and avoidance of single‑vendor concentration
Regulatory tools | Use of SupTech, on‑site reviews and tailored FINMA guidance (non‑binding but practical)

“This new structure promotes our goal for FINMA of preventive supervision that achieves maximum impact at the supervised institutions while continuing to supervise them in a risk‑based and proportionate manner.” - Stefan Walter

Governance, procurement and practical steps for Swiss public bodies (2025)


Governance and procurement are the front‑line safeguards for Swiss public bodies adopting AI: FINMA's Guidance Note 08/2024 stresses the need to catalogue every AI system, set a single, accountable governance framework across development, deployment and monitoring, mandate regular testing and independent reviews, and ensure staff have AI expertise rather than decentralised, opaque ownership (FINMA Guidance Note 08/2024 - AI governance and risk management (Pestalozzi)).

Practical steps include building an AI inventory and risk‑classification intake, running DPIAs and bias checks, embedding human‑in‑the‑loop controls so that responsibility for decisions is never delegated to a model or vendor, and documenting data provenance and quality.
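The inventory‑and‑triage step above lends itself to a simple, auditable structure. The sketch below is illustrative (the risk tiers and classification criteria are assumptions for the example, not an official Swiss scheme): each catalogued system names one accountable owner, and systems touching legal rights or sensitive domains are routed to a DPIA and human‑in‑the‑loop review before deployment.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3

@dataclass
class AISystem:
    name: str
    owner: str              # one accountable unit, never "shared"
    processes_personal_data: bool
    affects_legal_rights: bool
    sensitive_domain: bool  # e.g. health, policing, social security

def classify(system: AISystem) -> Risk:
    """Illustrative triage: systems touching rights or sensitive domains
    require a DPIA and human-in-the-loop review before deployment."""
    if system.affects_legal_rights or system.sensitive_domain:
        return Risk.HIGH
    if system.processes_personal_data:
        return Risk.LIMITED
    return Risk.MINIMAL

inventory = [
    AISystem("semantic-law-search", "Chancellery", False, False, False),
    AISystem("benefit-triage-pilot", "Social Office", True, True, True),
]
needs_dpia = [s.name for s in inventory if classify(s) is Risk.HIGH]
print(needs_dpia)  # ['benefit-triage-pilot']
```

Keeping the classification rules in code (and under version control) also gives auditors the traceability FINMA's guidance asks for: the criteria used to wave a system through are inspectable, not tribal knowledge.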

Procurement must demand contractual safeguards, documented SLAs, audit rights and explicit clauses to avoid single‑vendor concentration and third‑party lock‑in - measures FINMA highlights as essential for operational resilience - while drawing on established playbooks for operationalising governance and vendor due diligence (OneTrust AI governance playbook - operationalizing responsible AI).

Pair these controls with reskilling and cross‑administrative knowledge sharing recommended in the Switzerland AI strategy report so teams can move from cautious pilots to trustworthy services; remember, a single cloud outage or missing contract clause can turn a neat AI pilot into a systemic headache, so tighten oversight from day one (Switzerland national AI strategy report - AI Watch).

“We are undoubtedly in an era of radical innovation and change and there is a mounting need for AI's fast and effective governance.” - Alois Zwinggi

Conclusion & next steps for government AI in Switzerland (2025)


Switzerland's path in 2025 is clear: preserve innovation while hardening guardrails - the Federal Council's Digital Switzerland Strategy puts AI, information/cybersecurity and open source at the top of the administration's agenda, and the DETEC/FDFA/FDJP workstreams aim to translate the Council of Europe AI Convention into sector‑specific measures rather than a single sweeping law; practical next steps include a draft bill for consultation by the end of 2026 and an implementation plan for non‑legally binding measures (see the Federal Council's announcement and the public Action Plan for concrete measures).

For government teams that must deliver services, this means two immediate priorities: tighten operational resilience (catalogue systems, embed human‑in‑the‑loop checks, demand contractual audit rights) and invest in people - reskilling and prompt‑engineering skills will matter as much as legal literacy, so hands‑on courses like Nucamp's AI Essentials for Work can help civil servants turn pilot projects into trustworthy services without surrendering control.

The payoff is practical: better public services, safer deployments and fewer surprises - because, as regulators warn, a single cloud outage or contract gap can turn a neat AI pilot into a systemic headache.

Next step | Target / start
Draft bill to implement AI Convention (DETEC, FDFA, FDJP) | By end of 2026
Strategy for use of AI systems in the Federal Administration | Start: 2025 (Action Plan)
Implementation plan for non‑legally binding measures | Start: 2026 (Action Plan)
CFC‑Copilot for staff (pilot capacity‑building) | Start: 2025 (Action Plan)

“AI holds transformative potential to address society's most pressing challenges, but unlocking this requires informed, adaptable and responsible policy-making.” - Cathy Li

Frequently Asked Questions


What is Switzerland's AI strategy in 2025?

In 2025 Switzerland follows a pragmatic, sector‑specific AI approach: the Federal Council decided to ratify the Council of Europe's AI Convention (decision 12 February 2025; Convention signed 27 March 2025) and favours targeted legislative amendments, guidance and voluntary measures over a single omnibus “AI Act.” Core protections such as transparency, non‑discrimination and data protection remain anchored in existing laws (notably the revised FADP), while federal departments must prepare a draft bill for consultation and non‑legislative measures by the end of 2026 and concrete administrative implementation steps by the end of 2025.

How is AI being used practically by Swiss government bodies in 2025?

Swiss federal and cantonal administrations use AI in cautious, practical ways: semantic search across legislation, automatic summarisation of guidance, routing queries, public chatbots and targeted pilots such as a “Swiss ChatGPT” for healthcare and science and autonomous vehicle pilots on designated routes under the 2025 Ordinance on Automated Driving. The administration publishes fact sheets (e.g., generative AI guidance) that encourage responsible experimentation while warning staff not to paste confidential or personal data into public LLMs.

What are the main legal and data‑protection requirements for government AI under the FADP in 2025?

The revised Federal Act on Data Protection (FADP) already governs most AI processing of personal data: public bodies must be transparent, keep processing records, apply privacy‑by‑design/default, and run Data Protection Impact Assessments (DPIAs) where processing poses high risks. Data subjects can object to fully automated decisions that produce legal effects and request a manual review. The FADP tightens rules for sensitive data, requires breach notification to the FDPIC and affected individuals, and limits transfers abroad unless adequate safeguards exist.

Which sectors are treated as high‑risk and what do regulators expect (especially FINMA) in 2025?

Financial services are treated as a high‑risk, high‑priority sector: FINMA expects supervised institutions to adopt proactive, risk‑based governance, manage operational resilience, concentration and outsourcing risks, and document due diligence for cloud and vendor arrangements. Circular 2025/4 clarifies consolidated supervision expectations. Across high‑risk domains such as health, finance and transport regulators demand traceability, robust contractual safeguards, audited SLAs and avoidance of single‑vendor concentration.

What governance, procurement and practical steps should Swiss public bodies take now?

Public bodies should build an AI inventory and risk classification intake, run DPIAs and bias checks, embed human‑in‑the‑loop controls so responsibility is never fully delegated to models, document data provenance and quality, and mandate regular testing and independent reviews. Procurement must require contractual safeguards, audit rights, SLAs and clauses to avoid single‑vendor lock‑in. Immediate policy milestones to track include the draft bill due by end‑2026, implementation steps for the administration by end‑2025, and planned pilots such as the CFC‑Copilot starting in 2025.


Ludo Fourrage

Founder and CEO

Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space. As Senior Director of Digital Learning there, he led the development of a first-of-its-kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful corporate MOOC programs in partnership with top business schools and consulting organizations, e.g. INSEAD, Wharton, London Business School and Accenture, to name a few. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.