Top 10 AI Prompts and Use Cases in the Government Industry in Slovenia
Last Updated: September 13th 2025

Too Long; Didn't Read:
Slovenia's government AI playbook prioritizes responsible prompts and pilots across public administration, health, smart cities and culture - backed by the NpUI's ~EUR 110M to 2025, EU AI Act fines of up to €35M or 7% of global turnover, ensemble forecasts that beat ~83%/~91% of single-model submissions, Vega's ~6.9 PFLOPS (240 A100 GPUs), and 24–41% peak-traffic reductions.
Slovenia's national AI programme makes it plain: the public sector is meant to be a first mover in responsible AI, from
“learning by doing” with regulatory sandboxes and pilot projects
to wide-open public data and staff training - so precise, explainable prompts matter more than ever to translate research into trustworthy services (see the AI Watch synthesis).
Prompts shape how front-line staff, auditors and NGOs interact with models, supporting ethical frameworks and faster integration of AI findings across government; practical training, such as Nucamp's AI Essentials for Work, helps close the skills gap highlighted by policymakers and researchers alike.
Bootcamp | Length | Early-bird Cost | Syllabus |
---|---|---|---|
AI Essentials for Work | 15 Weeks | $3,582 | AI Essentials for Work syllabus |
Table of Contents
- Methodology: How we selected the Top 10 prompts and use cases
- Danes je nov dan - Public Sector AI Registry prompt
- AI procurement risk & regulatory compliance checklist (EU AI Act & NpUI)
- Explainable Fraud & Tax-Evasion Detection Assistant (Financial Administration)
- Epidemic Forecasting & Healthcare Planning (Jožef Stefan Institute)
- METIS Education Early-Warning & Intervention System
- Border & Passenger-Screening Explainability (Jože Pučnik International Airport)
- Slovene-language OCR & Cultural-Heritage Pipeline (CLARIN / IRCAI)
- Smart Cities Optimization (SRIP CS&C & Edge-AI)
- HPC-backed Research & Testing Job Plan (EuroHPC Vega / RIVR VEGA)
- AI Observability, Monitoring & Incident Response (Riverbed-style AIOps)
- Conclusion: Start small and scale responsibly
- Frequently Asked Questions
Check out next:
See how the Ministry of Digital Transformation coordinates policy, regulation and resources to steer Slovenia's AI rollout responsibly.
Methodology: How we selected the Top 10 prompts and use cases
Selection of the Top 10 prompts and use cases followed a practical, policy‑driven lens: priority went to scenarios that map directly to Slovenia's National Programme (NpUI) strategic objectives - especially public administration, health, Industry 4.0, and culture & language technologies - and to interventions that can be piloted within the programme's resources (the NpUI earmarks around EUR 110 million to 2025), as described in the Slovenia AI Strategy Report (AI Watch Slovenia AI Strategy Report).
Equally important were governance and regulatory fit (alignment with inter‑ministerial oversight and the NpUI working groups), infrastructure readiness (HPC and edge capabilities such as EuroHPC Vega / RIVR VEGA) and human‑capital lift (education, reskilling and lifelong learning).
Selection also favoured use cases that support trust, explainability and measurable public value - criteria framed by the OECD's description of the NpUI Government working group and Slovenia's regulatory track (see OECD NpUI Government working group), so each prompt can move quickly from sandbox to a transparent, auditable pilot that citizens can understand.
Selection criterion | Evidence / source |
---|---|
Alignment with NpUI strategic objectives | Slovenia AI Strategy Report (AI Watch) |
Priority sectors (health, Industry 4.0, culture & language, public admin) | Slovenia AI Strategy Report (AI Watch) |
Governance & regulatory fit | NpUI Government working group (OECD) |
Infrastructure readiness (HPC, Edge‑AI) | Slovenia AI Strategy Report (AI Watch) |
Human capital & upskilling potential | Slovenia AI Strategy Report (AI Watch) |
Danes je nov dan - Public Sector AI Registry prompt
Danes je nov dan's Public Sector AI Registry is a practical transparency tool that lifts the curtain on how Slovenian public institutions are deploying AI - filling a long‑standing gap in systematically collected information and giving journalists, civil society and citizens a searchable entry point to spot where automated decision‑making may affect rights and services.
The listing is intentionally modest today - only a handful of significant cases - but it already crowdsources research, public requests and verified tips to pressure institutions toward clearer reporting and accountability.
A well‑crafted query matters here: prompts must be specific and grounded to avoid vague or misleading results, so civil‑society investigators and journalists benefit from the same prompt‑discipline recommended for generative models.
In short, the registry turns diffuse concern into actionable oversight - like finding a labelled map where previously there was only a tangle of dotted lines - and gives advocates the tools to demand explanations and redress when an algorithmic decision touches a life.
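To make that prompt discipline concrete, here is a minimal sketch in Python of how an investigator might structure a grounded registry query before putting it to a generative model; the field names and example values are entirely hypothetical and do not reflect the registry's actual schema.

```python
# A minimal sketch of a grounded, specific query for registry research.
# All field names and example values are hypothetical illustrations,
# not the actual schema of the Danes je nov dan registry.

def build_registry_query(institution: str, system_type: str, question: str) -> str:
    """Compose a specific, source-grounded prompt for an investigator."""
    return (
        "You are assisting a civil-society researcher reviewing public-sector AI use in Slovenia.\n"
        f"Institution of interest: {institution}\n"
        f"Type of system: {system_type}\n"
        f"Question: {question}\n"
        "Constraints:\n"
        "- Answer only from the registry entries and documents provided below.\n"
        "- Quote the entry or document you rely on; if the information is missing, say so.\n"
        "- Do not speculate about systems that are not listed."
    )

if __name__ == "__main__":
    print(build_registry_query(
        institution="Financial Administration (FURS)",
        system_type="automated risk scoring",
        question="Which decisions does the system influence, and is there human review?",
    ))
```

The explicit constraints at the end are what keep the answer auditable: the model is told to ground itself in listed entries and to admit gaps rather than fill them.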
AI procurement risk & regulatory compliance checklist (EU AI Act & NpUI)
Public procurement is where Slovenia's NpUI ambitions meet hard regulatory reality, so every buying decision needs a compact, risk‑first checklist: start with a full AI inventory and risk classification, bake governance roles and an AI lifecycle risk‑management plan into the tender, insist on detailed technical documentation (training data provenance, audit logs, explainability), require human‑oversight and robustness testing, and lock in cybersecurity, monitoring and post‑market obligations in contract terms.
Model clauses and procurement guidance make this practical - use the updated EU model clauses (MCC‑AI) and sectoral procurement templates to translate legal duties into measurable award criteria - and require suppliers to support downstream compliance and evidence.
Time matters: general‑purpose AI obligations kicked in on 2 Aug 2025 and high‑risk rules follow in 2026–27, and non‑compliance carries serious exposure (fines can reach €35 million or 7% of global turnover).
For ready, industry‑focused checklists that map deadlines, testing and documentation to procurement actions, see the EU AI Act compliance checklist and the practical procurement guide with model clauses, which together give public buyers the pieces needed to turn NpUI pilots into legally defensible, auditable services.
Explainable Fraud & Tax-Evasion Detection Assistant (Financial Administration)
An Explainable Fraud & Tax‑Evasion Detection Assistant for Slovenia's Financial Administration should pair powerful pattern‑recognition with clear legal and human safeguards so that algorithmic flags can be trusted, contested and corrected; CIAT's primer on XAI for tax authorities stresses that when a tax decision rests on a model “the tax authorities must be able to explain to taxpayers how this decision was made,” and that favors interpretable models, audit trails and bias testing over inscrutable “black boxes” (CIAT primer on explainable AI for tax administration).
Constitutional and GDPR principles reinforce the point: explainability, the right to human intervention (Article 22 GDPR) and access to the logic behind decisions are foundational for legitimacy - lessons driven home by European cases such as the SyRI/kinderopvang scandal that exposed the human cost when opacity meets enforcement (see the analysis of constitutional paths to XAI) (Analysis of constitutional principles paving the way to explainable AI in tax law).
Practically, Slovenia's approach should mandate explainable algorithms or post‑hoc explanations, clear escalation to human auditors, continuous monitoring and full documentation so that each automated selection is a defensible, auditable step rather than an unexplained black‑box allegation.
“With great power comes great responsibility.”
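A minimal sketch of the “interpretable model plus reason codes” idea, using a small logistic regression on synthetic data; the feature names are hypothetical and nothing here reflects the Financial Administration's actual models or data.

```python
# Sketch: an interpretable risk score whose top contributing features are
# surfaced as "reason codes" for a human auditor. Synthetic data only;
# feature names are hypothetical, not FURS practice.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["late_filings", "cash_ratio", "vat_mismatch", "related_party_share"]

# Synthetic training data: 500 taxpayers, binary "audit found issue" label.
X = rng.normal(size=(500, len(features)))
y = (X @ np.array([0.8, 0.2, 1.1, 0.5]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_flag(x: np.ndarray, top_k: int = 2) -> dict:
    """Return the risk score and the features contributing most to it."""
    contributions = model.coef_[0] * x           # per-feature contribution to the logit
    order = np.argsort(-np.abs(contributions))[:top_k]
    return {
        "risk_score": float(model.predict_proba(x.reshape(1, -1))[0, 1]),
        "reason_codes": [(features[i], float(contributions[i])) for i in order],
    }

# Example: one taxpayer record, logged so the selection is auditable and contestable.
print(explain_flag(X[0]))
```

The point of the sketch is the output shape: every automated selection carries both a score and the named factors behind it, which is what a human auditor (or a taxpayer exercising Article 22 rights) needs in order to contest it.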
Epidemic Forecasting & Healthcare Planning (Jožef Stefan Institute)
Short‑term, probabilistic forecasts have practical value for Slovenia's hospital capacity planning and outbreak response: multi‑model ensembles - like those collated by the European Forecast Hub - consistently improved real‑time predictive performance for 1–4 week horizons and gave public health teams more reliable situational awareness than single models, especially for deaths, where stability is higher over short horizons. The Hub's median ensemble outperformed roughly 83% of case forecasts and 91% of death forecasts across 32 European countries, and median averaging proved more robust to outliers than mean averaging (European Forecast Hub multi-model ensemble study).
For the Jožef Stefan Institute and Slovenian planners, the lesson is pragmatic: adopt open, ensemble pipelines, publish quantiles and scores for transparency, and prioritise 1–2 week operational forecasts for capacity decisions while treating longer horizons cautiously.
Complementary national‑scale models have also shown strong retrospective two‑week accuracy in city settings - an encouraging sign that combining local modelling expertise with ensemble methods and open data can turn fragmented signals into clear, actionable guidance for bed management, vaccine campaigns and surge staffing (retrospective two-week forecast model results).
Metric | Value / Finding |
---|---|
Ensemble vs models (cases) | Outperformed ~83% of forecasts |
Ensemble vs models (deaths) | Outperformed ~91% of forecasts |
Median ensemble WIS (median) | 0.71 (baseline = 1.0) |
Case horizon effect (1→4 weeks) | WIS worsened from 0.62 to 0.90 |
Death horizon effect (1→4 weeks) | WIS ~0.69 → 0.76 (more stable) |
“The two-week retrospective forecasts of the total number of cases and the number of active COVID-19 cases demonstrated fairly high accuracy in Moscow and Saint Petersburg. The MAPE (mean absolute percentage error) for the total number of cases at the morbidity peaks did not exceed 1% …”
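A minimal numpy sketch of the median-ensemble idea behind the table above: several models each submit quantile forecasts, and the ensemble takes the element-wise median, which is what makes it robust to a single outlier model. The numbers are made-up illustrations, not Forecast Hub submissions.

```python
# Sketch of a median quantile ensemble for short-horizon forecasts.
# The three "model" rows below are illustrative, not real Hub data.
import numpy as np

quantile_levels = [0.05, 0.25, 0.5, 0.75, 0.95]

# Each row: one model's hospital-admission forecast quantiles for week 1.
model_forecasts = np.array([
    [120, 150, 180, 210, 260],   # model A
    [100, 140, 170, 205, 250],   # model B
    [300, 340, 380, 420, 500],   # model C (outlier run)
])

# Element-wise median across models: robust to model C's outlier,
# whereas the mean is pulled upward by it.
median_ensemble = np.median(model_forecasts, axis=0)
mean_ensemble = np.mean(model_forecasts, axis=0)

for q, med, mean in zip(quantile_levels, median_ensemble, mean_ensemble):
    print(f"q={q:.2f}  median-ensemble={med:.0f}  mean-ensemble={mean:.0f}")
```

Publishing the per-quantile ensemble alongside its scores, as the Hub does, is what lets planners see both the central estimate and the uncertainty they are being asked to plan around.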
METIS Education Early-Warning & Intervention System
The METIS Education Early‑Warning & Intervention System showcases a bottom‑line, school‑centred use of AI: models monitor grades and absences to flag pupils at increased risk, refer them to an education professional, and follow a tailor‑made action plan tracked through a smartphone app that sends reminders and even praises progress - a concrete path from detection to human‑led support documented on the METIS early‑warning system for detecting learning difficulties project page (METIS early‑warning system for detecting learning difficulties - project page).
METIS was deployed in three institutions with the explicit aim of scaling to any primary, secondary or tertiary school, but its pilot also underlines how fragile trust can be when budgets and data practices are squeezed: an independent review of Slovenia's ADM landscape flagged limited funding (€70,000) and methodological shortcuts that fuelled public debate about whether simple signals like falling grades require complex AI models (AlgorithmWatch Automating Society 2020 report on Slovenia's ADM landscape).
To turn promise into durable practice, METIS would benefit from the institutional safeguards and mentoring, teacher teams, and evaluated protocols recommended by European early‑warning pilots such as CroCooS (CroCooS early‑warning system guidance for preventing school dropout), pairing algorithmic alerts with clear human oversight so that an early flag reliably leads to supportive, explainable action rather than stigma.
Feature | METIS detail |
---|---|
Approach | AI/ML on educational success indicators (grades, absences) |
Intervention | Referral to educator + individual action plan tracked by smartphone app |
Deployment | Pilot at three institutions; aim to expand to all school levels |
Budget / critique | Reported €70,000 pilot budget; methodological concerns noted in reviews |
“You do not need AI to see that a pupil has got worse grades in the second semester. You can do that with Excel.”
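As a hedged illustration of the alert-then-human-referral pattern (not METIS's actual model or thresholds), a flag can be raised from simple indicators and always routed to an educator rather than acted on automatically - which is also the point the critique above makes about keeping the method proportionate to the signal.

```python
# Illustrative early-warning rule: flag, then refer to a human educator.
# Thresholds and field names are hypothetical, not METIS internals.
from dataclasses import dataclass

@dataclass
class PupilRecord:
    pupil_id: str
    grade_avg_prev_term: float
    grade_avg_this_term: float
    absence_days_this_term: int

def early_warning(record: PupilRecord,
                  grade_drop_threshold: float = 1.0,
                  absence_threshold: int = 10) -> dict | None:
    """Return a referral suggestion for a human educator, or None if no flag."""
    grade_drop = record.grade_avg_prev_term - record.grade_avg_this_term
    reasons = []
    if grade_drop >= grade_drop_threshold:
        reasons.append(f"grade average fell by {grade_drop:.1f}")
    if record.absence_days_this_term >= absence_threshold:
        reasons.append(f"{record.absence_days_this_term} days absent this term")
    if not reasons:
        return None
    # The output is a suggestion for the educator, never an automatic action.
    return {"pupil_id": record.pupil_id, "reasons": reasons,
            "next_step": "refer to education professional for an individual action plan"}

print(early_warning(PupilRecord("p-001", grade_avg_prev_term=4.2,
                                grade_avg_this_term=2.9, absence_days_this_term=12)))
```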
Border & Passenger-Screening Explainability (Jože Pučnik International Airport)
For Jože Pučnik International Airport, the priority is not whether to try AI but how to make it explainable: lessons from TSA and CBP show that machine‑learning object detection and biometric flows can speed passenger processing and flag threats, but only if outputs are transparent, auditable and integrated into human workflows - examples include automated baggage‑screening and biometric pre‑checks that notify officers before a traveller reaches the counter (see the DHS overview of AI for border security).
European practice stresses that border control systems are “high‑risk” and must be interpretable, so procurement should insist on explainability, confidence scores, audit logs and clear escalation paths for human review (see explainable AI guidance for border forces).
Research on human–AI teaming - like CHIMERAS - underscores design choices that make AI a reliable teammate rather than a mysterious oracle, and operational features such as colour‑coded bounding boxes or annotated alerts (used in advanced X‑ray and anomaly detection systems) turn opaque signals into actionable, contestable evidence for officers and travellers alike; this is the practical route to faster, fairer and legally defensible screening at Slovenia's gateways.
“high-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system's output and use it appropriately.”
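A small sketch of how a detection can be surfaced as contestable evidence: every alert carries a confidence score, a human-readable escalation decision and an audit-log entry. The classes, thresholds and field names are hypothetical, not any vendor's API.

```python
# Sketch: turning an opaque detector output into an auditable, contestable alert.
# Detection fields, the threshold and the escalation rule are illustrative only.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class Detection:
    item_id: str
    label: str                          # e.g. "prohibited-item"
    confidence: float                   # 0.0 - 1.0
    bbox: tuple[int, int, int, int]     # x, y, width, height on the X-ray image

AUDIT_LOG: list[dict] = []

def escalate(det: Detection, threshold: float = 0.6) -> str:
    """Decide whether to route the detection to a human officer and log the decision."""
    decision = ("refer to officer for manual inspection"
                if det.confidence >= threshold
                else "no escalation; record only")
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "detection": asdict(det),
        "threshold": threshold,
        "decision": decision,
    })
    return decision

print(escalate(Detection("bag-042", "prohibited-item", 0.87, (120, 40, 60, 35))))
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

The audit log is the piece procurement should insist on: it is what lets an officer, an auditor or a traveller reconstruct why a bag was pulled aside.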
Slovene-language OCR & Cultural-Heritage Pipeline (CLARIN / IRCAI)
The Slovene‑language OCR & cultural‑heritage pipeline stitches Europeana content, national digitisation projects and CLARIN tooling into a practical route for preserving and reusing Slovenia's written past: CLARIN's integration of Europeana Collections means researchers can discover ~135,000 cultural heritage objects (22 collections) and immediately run OCRed Slovenian texts - roughly 73,000+ items sourced largely from the Digital Library of Slovenia - through the Language Resource Switchboard and chains like WebLicht or Voyant for rapid analysis (CLARIN and Europeana integration for 135,000 cultural heritage items).
The national node, CLARIN.SI, keeps a certified repository of 600+ language resources, online concordancers and expert support that make these pipelines usable by humanities teams, libraries and public institutions (CLARIN.SI services and certified repository for Slovenian language resources), while local digitisation efforts - from Rakičan to Gorenjska - feed new scans and metadata into Europeana and national archives so cultural items are not only preserved but discoverable across Europe (Interreg report on digital heritage in Slovenia).
The result is a searchable, explainable pipeline that turns fragile paper into reusable, machine‑readable assets for research, education and civic access - imagine a searchable attic of newspapers, pamphlets and manuscripts suddenly available to students, journalists and museum curators alike.
Metric | Value / Detail |
---|---|
Collections identified | 22 |
Cultural heritage objects (CHOs) | ~135,000 |
Slovenian text resources | 73,000+ (mostly Digital Library of Slovenia) |
Tools via LRS | 9 (incl. WebLicht, Voyant) |
CLARIN.SI repository | 600+ language resources & tools |
“Through the project, our organisation has made concrete steps for the development of sustainable solutions for the conservation of our cultural heritage with the added value of connecting the Research Centre with other regional and national libraries in Slovenia.”
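A minimal OCR step, hedged: the sketch below assumes Tesseract with the Slovenian language data ("slv") installed and uses pytesseract/Pillow as a stand-in for whatever OCR engine a given digitisation project actually runs; it is not the CLARIN/WebLicht toolchain itself, only the scan-to-text step that feeds such chains.

```python
# Minimal Slovene OCR sketch using pytesseract + Pillow.
# Assumes the Tesseract binary and its Slovenian ("slv") language pack are installed;
# the file path is a hypothetical example.
from PIL import Image
import pytesseract

def ocr_slovene_page(image_path: str) -> str:
    """OCR a scanned page in Slovene and return plain UTF-8 text."""
    page = Image.open(image_path)
    return pytesseract.image_to_string(page, lang="slv")

if __name__ == "__main__":
    text = ocr_slovene_page("scans/ljubljanski_zvon_1890_p01.png")  # hypothetical scan
    print(text[:500])  # the plain text can then be passed to concordancers or Voyant
```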
Smart Cities Optimization (SRIP CS&C & Edge-AI)
SRIP CS&C & Edge‑AI point the way for Slovenian cities to turn sensors and cameras into real‑time public value: edge models can cut peak‑hour congestion by 24–41%, smooth energy use by 15–30% and detect water leaks with over 92% accuracy, so planners get faster, local decisions instead of long cloud round‑trips - imagine rush hour easing almost overnight, commutes sliding from a snarled knot to a steady braid of movement.
Edge deployments also sharpen public‑safety and venue screening (embedded vision at the edge has reached ~95% threat‑detection accuracy with sub‑100ms latency), while low average latencies (reported ~4.2 ms in recent studies) and 5G/6G links make responsive traffic signalling, smart grids and waste‑route optimisation practical at municipal scale.
Start with interoperable, open middleware and pilot use cases that map to SRIP CS&C priorities - traffic, energy, safety - and use tested hardware/software stacks and procurement templates from the growing body of edge‑AI practice to manage security, scale and standards across city districts (see Edge AI for smart cities and real‑world case studies for reference).
Metric | Reported value |
---|---|
Peak‑hour traffic reduction | 24–41% (Edge AI) |
Energy optimisation | 15–30% (Edge AI) |
Leak detection accuracy | >92% (edge acoustic sensors) |
Threat detection (embedded vision) | ~95% accuracy, <100ms latency |
Average network latency reported | 4.2 ms |
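As a sketch of the edge-side decision loop (illustrative thresholds and timings, not SRIP CS&C software), a roadside unit can adapt green time locally from detected queue lengths instead of waiting on a cloud round-trip.

```python
# Illustrative edge-side traffic-signal adjustment from locally detected queues.
# Thresholds, timings and the detector interface are hypothetical.
from dataclasses import dataclass

@dataclass
class IntersectionState:
    queue_north_south: int   # vehicles counted by the edge camera/model
    queue_east_west: int

def green_split(state: IntersectionState,
                cycle_seconds: int = 90,
                min_green: int = 15) -> dict:
    """Split one signal cycle's green time proportionally to detected queues."""
    total = max(state.queue_north_south + state.queue_east_west, 1)
    ns_green = int(cycle_seconds * state.queue_north_south / total)
    # Keep every approach above a minimum green so no direction is starved.
    ns_green = min(max(ns_green, min_green), cycle_seconds - min_green)
    return {"north_south_green_s": ns_green,
            "east_west_green_s": cycle_seconds - ns_green}

# Example: a rush-hour queue imbalance handled locally, within the same cycle.
print(green_split(IntersectionState(queue_north_south=28, queue_east_west=9)))
```

Because the decision is computed next to the sensor, the loop stays within the low single-digit-millisecond latencies reported above rather than depending on a cloud round-trip.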
HPC-backed Research & Testing Job Plan (EuroHPC Vega / RIVR VEGA)
Slovenia's EuroHPC-backed Vega cluster at IZUM in Maribor turns petascale compute into practical capacity for government and public‑interest pilots: with roughly 1,020 compute nodes, about 130,560 CPU cores and 240 NVIDIA A100 GPUs delivering a sustained ~6.9 PFLOPS, Vega supports large‑scale AI, HPDA and scientific workloads while granting access to researchers, public organisations and industry partners (the national plan even reserves capacity for companies) - see the detailed technical overview on IZUM's Vega page (IZUM Vega technical specifications).
Running fair, auditable test campaigns depends on predictable scheduling, so the system's Slurm‑based priority policy emphasises Fairshare, age and resource weights to stop any single project from monopolising the machine - practical rules and priority formulae are published in the Vega job priorities guide (Vega job priorities and scheduling guide).
Combined, the hardware, software stack and access policy make Vega an operational backbone for reproducible model testing, cross‑discipline research and legally defensible public‑sector pilots that need both scale and governance (see also the OECD overview of the national EuroHPC initiative for context).
Metric | Value |
---|---|
Compute nodes | ~1,020 |
CPU cores | ~130,560 |
GPU accelerators | 240 × NVIDIA A100 |
Sustained performance | ~6.9 PFLOPS |
Host / location | IZUM, Maribor (EuroHPC / RIVR Vega) |
“The Vega supercomputer will have indirect profound effects on our lives. It will enable scientists to invent new materials and components, it will help them model global phenomena, and develop new medicines and medical therapies…”
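A hedged sketch of how a Slurm-style multifactor priority combines weighted factors such as fairshare and queue age; the weights below are placeholders, and Vega's actual factors, weights and formula are the ones published in its job-priorities guide.

```python
# Simplified sketch of Slurm-style multifactor job priority.
# Factor values are normalised to [0, 1]; the weights are placeholders,
# not Vega's configured PriorityWeight* values.

def job_priority(fairshare: float, age: float, job_size: float,
                 w_fairshare: int = 100_000, w_age: int = 10_000,
                 w_job_size: int = 1_000) -> int:
    """Combine normalised factors into an integer priority; higher runs first."""
    return int(w_fairshare * fairshare + w_age * age + w_job_size * job_size)

# A project that has used little of its share (high fairshare factor) outranks
# a heavy user even when the heavy user's job has waited slightly longer.
print(job_priority(fairshare=0.9, age=0.2, job_size=0.3))   # light user
print(job_priority(fairshare=0.1, age=0.4, job_size=0.3))   # heavy user
```

Weighting fairshare far above age is what prevents any single project from monopolising the machine while still letting long-waiting jobs creep up the queue.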
AI Observability, Monitoring & Incident Response (Riverbed-style AIOps)
Observability and AIOps turn anxious firefighting into predictable, auditable workflows - vital for Slovenia's public services as they scale cloud, legacy and AI stacks; think of a dashboard that spots a degrading payment gateway or a stalled vaccine‑booking pipeline before a single citizen sees a 504.
Practical moves start with OpenTelemetry‑style, vendor‑agnostic telemetry to unify logs, traces and metrics (see Elastic's guidance on OTel for the public sector), layer ML‑driven anomaly detection and automated root‑cause analysis (ScienceLogic's Skylar example shows how RCA can speed diagnosis by orders of magnitude), and close the loop with data observability and lineage so dashboards don't lie when a pipeline drifts (DQLabs documents end‑to‑end data observability for government).
For mission‑critical GenAI pilots and high‑risk services, pair these stacks with hardened monitoring playbooks and threat‑aware incident response (Booz Allen's AI observability briefing outlines the security benefits) so audits, compliance and citizen trust are baked into everyday ops.
Metric / finding | Value |
---|---|
Orgs with AIOps adoption | 82% (Dynatrace) |
Use of six or more observability tools | ~85% (SolarWinds) |
Concern: probabilistic ML limits AIOps value | 98% (Dynatrace) |
Top security worry for gov IT | 60% cite hackers & untrained staff (SolarWinds) |
“Dynatrace is central to our mission to make Pasco County the premier government services provider in the state.”
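A minimal OpenTelemetry tracing sketch in Python showing the vendor-agnostic telemetry this section recommends; the service name, span name and attributes are hypothetical, the exporter writes to the console for illustration, and a real deployment would export to a collector instead.

```python
# Minimal OpenTelemetry tracing sketch: one span around a hypothetical
# booking-pipeline step, exported to the console. Requires the
# opentelemetry-api and opentelemetry-sdk packages.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("vaccine-booking-service")   # hypothetical service name

def check_slot_availability(region: str) -> bool:
    with tracer.start_as_current_span("check-slot-availability") as span:
        span.set_attribute("booking.region", region)        # illustrative attribute key
        available = True                                     # stand-in for the real lookup
        span.set_attribute("booking.slots_available", available)
        return available

check_slot_availability("osrednjeslovenska")
```

Because the instrumentation is vendor-agnostic, the same spans can later be routed to whichever backend an institution procures, which keeps telemetry out of lock-in and inside the audit trail.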
Conclusion: Start small and scale responsibly
Start small and scale responsibly: begin with tight, measurable pilots that use clear prompts (persona, task, context, format) so teams can test outcomes without risking services or trust - think a single office using a prompt to turn a 10‑page manual into a two‑minute public brief, then measuring accuracy, time saved and user feedback before wider rollout.
Use prompt best practices - clarity, contextualisation, positive/negative selection and iteration - from guides like Google Workspace Gemini prompt handbook: writing effective prompts and the government prompt starters in OpenAI Prompt‑Pack for Leaders: government prompt starters to make every query auditable and repeatable.
Pair pilots with simple governance: a one‑page risk checklist, human escalation rules, and observability so drift is visible. Build staff capacity in parallel - practical courses such as Nucamp AI Essentials for Work course: prompt writing and workplace AI skills teach prompt writing and workplace use cases, helping civil servants move from curiosity to confident, accountable use of generative tools.
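A small sketch of the persona-task-context-format structure mentioned above, kept as a record so each pilot query is stored, auditable and repeatable; the example values are illustrative only.

```python
# Sketch: a persona/task/context/format prompt record that can be stored
# alongside a pilot's results so every query is auditable and repeatable.
from dataclasses import dataclass

@dataclass
class PilotPrompt:
    persona: str
    task: str
    context: str
    output_format: str

    def render(self) -> str:
        return (f"Persona: {self.persona}\n"
                f"Task: {self.task}\n"
                f"Context: {self.context}\n"
                f"Format: {self.output_format}\n"
                "Note: the output is a draft for human review, not final policy.")

prompt = PilotPrompt(
    persona="communications officer in a Slovenian municipal office",
    task="summarise the attached 10-page procedures manual for the public",
    context="readers are residents with no legal background; keep Slovene terms for official forms",
    output_format="a two-minute brief: five short paragraphs, plain language",
)
print(prompt.render())
```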
Bootcamp | Length | Early‑bird Cost | Syllabus |
---|---|---|---|
AI Essentials for Work | 15 Weeks | $3,582 | AI Essentials for Work syllabus (Nucamp) |
“Outputs are drafts - not final policy.”
Frequently Asked Questions
What are the top AI prompts and use cases for Slovenia's government?
The article highlights 10 practical, policy‑driven prompts/use cases aligned to Slovenia's National Programme (NpUI): a Public Sector AI Registry for transparency; an AI procurement risk & regulatory compliance checklist; an Explainable Fraud & Tax‑Evasion Detection Assistant; epidemic forecasting and healthcare planning (multi‑model ensembles); the METIS education early‑warning & intervention system; explainable border and passenger screening at Jože Pučnik Airport; Slovene‑language OCR and cultural‑heritage pipelines (CLARIN/Europeana); smart cities optimisation using edge AI; EuroHPC Vega‑backed research and testing jobs; and AI observability/AIOps for monitoring and incident response. These map to priority sectors (public administration, health, Industry 4.0, culture & language) and emphasise explainability, auditability and measurable public value.
How were the Top 10 prompts and use cases selected?
Selection followed a practical, policy‑driven lens: priority was given to scenarios that directly map to NpUI strategic objectives, have governance/regulatory fit, are feasible with Slovenia's infrastructure (HPC and edge capabilities) and support human‑capital uplift. Criteria included alignment with the Slovenia AI Strategy Report and OECD NpUI working‑group guidance, infrastructure readiness (e.g. EuroHPC Vega, edge stacks), and potential for trust, explainability and rapid sandbox‑to‑pilot transition. The approach also considered available funding (the NpUI earmarks around EUR 110 million to 2025) and evidence from national and European pilots.
What procurement and compliance steps should public buyers follow under the EU AI Act and NpUI?
Public procurement should be risk‑first: maintain a full AI inventory and risk classification; include governance roles and an AI lifecycle risk‑management plan in tenders; require detailed technical documentation (training data provenance, audit logs, explainability); mandate human oversight, robustness testing, cybersecurity controls, monitoring and post‑market obligations in contracts. Use model clauses and sectoral procurement templates to turn legal duties into measurable award criteria. Deadlines of note: general‑purpose AI obligations began 2 Aug 2025, while stricter high‑risk rules phase in 2026–27. Non‑compliance exposures include fines up to €35 million or 7% of global turnover.
What infrastructure and tooling in Slovenia support large‑scale AI pilots and reproducible testing?
Slovenia's EuroHPC Vega (hosted at IZUM, Maribor) provides petascale compute for public pilots: roughly 1,020 compute nodes, ~130,560 CPU cores, 240 NVIDIA A100 GPUs and sustained performance around 6.9 PFLOPS, with Slurm scheduling and published priority policies for fairshare. Complementary infrastructure includes edge‑AI stacks for smart cities, CLARIN.SI and Europeana integrations for language and cultural‑heritage pipelines, and observability/AIOps tooling (OpenTelemetry, ML‑driven anomaly detection, data lineage) to ensure reproducible, auditable deployments.
How should government teams start AI pilots while maintaining trust and accountability?
Start small and scale responsibly: run tight, measurable pilots tied to a single use case; craft clear, auditable prompts (persona, task, context, format); measure accuracy, time saved and user feedback before scaling. Pair pilots with simple governance - one‑page risk checklists, human escalation rules, observability/monitoring to detect drift - and invest in staff training (for example, practical courses such as Nucamp's AI Essentials for Work: 15 weeks, early‑bird cost listed in the article). Prioritise explainability, documentation, and iterative improvement so outputs remain drafts for human review rather than final policy.
You may be interested in the following topics as well:
Routine office work won't disappear overnight, but automation of clerical tasks like permits and registrations is already making those roles prime targets.
Explore practical wins from automation for public administration, where form processing and document analysis save time and reduce errors.
Ludo Fourrage
Founder and CEO
Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft where he led innovation in the learning space. As the Senior Director of Digital Learning at this same company, Ludo led the development of the first of its kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful Corporate MOOC programs in partnership with top business schools and consulting organizations, i.e. INSEAD, Wharton, London Business School, and Accenture, to name a few. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.