The Complete Guide to Using AI in the Government Industry in Brazil in 2025
Last Updated: September 6th 2025

Too Long; Didn't Read:
In 2025 Brazil's government AI landscape centres on Bill No. 2,338/2023, ANPD‑coordinated SIA oversight and LGPD alignment. The PBIA brings BRL 23 billion in funding and municipal pilots like Smart Sampa scale real‑world tests, while fines of up to BRL 50 million or 2% of turnover, mandatory AI impact assessments and 15‑week training programmes enforce governance.
Brazil's public sector in 2025 sits at a fast-moving crossroads: a Senate-approved general AI bill (Bill No 2,338/2023) is under review and - alongside the PBIA's BRL23 billion programme - promises clearer rules on transparency, human oversight and high‑risk AI in government, while the ANPD is gearing up to coordinate national oversight and data‑protection alignment (LGPD) for automated decision‑making and algorithmic impact assessments (Bill No 2,338/2023 and the 2025 landscape).
At the same time municipal pilots like São Paulo's Smart Sampa and plans for a hyperscale data‑centre surge (backed by huge infrastructure pledges) underline both opportunity and public concerns about surveillance, bias and procurement risk - so public agencies must pair ambition with strict governance.
For civil servants and project owners, practical, job-focused training can bridge the gap: the AI Essentials for Work bootcamp teaches promptcraft, tool use and applied AI skills for administrators and non‑technical staff (AI Essentials for Work bootcamp), while infrastructure and investment debates continue to shape where cloud and compute capacity land in the country (data‑center boom and policy challenges).
Program | AI Essentials for Work |
---|---|
Description | Practical AI skills for any workplace: tools, prompts, and applied use (no technical background) |
Length | 15 Weeks |
Cost | $3,582 (early bird) / $3,942 afterwards; 18 monthly payments |
Registration | Register for AI Essentials for Work bootcamp |
“Brazil is very well positioned for AI because of its renewable energy, competitive prices, and strong connectivity.” - Fernando Jaeger
Table of Contents
- What type of government does Brazil have in 2025 and why it matters for AI
- What is the new AI law in Brazil? (Bill No 2,338/2023 and 2025 updates)
- Data protection, LGPD and implications for AI in Brazil
- Brazil's digital government strategy and PBIA (2024–28)
- How is AI used across Brazilian government agencies in 2025?
- Procurement, contracting and liability for AI in Brazil
- Operational best practices and risk management for Brazilian agencies
- Sectoral specifics and regulator expectations in Brazil (health, finance, transport, justice)
- Conclusion and practical checklist for government agencies in Brazil (next steps)
- Frequently Asked Questions
Check out next:
Take the first step toward a tech-savvy, AI-powered career with Nucamp's Brazil-based courses.
What type of government does Brazil have in 2025 and why it matters for AI
Brazil in 2025 is a federative presidential republic - the president is both chief of state and head of government and is elected by a two‑round absolute‑majority system - so decisions about AI land in a multilayered political and legal landscape rather than in a single, central office.
That federal structure (the Union, 26 states plus the Federal District and more than 5,500 municipalities) shapes where data, procurement and oversight happen, creating a patchwork of buying rules, regional priorities and interoperability headaches for national AI projects; see the concise government overview at Brazil country profile - globalEDGE for the constitutional basics and election rules.
At the same time Brazil's separation of powers, an independent judiciary and a fragmented, coalition‑driven legislature mean AI policy, liability standards and procurement law must navigate court challenges, regulator mandates and multi‑authority review - factors explored in contemporary public‑administrative analysis - so agencies planning pilots should budget extra time for legal vetting, public consultations and cross‑jurisdiction agreements.
In short: federalism and strong institutional checks make Brazil a place where careful governance, compliance with public procurement law and clear data‑sharing accords are the difference between a scalable AI win and a stalled, litigated pilot; practical planners should treat the country's sprawling municipal map as a strategic reality, not an afterthought.
Feature | Fact (source) |
---|---|
Government type | Federative Republic, Presidential system (Brazil country profile - globalEDGE) |
Head of state & government | President Luiz Inácio Lula da Silva (Brazil country profile - globalEDGE) |
Constitution | 1988 Constitution (Brazil country profile - globalEDGE) |
Subnational units | 26 states + Federal District; >5,500 municipalities (World Bank - Brazil country data) |
What is the new AI law in Brazil? (Bill No 2,338/2023 and 2025 updates)
The headline for government IT teams is clear: Bill No. 2,338/2023 - the Senate‑approved, risk‑based draft AI framework - is moving through Congress and is expected to shape how public agencies buy, deploy and govern AI in 2025; see the Senate approval and bill overview at Artificial Intelligence Act's Brazil page (Brazil Artificial Intelligence Act - Bill No. 2338/2023 overview) and the legislative timing and practical framing in the Global Practice Guide (Chambers Practice Guide - Brazil AI 2025 trends and developments).
The proposal adopts a familiar EU‑style risk taxonomy - from prohibited "excessive‑risk" systems (including illegitimate social scoring and most real‑time biometric ID in public spaces) to high‑risk uses that trigger mandatory algorithmic impact assessments, logging, documentation and tighter governance - and it assigns clear roles to developers, distributors and deployers while aligning obligations with the LGPD. Enforcement is meaningful: the draft empowers the ANPD to coordinate a national AI governance system (SIA) and to require corrective measures or even suspend systems, with fines of up to R$50,000,000 or 2% of turnover for serious violations.
For municipal pilots and federal programmes alike, the practical takeaway is immediate: build risk classification, human oversight and audit trails into procurement and project plans now, because the law turns governance details into compliance checkpoints rather than optional best practices.
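The "build risk classification in now" advice above can be sketched as a procurement intake step. This is an illustrative sketch only: the tier labels, triggering use cases and control names below are simplified assumptions loosely based on the bill's draft taxonomy, not statutory definitions, and real classification always requires legal review.

```python
# Hypothetical risk-tiering helper for an AI procurement intake process.
# Use-case labels and control names are illustrative assumptions, not the
# statutory text of Bill No. 2,338/2023.
from dataclasses import dataclass

EXCESSIVE_RISK_USES = {"social_scoring", "realtime_public_biometric_id"}
HIGH_RISK_USES = {"welfare_eligibility", "predictive_policing", "medical_triage"}

@dataclass
class AISystemIntake:
    name: str
    use_case: str
    processes_personal_data: bool = False

def classify(system: AISystemIntake) -> dict:
    """Map an intake record to a draft risk tier and the controls it triggers."""
    if system.use_case in EXCESSIVE_RISK_USES:
        # Prohibited category: the tender should not proceed at all.
        return {"tier": "excessive", "deployable": False,
                "controls": ["prohibited - do not procure"]}
    if system.use_case in HIGH_RISK_USES:
        controls = ["algorithmic_impact_assessment", "logging", "human_oversight"]
        if system.processes_personal_data:
            controls.append("dpia")  # LGPD-aligned assessment on top of the AIA
        return {"tier": "high", "deployable": True, "controls": controls}
    return {"tier": "standard", "deployable": True,
            "controls": ["transparency_notice"]}

result = classify(AISystemIntake("benefits-triage", "welfare_eligibility",
                                 processes_personal_data=True))
print(result["tier"], result["controls"])
# -> high ['algorithmic_impact_assessment', 'logging', 'human_oversight', 'dpia']
```

The point of a table like this is that the compliance checkpoints (AIA, DPIA, logging) become machine-checkable prerequisites in the procurement workflow rather than optional best practices.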
Item | Summary (source) |
---|---|
Senate approval | Approved 10 Dec 2024 (moves to Chamber of Deputies) (Brazil Artificial Intelligence Act - Bill No. 2338/2023 (Senate approval)) |
Regulator | ANPD to coordinate the National System for AI Governance (SIA) (Chambers Practice Guide - Brazil AI 2025: ANPD and SIA regulator role) |
Penalties | Fines up to R$50,000,000 or 2% of group revenue; corrective powers include suspension/restriction (Brazil Artificial Intelligence Act - penalties and enforcement details) |
Data protection, LGPD and implications for AI in Brazil
Data protection sits at the heart of any government AI programme in Brazil: the LGPD applies across the whole lifecycle of generative and predictive systems, so agencies must treat data lawfulness, transparency and minimisation as built‑in features, not add‑ons.
Brazil's ANPD has framed this clearly in its ANPD Preliminary Study on Generative AI and data protection (FPF), which stresses that publicly available material used for training still falls under the LGPD, and that careful pre‑processing (anonymisation, purpose limits and only collecting what's necessary) plus clear user information are essential.
The same practical lesson came through in the ANPD's Meta proceedings: heavy reliance on "legitimate interests" without meaningful transparency or simple opt‑outs can trigger suspension of processing, especially where minors or sensitive data are involved (ANPD Meta case takeaways on AI training and data protection (FPF)).
For public sector pilots that means documented lawful bases, algorithmic impact or DPIA‑style assessments for high‑risk systems, robust logging and human oversight, and realistic remediation plans - because once personal information is baked into model parameters it cannot be plucked out like a file from a cabinet; mitigation and "unlearning" are the pragmatic path.
In short: design for transparency, bake in necessity and minimise data from day one, and be prepared for the ANPD to expect clear chains of responsibility and user‑facing remedies before projects scale.
Issue | Practical takeaway (source) |
---|---|
Web scraping & training data | Public data still covered by LGPD; use lawful basis, anonymise and document processing (ANPD Preliminary Study on Generative AI and data protection (FPF)) |
Transparency & opt-outs | Clear, accessible notices and simple opt‑out mechanisms are required; failure can lead to suspension (ANPD Meta case takeaways on AI training and data protection) |
Deletion & model unlearning | Right to delete exists but full model deletion is impractical - focus on mitigation, retraining or unlearning strategies (Chambers Practice Guide: Artificial Intelligence 2025 - Brazil trends and developments) |
DPIAs / impact assessments | Required for high‑risk systems; document risks, mitigation and governance before deployment (ANPD guidance) |
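The pre‑processing the ANPD study calls for (minimisation, purpose limits, careful handling of identifiers) can be sketched as a simple intake filter. The field names, allow‑list and salted‑hash scheme below are illustrative assumptions, not an ANPD‑mandated design; note that salted hashing is pseudonymisation, not anonymisation, so hashed records remain personal data under the LGPD.

```python
# Hypothetical pre-processing filter run before records enter a training set:
# drop fields that are not necessary (minimisation) and pseudonymise direct
# identifiers with a salted one-way hash. Field names are assumptions.
import hashlib

ALLOWED_FIELDS = {"municipality", "service_type", "wait_days"}  # collect only what's needed
IDENTIFIER_FIELDS = {"cpf", "email"}                            # pseudonymise, never store raw

def prepare_record(raw: dict, salt: str) -> dict:
    out = {}
    for key, value in raw.items():
        if key in IDENTIFIER_FIELDS:
            # Salted SHA-256 lets records be linked without keeping the identifier;
            # this is still personal data under the LGPD (pseudonymisation).
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key + "_pseudo"] = digest[:16]
        elif key in ALLOWED_FIELDS:
            out[key] = value
        # Everything else (e.g. names) is dropped entirely.
    return out

record = {"cpf": "123.456.789-00", "email": "x@y.br", "name": "Ana",
          "municipality": "São Paulo", "service_type": "vaccine_booking",
          "wait_days": 3}
clean = prepare_record(record, salt="rotate-this-salt")
print(sorted(clean))
# -> ['cpf_pseudo', 'email_pseudo', 'municipality', 'service_type', 'wait_days']
```

Documenting this step (what was dropped, what was hashed, under which lawful basis) is exactly the kind of processing record the ANPD expects to see alongside a DPIA.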
Brazil's digital government strategy and PBIA (2024–28)
Brazil's digital government playbook in 2024–28 links a clear national strategy with active local experimentation: the E‑Digital roadmap (2022–2026) frames infrastructure, inclusion and trust while OGP‑driven platforms like Fala.BR and Participa + Brasil have already scaled participatory channels and open data commitments (Brazil E‑Digital Roadmap 2022–2026 national digital government strategy; Brazil Open Government Partnership OGP commitments and open government journey).
That policy backbone is producing practical outcomes - the gov.br portal and digital IDs now connect services at national scale and municipal GovTech pilots (from multichannel vaccine booking to participatory budgeting) show how interoperability and citizen‑centric design reduce friction in real life (Brazil digital public infrastructure and GovTech development for socio-economic progress).
Layered on this, the PBIA (2024–28) narrative has pushed funding and governance attention toward digital public infrastructure and GovTech pilots, so agencies should treat procurement, data standards and user journeys as core components of any AI rollout - otherwise the promise of faster, fairer services will stall at legacy silos rather than reach the 150 million people already on the national portal.
"CADE has stood out for innovative solutions in consumer-centred services, with significant gains in speed, accessibility, and security." - Ms Bruna Cardoso
How is AI used across Brazilian government agencies in 2025?
Across Brazil in 2025, AI is woven into the everyday machinery of government: federal, state and municipal agencies use predictive analytics and automation to speed tax and social‑welfare casework, deploy chatbots and virtual assistants on the gov.br portal to handle front‑line citizen queries, and run predictive maintenance and data‑driven monitoring for infrastructure and health planning; courts and justice bodies are experimenting with research and document‑automation tools (and the STF has piloted systems such as Victor, Rafa 2030, Vitória and Maria), while police and city governments test biometric and facial‑recognition systems - most visibly São Paulo's Smart Sampa, whose tender now includes a 90% match threshold and trained human reviewers after privacy pushback during Carnival 2025.
These adoptions are reinforced by national programmes and funding flows that aim to scale GovTech, digital public infrastructure and PBIA‑backed R&D, yet agencies still prioritise human oversight, algorithmic impact assessments and transparent procurement to manage bias, liability and LGPD risks; see the practical regulatory framing in the Chamber's Brazil AI practice guide and case studies of interoperable, citizen‑facing platforms in Brazil's GovTech rollout at the World Economic Forum, and note why predictive analytics and cybersecurity top the public‑sector use cases highlighted in broader government productivity research.
“Technology alone won't unlock productivity in government,” said Jennifer Robinson.
Procurement, contracting and liability for AI in Brazil
Public‑sector buyers must treat AI procurement as risk management first and technology second: lessons from the World Economic Forum's AI Procurement in a Box pilots - notably São Paulo Metrô's predictive‑maintenance tender for a network serving 3.7 million passengers a day and Hospital das Clínicas' data‑lake roadmap - show that clear scope, stakeholder engagement and early Algorithmic Impact Assessments turn complex bids into manageable contracts (World Economic Forum AI Procurement in a Box case studies).
Contracts should therefore bake in performance warranties, measurable accuracy parameters, human‑in‑the‑loop obligations, audit and explainability rights, data‑provenance guarantees and incident‑notification clauses; require vendors to warrant lawful data use and provide indemnities for IP or LGPD breaches; and allocate responsibility for bias testing, ongoing monitoring and remediation.
That commercial scaffolding matters because liability in Brazil sits on dual rails - negligence under the Civil Code and strict product/service liability under the Consumer Defence Code - while draft national rules (Bill No. 2,338/2023) and ANPD oversight create regulatory exposure, including suspension powers and fines of up to BRL 50 million or 2% of turnover, so procurement teams should also insist on contractual insurance and clear post‑termination data and IP clauses.
For a practical procurement playbook that links these legal levers with modern buying processes and the 2021 procurement reform, see guidance on Brazil's procurement law and digital contracting trends (Chambers AI practice guide: Brazil 2025 - procurement law and digital contracting trends) and recent commentary on procurement modernisation (Analysis of Law No. 14.133/2021 and AI-powered procurement).
Operational best practices and risk management for Brazilian agencies
Operational best practices for Brazilian agencies start with treating every AI purchase as a risk-management exercise: map systems into the risk taxonomy used by Bill No. 2,338/2023, require Algorithmic Impact Assessments and DPIA-style documentation for high‑risk tools, and bake LGPD‑aligned data‑provenance and logging requirements into contracts so regulatory exposure is contractual exposure too (see the practical procurement playbook and legal framing in the Chambers 2025 Brazil AI practice guide - Brazil trends and developments).
Operational controls should include clear human‑in‑the‑loop thresholds, continuous monitoring and bias‑testing regimes, incident‑response runbooks and mandated vendor audit rights; the World Economic Forum's "AI Procurement in a Box" pilots show how early stakeholder engagement, scoped performance metrics and pre‑defined remediation clauses turn complex tenders (metro predictive maintenance, hospital data lake) into enforceable outcomes (World Economic Forum - AI Procurement in a Box: Brazil case studies).
Equally essential is data hygiene: audits of INSS automation reveal that CNIS inconsistencies led to more automatic denials - even infamous six‑minute rejections - so agencies must invest in data quality, transparency to affected users and accessible appeal channels before scaling automation (Policy Review - INSS automation and public‑interest AI analysis).
Finally, establish an internal AI governance committee, publish simple user‑facing notices, mandate retraining/unlearning strategies where deletion is impossible, and fund targeted civil‑servant training so systems deliver faster services without trading away rights or trust.
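A human‑in‑the‑loop threshold gate of the kind described above (and echoed by Smart Sampa's 90% match threshold with trained reviewers) can be sketched as follows. The threshold value, queue names and data shapes are illustrative assumptions, not the tender's actual specification.

```python
# Hypothetical routing gate for a biometric-match pipeline: no automated action
# is ever taken on a match. High-scoring candidates go to a trained human
# reviewer; everything below threshold is discarded. Values are illustrative.
from dataclasses import dataclass

MATCH_THRESHOLD = 0.90  # assumed reviewer-escalation threshold

@dataclass
class MatchCandidate:
    camera_id: str
    score: float  # model similarity score in [0, 1]

def route(candidate: MatchCandidate) -> str:
    """Return the destination for a candidate match; the model only ever
    escalates to humans, it never confirms an identity on its own."""
    if candidate.score >= MATCH_THRESHOLD:
        return "human_review_queue"
    return "discard"

print(route(MatchCandidate("cam-017", 0.93)))  # -> human_review_queue
print(route(MatchCandidate("cam-017", 0.71)))  # -> discard
```

In practice the threshold itself, reviewer decisions and discard counts should all be logged, since those records are what an audit or an Algorithmic Impact Assessment will ask for.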
“Ongoing business transformation is the top priority for Brazilian enterprise leaders.” - Bob Krohn
Sectoral specifics and regulator expectations in Brazil (health, finance, transport, justice)
Sectoral regulation in Brazil in 2025 is already highly practical rather than theoretical: healthcare is the clearest example - ANVISA's SaMD rules (RDC 657/2022 and the broader RDC 751/2022 framework) mean any diagnostic or treatment software faces class-based notification/registration, clinical validation, visible in‑software labelling and prompt change reporting, while the LGPD treats health data as sensitive and Bill No. 2,338/2023 forces preliminary risk assessments and algorithmic impact work for systems that may affect physical integrity (IBA analysis of AI in healthcare in Brazil).
Professional regulators are stepping in too: the Federal Pharmacy Council's Resolution 10/24 authorises pharmacists to use and even technically oversee AI-driven digital therapeutics so long as clinical safety, interoperability and patient‑facing guidance are ensured (CFF Resolution 10/24 on pharmacists and digital therapeutics in Brazil).
Expectations go beyond pre‑market clearance: ANVISA requires robust validation, cybersecurity, traceability and explicit user instructions for SaMD, and firms must notify significant algorithmic changes - an operational detail that turns model updates into regulatory filings (ANVISA SaMD guidance on software-as-a-medical-device validation).
At the same time the CNJ has already laid down judiciary governance parameters and the national AI bill and ANPD's SIA can designate new high‑risk scenarios, so transport, finance and justice projects should treat impact assessments, human oversight and post‑market monitoring as standard features of any pilot rather than optional extras - imagine an AI update that must be paused while a regulator reviews a clinical validation dossier: that pause is the new reality, and planning for it is non‑negotiable.
Sector | Regulator expectations (source) |
---|---|
Health (SaMD) | Class-based notification/registration, clinical validation, in‑software labelling, change reporting (ANVISA RDC 657/751) |
Pharmacy / Digital Therapeutics | Pharmacists may use/oversee AI if clinical safety, interoperability and patient guidance are ensured (CFF Resolution 10/24) |
Judiciary & other sectors | CNJ governance parameters; Bill No. 2,338/2023 + ANPD SIA require AI risk assessments, AIA/DPIA and human oversight for high‑risk systems |
Conclusion and practical checklist for government agencies in Brazil (next steps)
Bottom line for Brazilian agencies: treat 2025 as the year to move from ambition to disciplined delivery - classify every system against the risk taxonomy in Bill No. 2,338/2023, bake LGPD‑aligned DPIAs and data‑provenance logging into procurements, and insist on human‑in‑the‑loop thresholds, vendor warranties, audit rights and incident‑reporting clauses before any pilot scales (the Chambers practice guide outlines the legal frame and investment scale to watch, including PBIA's BRL 23 billion programme and BRL 13 billion of expected AI investment in 2025: Chambers AI 2025 Brazil practice guide).
Use practical procurement templates such as the World Economic Forum's “AI Procurement in a Box” to convert policy into enforceable contracts (World Economic Forum AI Procurement in a Box), and plan for the operational reality that model updates may need clinical‑style validation or regulatory pauses - build update‑controls and unlearning/retraining routes from day one.
Finally, close the skills gap with targeted, job‑focused courses: the AI Essentials for Work bootcamp (15 weeks) teaches promptcraft, tool use and applied AI skills for non‑technical public servants and is a pragmatic next step to make governance stick (Register for Nucamp AI Essentials for Work).
Checklist item | Action / resource |
---|---|
Risk classification & DPIA | Apply Bill No. 2,338/2023 taxonomy and document Algorithmic Impact Assessments (Chambers AI 2025 Brazil practice guide) |
Responsible procurement | Use World Economic Forum's AI Procurement in a Box templates to require warranties, audit rights and remediation (World Economic Forum AI Procurement in a Box) |
Staff capacity | Train civil servants with the AI Essentials for Work bootcamp - 15 weeks; register: Register for Nucamp AI Essentials for Work |
Frequently Asked Questions
What is Bill No. 2,338/2023 and how will it affect government AI projects in Brazil in 2025?
Bill No. 2,338/2023 is a Senate-approved, risk-based AI framework moving through Congress that adopts an EU-style taxonomy (prohibited, excessive-risk and high-risk uses). For public agencies it means mandatory risk classification, algorithmic impact assessments for high-risk systems, strong requirements for human oversight, logging and documentation, and new regulator powers: the ANPD will coordinate a National System for AI Governance (SIA) with corrective measures including suspension and fines (up to R$50,000,000 or 2% of group revenue). Practically, procurement, project design and compliance work must build these controls in from day one.
How does the LGPD and ANPD guidance affect data used to train AI systems in the public sector?
The LGPD applies across the full lifecycle of AI used by government, including publicly available or scraped data. Agencies must document lawful bases, apply data minimisation and anonymisation where possible, provide clear user-facing notices and opt-outs, and run DPIA-style assessments for high-risk processing. The ANPD has signalled that heavy reliance on broad legitimate-interest arguments without transparency can trigger suspension; deletion of data from trained models is impractical, so plans should focus on mitigation, retraining or unlearning strategies and robust logging and accountability.
What procurement and contracting safeguards should public buyers require when procuring AI?
Treat AI procurement as risk management: require clear scope and risk classification, measurable performance and accuracy metrics, human-in-the-loop obligations, audit and explainability rights, vendor warranties on lawful data use and provenance, incident-notification and remediation clauses, indemnities and contractual insurance, and post-termination data/IP handling. Insist on contractual rights to run bias testing and ongoing monitoring, and make Algorithmic Impact Assessments/DPIAs and LGPD-aligned logging contractual prerequisites.
Which government use cases and sectoral regulatory expectations are most relevant for 2025?
Common public-sector AI uses in 2025 include chatbots and virtual assistants on gov.br, predictive analytics for tax and welfare casework, predictive maintenance for transport infrastructure, health software as a medical device (SaMD) and targeted biometric pilots like São Paulo's Smart Sampa. Sector regulators impose specific rules: ANVISA requires class-based notification/registration, clinical validation, in-software labelling and change reporting for SaMD; professional councils add clinical or professional oversight obligations; the CNJ and ANPD expect AIAs, human oversight and post-market monitoring in justice and other sectors.
What practical steps should agencies take now to deploy AI safely and compliantly?
Follow a checklist: classify systems under the bill's taxonomy and run Algorithmic Impact Assessments/DPIAs; bake LGPD-compliant data-provenance, minimisation and logging into projects; require human-in-the-loop thresholds, vendor warranties, audit rights and incident-runbooks in contracts; prioritise data hygiene and appeals for affected users; set up an internal AI governance committee and continuous monitoring/bias-testing regimes; plan for regulatory pauses on major updates; and close skills gaps with targeted training such as a 15-week AI Essentials for Work bootcamp for non-technical civil servants.
You may be interested in the following topics as well:
Get actionable advice with practical starter steps for AI projects in Brazil aimed at beginners launching low-risk pilots.
Deliver 24/7 support with empathetic, accurate responses by deploying Citizen service virtual assistants that integrate knowledge bases and case systems.
See how data science pathways for routine analysts can turn vulnerability into career growth.
Ludo Fourrage
Founder and CEO
Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space. As Senior Director of Digital Learning, he led the development of the first-of-its-kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful corporate MOOC programs in partnership with top business schools and consulting organizations, including INSEAD, Wharton, London Business School, and Accenture. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.