The Complete Guide to Using AI as a Legal Professional in Brazil in 2025

By Ludo Fourrage

Last Updated: September 5th 2025

Too Long; Didn't Read:

By 2025, Brazilian lawyers must master AI: Bill No. 2,338/2023 (Senate‑approved, pending the Chamber of Deputies and presidential signature) mandates risk classification, Algorithmic Impact Assessments and LGPD/ANPD duties, with fines up to R$50,000,000 or 2% of group turnover; PBIA commits BRL 23 billion. Run DPIAs, follow OAB/CNJ ethics guidance and consider 15 weeks of practical training.

For Brazilian lawyers in 2025, mastering AI is no longer optional: the Federal Senate approved Bill No. 2,338/2023 (Brazil's proposed AI Regulation) in December 2024, but its final text and timetable remain uncertain, so practitioners must navigate a shifting, risk‑based regime while complying today with the LGPD, OAB ethical guidance and CNJ transparency rules; see the White & Case Brazil AI regulation tracker for the bill's status and expected obligations.

Expect mandatory preliminary risk classification, algorithmic impact assessments for high‑risk systems and meaningful governance duties - plus enforcement teeth (fines and operational restrictions) - meaning routine tasks like legal research, drafting and client intake will need supervised, auditable AI workflows.

To build practical, workplace-ready skills - prompt design, tool selection and compliance-minded use - consider training such as the 15‑week AI Essentials for Work program (practical courses and the Nucamp AI Essentials for Work syllabus) while monitoring the evolving legal framework in the Chambers Brazil AI regulatory guide.

Bootcamp details:

  • AI Essentials for Work: 15 weeks; practical AI skills for any workplace; courses: AI at Work: Foundations, Writing AI Prompts, Job Based Practical AI Skills; early bird $3,582, regular $3,942; syllabus: Nucamp AI Essentials for Work syllabus

Table of Contents

  • What is the new AI law in Brazil? (Bill No 2,338/2023) - 2025 update
  • What is the Brazilian strategy for artificial intelligence? (EBIA & PBIA) in Brazil
  • How is AI used in Brazil? Sector examples and trends relevant to lawyers in Brazil
  • Current legal and regulatory landscape in Brazil (LGPD and sector rules)
  • Professional ethics and practice rules for lawyers using AI in Brazil (OAB & CNJ guidance)
  • Data protection, generative AI and practical LGPD compliance in Brazil
  • Risk classification, Algorithmic Impact Assessments and procurement checklist for AI in Brazil
  • Liability, enforcement, litigation readiness and insurance for AI use in Brazil
  • How to become an AI expert in 2025 - practical steps for legal professionals in Brazil & Conclusion
  • Frequently Asked Questions

What is the new AI law in Brazil? (Bill No 2,338/2023) - 2025 update


Bill No. 2,338/2023 - Brazil's proposed, risk‑based AI law approved by the Federal Senate on 10 December 2024 - is the game‑changer lawyers must watch: it still needs a Chamber of Deputies vote and the president's signature, so the final text and timetable remain uncertain, but the draft already sets mandatory preliminary risk classification by developers/deployers, algorithmic impact assessments for high‑risk and systemic general‑purpose systems, and wide territorial coverage that reaches foreign providers operating in Brazil; see the White & Case Brazil AI regulatory tracker for details.

The bill would prohibit excessive‑risk uses (think manipulative social scoring and most real‑time remote biometric identification), impose extra governance, logging and explainability duties for high‑risk systems (education, employment, healthcare, public security, critical infrastructure), and place ANPD at the centre of national AI governance with enforcement powers that include naming violations, ordering reclassification, and even suspending or banning systems - penalties can reach R$50,000,000 or 2% of group turnover.

Practical takeaway for practitioners: treat the draft as real compliance work today (risk matrices, DPIAs/AI impact assessments, supplier warranties and human‑in‑the‑loop rules) while tracking evolving commentary such as the clear practitioner guide at Brazil AI Act practitioner guide, because the regime combines LGPD obligations with fresh AI‑specific duties that allow regulators, in effect, to flip the switch on a deployed model.


Bill No. 2,338/2023 at a glance:

  • Status: Senate approved (10 Dec 2024); pending House vote and presidential approval
  • Scope: broad territorial and sectoral scope - applies to development, deployment and use
  • Risk approach: excessive‑risk uses prohibited; high‑risk uses listed; preliminary risk classification required
  • Regulator & enforcement: ANPD coordinates the SIA; powers include fines, publicisation and suspension of systems
  • Maximum penalty: up to R$50,000,000 or 2% of group revenue


What is the Brazilian strategy for artificial intelligence? (EBIA & PBIA) in Brazil


Brazil's AI strategy blends a broad, principle‑driven national roadmap (the Estratégia Brasileira de Inteligência Artificial, EBIA) with a heavily funded implementation plan (the PBIA/Plano Brasileiro de Inteligência Artificial 2024–28), so lawyers should read the two as complementary. EBIA sets ethical guardrails, multi‑stakeholder governance and 74 strategic actions to steer research, inclusion and international cooperation (Brazilian AI Strategy (EBIA) overview – OECD.AI), while PBIA commits BRL 23 billion (about USD 4 billion) to infrastructure, talent programs, public‑private innovation hubs and a Santos Dumont supercomputer to anchor Portuguese‑language models and national capacity (PBIA “AI for the Good of All” plan announcement – Brazilian Government). Together they push for regulatory improvement, sandboxes and exportable standards, but they also raise real questions about digital sovereignty and cloud dependency that can affect public procurement, data‑localisation clauses and counsel's vendor‑risk checklists (see the Chambers Artificial Intelligence 2025: Brazil practice guide for practical compliance pointers).

The practical takeaway for legal teams: treat EBIA's principles and PBIA's investments as the new baseline for advising clients on AI procurement, IP strategy and LGPD‑aligned governance - imagine a courtroom filing that cites a government supercomputer project as evidence of national capacity, not just policy rhetoric; that concreteness is where strategy becomes litigation and contract leverage.

Key instruments:

  • EBIA (2021): national AI strategy - ethical principles, research, capacity building, 74 strategic actions (guidance & governance)
  • PBIA (2024–28): BRL 23 billion (~USD 4 billion) for infrastructure, talent, innovation hubs, supercomputing, regulatory improvements
  • Main concerns: digital sovereignty and growing cloud/data‑centre dependence affecting public procurement and data policy

“AI for the Good of All.”

How is AI used in Brazil? Sector examples and trends relevant to lawyers in Brazil


AI is already woven into Brazil's healthcare and legal workflows in ways that matter to counsel: hospitals and startups use AI for administrative automation, triage and diagnostics (reading electronic records and imaging to generate clinical findings), drug‑discovery models, virtual health assistants and clinical decision support, which raises privacy, bias and misinformation risks under the LGPD and demands heightened safeguards for health data as sensitive information - see the Lefosse overview of AI in healthcare for the regulatory frame and practical risks.

Medical‑grade software (SaMD) now sits squarely under ANVISA's gaze via RDC 657/2022, so lawyers advising developers or hospitals must map product classification (Classes I–IV), labeling and cybersecurity obligations, post‑approval change reporting and clinical evidence requirements described in ANVISA guidance.

At the same time, commercial teams and law firms are automating intake, summarisation and contract drafting with workplace AI (for example, practice management AI like MyCase IQ), which creates procurement, vendor‑warranty and oversight work (DPIAs/AI impact assessments) that mirrors the duties in Bill No. 2,338/2023; practical legal advice now blends regulatory submission strategy, procurement clauses for model updates and human‑in‑the‑loop safeguards - imagine a SaMD that drafts an MRI report overnight: the client‑facing question isn't whether it works, but who bears the recall, reporting and liability when it changes clinical care.

Sector snapshots:

  • Healthcare / MedTech - common uses: diagnosis & imaging reports, triage, drug discovery, virtual assistants, clinical decision support; key legal touchpoints: Lefosse (IBANET) overview of AI in healthcare and regulatory risks in Brazil; ANVISA SaMD rules (RDC 657/2022)
  • Software as a Medical Device (SaMD) - common uses: regulated apps, embedded algorithms, interoperability and cybersecurity; key legal touchpoints: ANVISA RDC 657/2022 regulatory analysis and SaMD guidance; Class I–IV registration/notification regimes
  • Law firms / Legal ops - common uses: document summarisation, intake automation, contract drafting tools; key legal touchpoints: Nucamp AI Essentials for Work bootcamp (practical AI skills for the workplace); LGPD implications for client data


Current legal and regulatory landscape in Brazil (LGPD and sector rules)


Brazil's data‑protection framework is the practical starting point for any lawyer advising on AI: the LGPD (in force since 2020) applies broadly - including to processing that happens in Brazil or targets people in Brazil - and the Autoridade Nacional de Proteção de Dados (ANPD) has teeth, inspection powers and an active regulatory agenda that now squarely addresses generative AI; see a concise LGPD overview at DLA Piper Brazil LGPD data protection guide.

Recent enforcement drama crystallises the stakes: ANPD's July 2024 preventive suspension of Meta's use of “publicly available” data for AI training flagged that relying on legitimate interests without clear, user‑friendly opt‑outs, robust transparency and child‑protective measures is perilous - the agency later allowed a partial resume but kept strict limits on minors' data (details and timeline at the FPF analysis of ANPD's Meta case and timeline).

Complementing enforcement, ANPD's Preliminary Study on Generative AI maps practical obligations for developers and deployers - web‑scraped “public” data still triggers LGPD duties, anonymisation and necessity principles matter, and risks like model inversion or persistent prompt data complicate deletion and transparency obligations (see the study FPF summary of ANPD Preliminary Study on Generative AI).

The upshot for counsel: document processing chains, be ready for DPIAs, tighten breach and opt‑out flows, and draft contracts that allocate responsibility across the “chain of agents” so a regulator can't flip a switch and leave a client exposed.

“The data subject shall have the right not to be subject to a decision based solely on automated processing […] which produces legal effects concerning him or her or similarly significantly affects him or her.”

Professional ethics and practice rules for lawyers using AI in Brazil (OAB & CNJ guidance)


Ethics for Brazil's bar now treats AI like any other powerful tool that still demands the same professional duties: the OAB's recommendations (published in late 2024) require lawyers to supervise AI‑generated outputs, safeguard client confidentiality when feeding data into models, and keep human oversight and professional responsibility front‑and‑centre, while CNJ resolutions (e.g., Res. 332/2020 and Res. 615/2025) reinforce transparency, accountability and the duty to explain automated steps used in courts and public filings; see the OAB guidance summary at IDS: Brazilian Bar Association AI recommendations and the Chambers Practice Guide: AI trends and developments in Brazil (2025) for how these rules fit into the LGPD and practice standards.

Practical implications are concrete: document the chain of AI decisions, run DPIAs or impact assessments where models touch sensitive data, include vendor warranties and audit rights in procurement, and never present an AI draft to a client or tribunal without documented review - because courts and regulators now expect a named lawyer to be able to explain why an AI output was relied on.

Treat supervision, logging and client‑consent workflows as routine file hygiene, not optional add‑ons, and build checklists so a quick audit won't turn into a reputational crisis.

Core professional duties by issuing body:

  • OAB AI recommendations (IDS summary): supervise AI outputs; ensure confidentiality when using AI; comply with the OAB Code of Ethics
  • CNJ resolutions and the Chambers practice guide on AI in Brazil (2025): promote transparency, human oversight and explainability for AI in judicial processes


Data protection, generative AI and practical LGPD compliance in Brazil


Generative AI raises practical LGPD questions that lawyers must treat as compliance work: Brazil's LGPD applies extraterritorially to processing that targets people in Brazil and demands a lawful basis, data‑minimisation and transparency, so teams should map whether training data relied on consent, a defensible legitimate interest (with a documented balancing test) or another ground and keep processing records as required - see the concise DLA Piper Brazil LGPD guide for the basics (DLA Piper Brazil LGPD guide (LGPD basics for Brazil)).

The ANPD's Preliminary Study on Generative AI emphasises that “publicly available” scraped data still falls within LGPD duties, flags model‑inversion and membership‑inference risks, and stresses necessity, anonymisation and explainability across the GenAI lifecycle, so practitioners should build DPIAs, logging and prompt‑data controls into procurement and vendor clauses (FPF summary: ANPD preliminary study on generative AI and LGPD).

The Meta case proved enforcement is real: ANPD issued a preventive suspension in July 2024 over opaque AI‑training practices and weak opt‑out paths, and regulators expect special care for minors and sensitive data; practically speaking, document processing chains, plan deletion and remedial steps (breach notifications within three business days), and allocate responsibility across the “chain of agents” so a regulator can't flip a switch and leave the client exposed.

Practical steps and why they matter:

  • Map lawful basis and keep records - DLA Piper Brazil LGPD guide (lawful basis & recordkeeping)
  • Run DPIAs and document GenAI processing chains - FPF summary: ANPD preliminary study on generative AI (DPIA guidance)
  • Apply special safeguards for children and sensitive data - FPF analysis: ANPD Meta case on AI training data (children & sensitive data)
  • Prepare breach response (notify ANPD and data subjects within 3 business days) - DLA Piper Brazil LGPD guide (breach notification requirements)
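As a purely illustrative sketch (not legal advice), the three‑business‑day notification window can be tracked with a small deadline helper. The business‑day logic below skips weekends only; Brazilian public holidays are intentionally left out and a real compliance workflow would need to account for them.

```python
from datetime import date, timedelta

def notification_deadline(awareness_date: date, business_days: int = 3) -> date:
    """Return the last date to notify ANPD and data subjects, counting
    business days from the day the incident became known.
    Sketch only: weekends are skipped, Brazilian holidays are not."""
    d = awareness_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday (0) through Friday (4)
            remaining -= 1
    return d

# Awareness on Friday 2025-09-05: the three business days are
# Mon 09-08, Tue 09-09, Wed 09-10, so the deadline is 2025-09-10.
print(notification_deadline(date(2025, 9, 5)))
```

A helper like this is useful mainly as part of an incident‑response playbook, where the computed date feeds a calendared escalation rather than replacing counsel's judgment on when "awareness" actually began.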

“The data subject shall have the right not to be subject to a decision based solely on automated processing […] which produces legal effects concerning him or her or similarly significantly affects him or her.”

Risk classification, Algorithmic Impact Assessments and procurement checklist for AI in Brazil


Risk classification is the first compliance gate in Brazil's emerging regime: developers and deployers must run a documented preliminary risk classification (excessive‑risk, high‑risk or other) and, for high‑risk or systemic general‑purpose systems, prepare iterative Algorithmic Impact Assessments (AIAs) before market entry and throughout the lifecycle - the assessments must address foreseeable harms, benefits, testing results, bias mitigation, monitoring and a residual‑risk rationale and are expected to be produced by qualified, independent teams (see the White & Case AI regulatory tracker for Brazil at White & Case Brazil AI regulatory tracker and the bill overview at Brazil AI Act bill overview - ArtificialIntelligenceAct.com).

Procurement must therefore treat AI like regulated kit: require full model documentation and provenance, LGPD‑aligned lawful‑basis warranties and DPIAs, bias and robustness test results, audit rights and logging, clear human‑in‑the‑loop obligations, incident‑reporting SLAs, security/anti‑poisoning guarantees and explicit liability allocation - a public register of high‑risk systems is also envisaged so expect transparency obligations to travel with the contract (practical checklist guidance: Securiti practical checklist for Brazil AI regulation and law).

The practical "so what": regulatory powers can compel reclassification, mitigation or even suspension of a live system, with fines up to R$50 million or 2% of turnover, so contracts and AIAs must let a buyer and counsel respond overnight rather than learn of non‑compliance in the morning.

Procurement checklist and why each item matters:

  • Preliminary risk classification - required by law to determine obligations (White & Case)
  • Algorithmic Impact Assessment (AIA) - mandatory for high‑risk/systemic systems; iterative and partly public (ArtificialIntelligenceAct.com / Securiti)
  • Model & data provenance - supports LGPD compliance, bias testing and audit rights (Chambers / Securiti)
  • Audit, logging & explainability clauses - needed for governance, regulatory inspection and reclassification risk (White & Case)
  • Incident reporting & remediation SLA - enables timely response to security incidents and regulator notifications (Securiti)
  • Liability, indemnities & exit rights - allocates risk given strict liability for high‑risk systems and heavy fines (ArtificialIntelligenceAct.com)
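The preliminary classification gate described above can be pictured as a simple triage, sketched below purely for illustration. The category lists paraphrase the draft bill's tiers as summarised in this article (social scoring and real‑time biometric identification as excessive risk; education, employment, healthcare, public security and critical infrastructure as high risk); they are not an authoritative legal classification, and any real assessment needs qualified review of the bill's final text.

```python
# Illustrative triage sketch only - category keywords paraphrase the
# draft bill as summarised in this article; not legal advice.
EXCESSIVE_RISK = {"social scoring", "real-time biometric identification"}
HIGH_RISK = {"education", "employment", "healthcare",
             "public security", "critical infrastructure"}

def preliminary_classification(use_case: str) -> str:
    """Map a deployment's use case to the draft bill's three-tier scheme:
    excessive risk (prohibited), high risk (AIA required), or other."""
    case = use_case.strip().lower()
    if case in EXCESSIVE_RISK:
        return "excessive risk - prohibited"
    if case in HIGH_RISK:
        return "high risk - Algorithmic Impact Assessment required"
    return "other - document the classification rationale"

print(preliminary_classification("employment"))
# high risk - Algorithmic Impact Assessment required
```

In practice the output of such a triage would be recorded in the procurement file alongside the rationale, since the documented classification itself is the compliance artifact regulators would ask for.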

Liability, enforcement, litigation readiness and insurance for AI use in Brazil


Liability in Brazil's AI landscape is no abstraction: the draft national AI law and sectoral practice now impose a protective, risk‑based civil regime that can make suppliers and operators strictly liable for harms caused by high‑risk systems, while non‑high‑risk cases shift evidentiary burdens toward victims - so lawyers must treat every deployment as potentially litigable (see the bill analysis at Access Partnership analysis of Brazil's AI bill).

Regulators already have real teeth (ANPD and sector agencies coordinate oversight) and courts are reshaping platform exposure after the STF's June 26, 2025 ruling that allows liability without a prior court order for certain serious content, plus new duties like a local legal representative and transparency reporting that increase operational exposure (coverage in Courthouse News coverage of the STF platform liability ruling).

Practical readiness means more than good contracts: maintain up‑to‑date Algorithmic Impact Assessments and DPIAs, preserve model provenance and logs, build incident‑reporting playbooks, require supplier warranties, audit rights and human‑in‑the‑loop clauses, and rehearse rapid responses because enforcement can include fines, naming and shaming, or even suspension of systems - penalties under the proposed regime and related frameworks can reach BRL 50 million or 2% of turnover (summary at Chambers AI 2025: Brazil trends and developments).

Picture a weekend call: a regulator demands proof of an AIA and a mitigation plan for a live model - being able to produce those records and trigger contractual remedies that same hour is what separates manageable risk from an urgent compliance crisis.

Liability triggers at a glance:

  • High‑risk system harm - supplier or operator strictly liable; enforcement: fines, suspension, prohibition; up to BRL 50M or 2% of turnover
  • Non‑high‑risk harm - liability may be attributed with an evidentiary burden shift toward the operator; enforcement: civil claims under the Civil Code / Consumer Protection Code
  • Serious illicit content on platforms - platforms liable for systemic failure or lack of moderation; liability without a prior court order, plus local legal representative and transparency duties

“The Supreme Court took something that was simple and direct, and that maybe deserved review by lawmakers, and turned it into something complex and even hard to grasp.”

How to become an AI expert in 2025 - practical steps for legal professionals in Brazil & Conclusion


Becoming an AI expert in Brazil in 2025 means blending data‑protection literacy, practical technical skills and courtroom‑ready documentation: start by mastering the LGPD's core principles (see primers like the IAPP “Understanding the LGPD”) and study the ANPD's generative‑AI framework (summarised in the FPF review of the ANPD Preliminary Study) alongside academic assessments of platform transparency such as the FGV study on generative AI platforms - these resources show why lawyers must be able to map lawful bases, run DPIAs and produce Algorithmic Impact Assessments (AIAs) that ANPD and courts will expect.

Pair that regulatory grounding with hands‑on training (prompt design, model‑provenance checks, vendor audit clauses and incident playbooks): practical courses such as the Nucamp AI Essentials for Work teach workplace prompt skills, tool selection and DPIA workflows so legal teams can convert abstract duties into contract clauses and audit evidence.

Build multidisciplinary fluency by practicing with legal‑specific tools (e.g., document summarisation and matter‑intake AIs), pursue targeted certifications (foundation LGPD training like the LGPDF course), and rehearse rapid responses so an ANPD inspection or enforcement call can be answered with logs, AIA records and a mitigation plan - because in Brazil today, readiness is what separates advisory from liability management.

Bootcamp key facts:

  • AI Essentials for Work: 15 weeks; courses: AI at Work: Foundations, Writing AI Prompts, Job Based Practical AI Skills; early bird $3,582, regular $3,942; syllabus: Nucamp AI Essentials for Work syllabus

“Brazilian advocacy is being challenged by the advancement of AI, and the OAB is ready and prepared to handle these transformations.”

Frequently Asked Questions


What is Bill No. 2,338/2023 and what does it mean for lawyers in Brazil in 2025?

Bill No. 2,338/2023 is Brazil's proposed risk‑based AI law, approved by the Federal Senate on 10 December 2024 and still pending a vote in the Chamber of Deputies and presidential signature. The draft requires preliminary risk classification (excessive‑risk, high‑risk, other), mandatory Algorithmic Impact Assessments (AIAs) for high‑risk and certain systemic general‑purpose systems, and enhanced governance, logging and explainability duties. ANPD is central to enforcement with powers to name violations, order reclassification or suspend systems. Penalties in the draft can reach R$50,000,000 or 2% of group turnover. Practical takeaway for lawyers: treat the draft as operational compliance work today - run risk matrices, prepare DPIAs/AIAs, negotiate supplier warranties and human‑in‑the‑loop controls, and keep monitoring the bill's final text and timetable.

How does the LGPD and ANPD enforcement affect use of generative AI and data practices?

The LGPD applies broadly, including extraterritorially to processing that targets people in Brazil. ANPD enforcement has already shown it will scrutinize AI training and transparency (for example, the July 2024 preventive suspension of Meta's use of publicly available data). Key obligations include documenting lawful basis for processing, data‑minimisation, transparency and special safeguards for minors and sensitive data. ANPD's Preliminary Study on Generative AI confirms that web‑scraped “public” data still triggers LGPD duties and highlights model‑inversion, membership‑inference and deletion challenges. Practically, legal teams must map lawful bases, run DPIAs, keep processing records, build prompt‑data and logging controls, prepare breach response (notification duties, typically within three business days), and allocate responsibilities across the chain of agents.

What professional ethics and practice rules must lawyers follow when using AI in Brazil?

OAB guidance (late 2024) and CNJ resolutions require lawyers to supervise AI outputs, protect client confidentiality when using AI, and retain human oversight and responsibility for work product. CNJ rules (e.g., Res. 332/2020 and Res. 615/2025) stress transparency and explainability where automated steps are used in judicial processes. In practice, lawyers should document AI decision chains, run DPIAs where models touch sensitive data, include vendor warranties and audit rights in procurement, and never present AI‑generated drafts to clients or courts without documented review and a named responsible lawyer able to explain reliance on the model.

What procurement and technical controls should legal teams require when buying or deploying AI?

Treat AI procurement like regulated kit: require a documented preliminary risk classification and iterative AIAs for high‑risk or systemic systems; demand model and data provenance, bias and robustness test results, DPIAs, audit and logging rights, explainability documentation, human‑in‑the‑loop obligations, incident‑reporting SLAs, security/anti‑poisoning guarantees, and clear liability/indemnity and exit rights. Contracts should enable rapid access to AIAs, logs and mitigation plans because regulators can compel reclassification, mitigation or suspension of live systems and impose fines up to R$50M or 2% of turnover.

How can legal professionals build practical AI skills and prepare for litigation or enforcement in 2025?

Combine regulatory literacy (LGPD, ANPD generative‑AI guidance, EBIA/PBIA strategic context) with hands‑on skills: learn prompt design, tool selection, DPIA/AIA workflows, model‑provenance checks and incident playbooks. Maintain up‑to‑date AIAs, logs and provenance records, rehearse rapid response playbooks for regulator queries, and require supplier audit rights. Practical training options include Nucamp's AI Essentials for Work - a 15‑week, practical cohort covering AI at Work: Foundations, Writing AI Prompts and Job‑Based Practical AI Skills (early bird $3,582; regular $3,942). Ongoing monitoring of law, agency guidance and sector rules (ANVISA for SaMD, CNJ/OAB for court practice) completes preparedness.

You may be interested in the following topics as well:

  • Write defensible briefs that link every assertion to its source using Clearbrief, ideal for ANPD transparency and court-ready provenance.

  • Adopt AI + human workflows to gain efficiency while maintaining professional responsibility and quality control.

  • Reduce hallucination risk by following the RAG-friendly verification and citation steps included with each legal research prompt.


Ludo Fourrage

Founder and CEO

Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space. As Senior Director of Digital Learning there, he led the development of a first‑of‑its‑kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful corporate MOOC programs in partnership with top business schools and consulting organizations, including INSEAD, Wharton, London Business School, and Accenture. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.