The Complete Guide to Using AI in the Government Industry in Washington in 2025
Last Updated: August 31, 2025

Too Long; Didn't Read:
Washington, DC agencies in 2025 must align AI projects with six values (benefit, safety, accountability, transparency, sustainability, privacy), follow Mayor's Order timelines, use NIST AI RMF 2.0, and plan pilots, procurement guardrails, and workforce training; the Stargate project pledges $500B for AI infrastructure.
Washington, DC newcomers to public service should pay attention: federal guidance now treats AI as a practical toolkit for improving mission delivery, not a mysterious tech buzzword - the GSA AI Guide for Government (printable) offers a clear, evolving playbook on organization, responsible deployment, and workforce strategy.
Practical primers like the Code for America AI in Government cheat sheet decode terms (LLMs, entity resolution, supervised learning) and show how simple pilots - think using entity resolution to untangle messy records - can free staff for higher‑value work (Code for America AI in Government cheat sheet).
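To make the entity-resolution idea concrete, here is a minimal sketch of the kind of pilot described above. The records, names, and similarity threshold are hypothetical illustrations, not an agency dataset or an official method; it uses only Python's standard library (`difflib`) for fuzzy matching.

```python
from difflib import SequenceMatcher

def normalize(record):
    """Lowercase and strip punctuation/extra whitespace so trivial
    formatting differences don't block a match."""
    name = record["name"].lower().replace(".", "").replace(",", "")
    return " ".join(name.split())

def same_entity(a, b, threshold=0.85):
    """Heuristic: two records likely refer to the same person if their
    normalized names are at least `threshold` similar."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

# Hypothetical messy records of the kind a pilot might untangle
records = [
    {"id": 1, "name": "Jane Q. Doe"},
    {"id": 2, "name": "jane q doe"},
    {"id": 3, "name": "John Smith"},
]

# Greedily group records into clusters of likely-identical entities
clusters = []
for rec in records:
    for cluster in clusters:
        if same_entity(rec, cluster[0]):
            cluster.append(rec)
            break
    else:
        clusters.append([rec])

print([[r["id"] for r in c] for c in clusters])  # → [[1, 2], [3]]
```

A real pilot would add blocking on addresses or dates of birth and keep a human reviewer on ambiguous matches, but even this toy version shows why deduplicated records free staff from manual reconciliation.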
For beginners who want hands‑on skills, a short, applied course like Nucamp's AI Essentials for Work registration teaches prompt writing, tool use, and real workplace projects so DC teams can move from cautious curiosity to responsible experimentation without needing a PhD (Nucamp AI Essentials for Work registration).
Attribute | Information |
---|---|
Description | Gain practical AI skills for any workplace; learn AI tools, effective prompts, and apply AI across business functions. |
Length | 15 Weeks |
Cost | $3,582 (early bird), $3,942 afterwards; 18 monthly payments available |
Courses included | AI at Work: Foundations; Writing AI Prompts; Job Based Practical AI Skills |
Syllabus | Nucamp AI Essentials for Work syllabus |
Register | Register for Nucamp AI Essentials for Work |
Table of Contents
- What is the DC government AI policy? Washington, DC's AI Values and Orders
- What is the U.S. AI regulation in 2025? Federal landscape explained
- What is the AI Act 2025? Proposed laws and Congress activity
- Standards and frameworks agencies use in Washington, DC and the U.S.
- Procurement, contracting, and data use for DC government AI projects
- IP, copyright, and legal risks for AI in Washington, DC and the United States
- Workforce, training, and community engagement in Washington, DC government AI
- AI industry outlook for 2025: What Washington, DC stakeholders should expect
- Conclusion: Getting started with AI in the Washington, DC government in 2025
- Frequently Asked Questions
Check out next:
Upgrade your career skills in AI, prompting, and automation at Nucamp's Washington location.
What is the DC government AI policy? Washington, DC's AI Values and Orders
Washington, D.C.'s policy moves the conversation from “can we use AI?” to “how should we use it?” by requiring agencies to verify alignment with six clear AI Values before any deployment: Clear Benefit to the People; Safety & Equity; Accountability; Transparency; Sustainability; and Privacy & Cybersecurity - details and reporting tools live in Mayor's Order 2024‑028 and the District's AI Values and Strategic Plan (Mayor's Order 2024‑028 and AI Values).
The Order doesn't leave agencies to guess: it mandates documented alignment reviews, agency‑specific AI strategic plans, privacy/cybersecurity review processes and an AI procurement handbook on fixed timelines, and it creates two governance bodies - a public Advisory Group on AI Values Alignment and an internal AI Taskforce to vet technical questions and coordinate benchmarks through 2026.
Practical safeguards are baked in too: label AI‑generated content, preserve human accountability, and test tools in controlled environments (the strategy even includes a SCIF “testing sandbox” for secure experimentation) (DC's AI testing sandbox and SCIF), so agencies must show measurable benefits to residents, not speculative promises, before a single model goes live.
AI Value | What the Order Requires |
---|---|
Clear Benefit to the People | Document who benefits and alternatives considered |
Safety & Equity | Assess and mitigate risks that could cause harm or worsen inequities |
Accountability | Preserve human oversight; plan for testing and validation |
Transparency | Disclose AI use and avoid unattended release of AI outputs |
Sustainability | Consider environmental impact and workforce effects |
Privacy & Cybersecurity | Conduct privacy and security reviews before deployment |
“We are going to make sure DC is at the forefront of the work to use AI to deliver city services that are responsive, efficient, and proactive. With these guiding values, we will make sure that when we use AI, we are responsible and use it in a way that aligns with our DC values.” - Mayor Bowser
What is the U.S. AI regulation in 2025? Federal landscape explained
In 2025 the federal landscape is moving fast and will directly shape how Washington, D.C. agencies adopt AI: Executive Order 14179 set the tempo for a deregulatory, innovation-first playbook and OMB's implementing memoranda (M‑25‑21 and M‑25‑22) turn that playbook into concrete steps - expect Chief AI Officers, formal governance boards, mandatory procurement guardrails, tighter data‑rights clauses, and explicit guidance to avoid vendor lock‑in when buying AI (OMB memos M‑25‑21 and M‑25‑22 implementing Executive Order 14179).
The White House's July AI Action Plan and three linked Executive Orders amplify three national priorities - accelerate AI innovation, build secure AI infrastructure (data centers and permitting), and lean into international AI diplomacy - while adding procurement rules that will ripple through the vendor market (White House AI Action Plan and Executive Orders overview).
Agencies using “high‑impact” AI face minimum risk‑management practices (pre‑deployment testing, documented impact assessments, ongoing monitoring, human oversight) and deadlines for governance and reporting, so District teams should map local AI Values and Mayor's Order requirements to these federal timelines and contract terms to avoid surprises when procuring or operating AI (Federal memorandum on requirements for high‑impact AI used by agencies); think of it as treating AI projects like mission‑critical IT upgrades - CAIOs, sandboxes, and testbeds are now part of the job.
Executive Order | Primary Federal Focus |
---|---|
Preventing Woke AI in the Federal Government | Procurement standards for ideological neutrality and truthfulness in federal LLM buys |
Promoting the Export of the American AI Technology Stack | Coordinated effort to export U.S. AI hardware/software packages |
Accelerating Federal Permitting of Data Center Infrastructure | Streamlined permitting and incentives for data center buildout |
The “Preventing Woke AI” order singles out DEI, describing it as “one of the most pervasive and destructive” ideologies in AI contexts.
What is the AI Act 2025? Proposed laws and Congress activity
Congress did not produce a single, comprehensive AI Act in 2025, but several competing blueprints and the White House's pro‑innovation push have shaped what a federal law might look like - and why the District should watch closely.
Lawmakers circulated risk‑based proposals (the SAFE Innovation Framework, a Bipartisan U.S. AI Act draft, and a National AI Commission proposal) that range from transparency and licensing regimes to a congressional commission charged with crafting a national risk framework, while the White House's AI Action Plan presses a deregulatory, infrastructure‑first agenda and even signals withholding discretionary funds from states with burdensome AI rules (a move that could affect local initiatives) (see the University of Chicago comparison of U.S. proposals to the EU AI Act and the Jenner client alert on state AI regulation).
With Congress divided and federal bills slow to land, states have filed a flood of local measures - more than a thousand AI bills in early 2025 - so for DC officials the immediate questions are practical: which federal guardrails (if any) will preempt city rules, how procurement and funding guidance from the White House will affect DC grants, and whether a future U.S. AI law will mirror the EU's risk‑based model or favor the U.S. emphasis on innovation and national security.
Proposal | Core feature |
---|---|
SAFE Innovation Framework | Guiding principles: security, accountability, foundations, explainability, innovation |
Bipartisan U.S. AI Act draft | Transparency, consumer protections, possible licensing and oversight body |
National AI Commission Act (CREATE/NAIRR‑adjacent) | 20‑member commission to develop a risk‑based regulatory roadmap and reports |
Standards and frameworks agencies use in Washington, DC and the U.S.
Washington, D.C. agencies should anchor AI programs in the National Institute of Standards and Technology's work: NIST's AI Risk Management Framework (updated as AI RMF 2.0) has become the go‑to playbook for mapping, measuring, managing and governing risks across the AI lifecycle, and companion releases in 2024 give practical, testable steps for generative models and secure development.
Key NIST outputs - the Generative AI Profile and secure software development practices (SP 800‑218A), plus tooling like the Dioptra security testbed and the ARIA model‑evaluation pilot - turn abstract principles into activities (red‑teaming, field testing, provenance and measurement) that agencies can adopt before buying or deploying systems; see WilmerHale's summary of the July 26, 2024 releases for the testbed and guidelines (WilmerHale summary of NIST risk mitigation guidelines and Dioptra testbed) and Diligent's plain‑language guide to why AI RMF 2.0 is effectively a government‑aligned “gold standard” for practical governance (Diligent guide to the NIST AI Risk Management Framework).
The guidance is voluntary but influential - treat it as the checklist that makes pilots auditable, vendors accountable, and red‑team exercises feel less like theory and more like infrastructure protection.
NIST Output | Purpose / Note |
---|---|
AI RMF 2.0 (Feb 2024) | Lifecycle risk management framework - map, measure, manage, govern |
Generative AI Profile (GAI) | Profiles GAI‑specific risks (confabulation, data leakage, etc.) |
SP 800‑218A (SSDPs) | Secure software development practices for GAI and foundation models |
Dioptra testbed | Security testbed for red‑teaming and measuring AI response to attacks |
ARIA / ARIA 0.1 | Model evaluation program (model testing, red‑teaming, field testing) |
Plan for Global Engagement (AI 100‑5) | Roadmap for international standards and terminology alignment |
Procurement, contracting, and data use for DC government AI projects
Buying AI in the District now starts long before a purchase order: Mayor's Order 2024‑028 forces agencies to prove an AI project's “clear benefit to the people,” complete privacy and cybersecurity reviews, and follow OCTO's procurement playbook (including the mandatory AI procurement handbook and agency alignment reports) before a tool is deployed (Mayor's Order 2024‑028: DC AI Values and Strategic Plan).
Practically, that means solicitations should demand documented alignment with DC's values, clear data‑use rules, human‑in‑the‑loop obligations, and exit or “decommissioning” terms to guard against surprise price hikes or vendor lock‑in - federal procurement trends are already pressing for expanded government rights over models, datasets, and localization requirements, so city contracts must be drafted with those pressures in mind (Government contracting and AI trends 2025: rights, datasets, and localization).
Procurement teams should modernize workflows (digital procurement platforms, automated bid evaluation, supplier‑diversity and sustainability scoring) and bake in continuous monitoring and red‑team testing as conditions of award; DC's pilot with MIT GOV/LAB and Stanford DEL for deliberation.io shows how procurement can pair careful public testing with community input before scaling (OCTO deliberation.io pilot with MIT GOV/LAB and Stanford DEL and public listening plan).
The upshot: treat AI buys like mission‑critical IT upgrades - insist on auditable data rights, privacy controls, cybersecurity attestations, and contract terms that protect residents if a vendor changes price, policy, or access.
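The contract terms listed above lend themselves to a simple pre-award checklist. The sketch below is a hypothetical illustration of that idea - the clause names and the `missing_clauses` helper are invented for this example, not an OCTO artifact.

```python
# Hypothetical pre-award checklist for a DC AI solicitation, based on the
# contract terms discussed above; clause identifiers are illustrative only.
REQUIRED_CLAUSES = {
    "values_alignment",       # documented alignment with DC's AI Values
    "data_use_rules",         # clear limits on how resident data may be used
    "human_in_the_loop",      # human oversight obligations
    "decommissioning",        # exit terms guarding against lock-in and price hikes
    "continuous_monitoring",  # monitoring and red-team testing as award conditions
}

def missing_clauses(solicitation):
    """Return, sorted, the required clauses a draft solicitation still lacks."""
    return sorted(REQUIRED_CLAUSES - set(solicitation.get("clauses", [])))

draft = {"title": "Chatbot pilot", "clauses": ["values_alignment", "data_use_rules"]}
print(missing_clauses(draft))
# → ['continuous_monitoring', 'decommissioning', 'human_in_the_loop']
```

Encoding the checklist as data rather than prose makes it easy to run against every draft solicitation before it goes out, so no AI buy reaches award with a guardrail missing.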
IP, copyright, and legal risks for AI in Washington, DC and the United States
Intellectual‑property and legal risk management are now core parts of any DC agency AI plan: the U.S. Patent and Trademark Office has sharpened subject‑matter eligibility rules for AI inventions (effective July 17, 2024) and separate inventorship guidance that explains when a human contributor is required for a patent, so teams drafting procurements or collaborating with innovators should demand clear inventor attribution and claims that show concrete technical improvements rather than abstract data processing (USPTO AI subject‑matter eligibility guidance (July 17, 2024) and related inventorship guidance).
Practitioners must also treat AI‑assisted filings and vendor tools with the same duty of candor, signature verification, and confidentiality that apply to any legal submission - oversharing training data or client files with third‑party models can trigger privacy, export‑control, or even national‑security complications, so procurement language should lock down data flows and audit rights (USPTO guidance on using AI-based tools for legal filings and procurement).
On copyright, the Copyright Office's multipart study (Parts 1–3, with Part 2 on output copyrightability and a pre‑publication Part 3 on training) maps how generative outputs and training sets are being evaluated - an essential read for DC teams deciding what materials can be used to train models or published as agency content (U.S. Copyright Office study: Copyright and Artificial Intelligence).
Bottom line for District leaders: bake inventorship, patent‑eligibility tests, confidentiality safeguards, and clear training‑data rules into contracts up front - even one stray upload to a public chatbot can undo privacy protections and procurement gains.
“The USPTO remains committed to fostering and protecting innovation in critical and emerging technologies, including AI,” said Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO.
Workforce, training, and community engagement in Washington, DC government AI
Washington, D.C. is treating AI workforce development as an operational priority, not an afterthought: Mayor's Order 2024‑028 directed the Department of Human Resources and the Department of Employment Services to deliver an integrated recruitment and workforce development plan and comprehensive staff training materials by August 8, 2024, while OCTO and the Advisory Group are running public listening sessions and trainings to connect residents and practitioners to policy and practice (see the DC AI Values and Strategic Plan).
City efforts are paired with practical learning pathways - local pilots like the DC Public Library's free AI Upskilling Cohort (cohorts of five to seven adults, workshops and hands‑on projects at the Martin Luther King Jr. Memorial Library) give residents portfolio‑building, career‑relevant experience this summer, and tailored options such as the Public Health Foundation's AI workshops help agencies embed safe, problem‑solving uses of AI into everyday public‑service work (DC Public Library AI Upskilling Cohort details and enrollment, Public Health Foundation AI technical assistance and training information).
The result: small, practical learning loops - think a five‑person cohort that finishes a real project and hands a vetted prompt library to an agency team - help retain staff, reduce fear of automation, and make the “so what?” tangible: faster, audited services that still keep humans in the loop.
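The “vetted prompt library” a cohort hands off can be as simple as a small structured file pairing each reusable prompt with review metadata. The sketch below is hypothetical - the prompt text, reviewer name, and `get_prompt` helper are invented for illustration, not a District artifact.

```python
# A hypothetical vetted prompt library: each entry pairs a reusable prompt
# with review metadata so an agency team can audit who approved it and when.
prompt_library = [
    {
        "id": "summarize-constituent-letter",
        "prompt": ("Summarize the following constituent letter in three bullet "
                   "points, preserving any specific requests or deadlines:\n\n"
                   "{letter_text}"),
        "reviewed_by": "Agency AI review team (example)",
        "approved": "2025-08-01",
        "notes": "Output must be labeled as AI-generated and checked by staff.",
    },
]

def get_prompt(library, prompt_id, **fields):
    """Look up an approved prompt by id and fill in its placeholders."""
    for entry in library:
        if entry["id"] == prompt_id:
            return entry["prompt"].format(**fields)
    raise KeyError(f"No approved prompt with id {prompt_id!r}")

filled = get_prompt(prompt_library, "summarize-constituent-letter",
                    letter_text="Please fix the pothole on 14th St by Friday.")
print(filled)
```

Because staff can only retrieve prompts that exist in the library, the approval metadata travels with every use - a lightweight way to keep the human-accountability and labeling requirements visible in daily work.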
Program / Benchmark | Detail |
---|---|
Workforce plan & training materials | Due Aug 8, 2024 - integrated recruitment and comprehensive training for District staff |
AIVA public listening sessions | Sept 14, 2024 (virtual) and Sept 25, 2024 (Marion Barry Building) |
DC Public Library cohort | Pilot starts early Aug 2025; weekly in‑person sessions at MLK Jr. Memorial Library through Oct 2025 |
“AI literacy is the next essential skill people need to succeed in today's workforce, and this cohort delivers training in a way that works for busy adults.” - Chelsea Kirkland, DC Public Library
AI industry outlook for 2025: What Washington, DC stakeholders should expect
Washington, D.C. stakeholders should plan for an industry shakeup driven by huge private investment in AI infrastructure: the Stargate Project - announced as a new company led by SoftBank and OpenAI with Oracle and MGX among initial backers - promises up to $500 billion over four years (with $100 billion to start) to build U.S. AI data centers and related power and compute capacity, beginning with a campus in Abilene, Texas (Stargate Project official press release).
That scale matters for the District because it will reverberate through procurement, workforce planning, and energy siting - expect tougher competition for GPUs, skilled construction and operations talent, and sharper scrutiny on grid impacts and sustainability as analysts flag supply‑chain and power constraints that could crowd out other projects (DatacenterDynamics analysis of Stargate project risks and industry implications).
Regulators and agency buyers in DC should watch near‑term capacity numbers (OpenAI and partners are targeting multi‑gigawatt builds, with reports of 5+ GW under development and a 10 GW goal), prepare contract language that guards against vendor lock‑in, and link workforce training to real construction and operations jobs - if Stargate or similar builds scale, the “so what” is concrete: entire regional grids and vendor markets will feel the pull of AI demand, and city teams that pair procurement discipline with targeted upskilling will capture both the benefits and the jobs.
Attribute | Detail (from reporting) |
---|---|
Total investment | $500 billion over four years (Stargate announcement) |
Initial deployment | $100 billion immediately |
Lead partners | SoftBank (financial), OpenAI (operational), Oracle, MGX |
Tech partners / suppliers | Arm, Microsoft, NVIDIA, Oracle |
Initial site | Abilene, Texas (Stargate I) |
Capacity goals | Committed goal ~10 GW; reporting notes 5+ GW under development / 4.5 GW additions |
Jobs impact | OpenAI/partners claim hundreds of thousands; reporting cites 100,000+ jobs from early phases |
“This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefit for the entire world.” - OpenAI / Stargate announcement
Conclusion: Getting started with AI in the Washington, DC government in 2025
Getting started with AI in the District means pairing bold curiosity with the guardrails DC has already written into law: map every pilot to DC's AI Values and Strategic Plan so projects demonstrate a clear benefit to residents, preserve human accountability, and pass OCTO's privacy and cybersecurity reviews (DC AI Values and Strategic Plan).
Use the Mayor's Order milestones and the newly formed AI Taskforce as a roadmap - start with small, accountable pilots that deliver measurable service improvements, involve the Advisory Group's public listening sessions, and treat procurement like a mission‑critical IT upgrade so contracts lock down data rights and decommissioning terms (see the Mayor's press release outlining these actions) (Mayor's press release on DC's AI Values).
For teams and individual staffers who need practical skills now, short applied learning - such as a 15‑week AI Essentials for Work course - turns policy into practice: think a five‑person cohort that finishes a real project and hands an audited prompt library to an agency team, lowering fear of automation and producing faster, more equitable services; explore options and register to build those real‑world skills (Register for Nucamp AI Essentials for Work).
Attribute | Information |
---|---|
Description | Gain practical AI skills for any workplace; learn AI tools, effective prompts, and apply AI across business functions. |
Length | 15 Weeks |
Courses included | AI at Work: Foundations; Writing AI Prompts; Job Based Practical AI Skills |
Cost | $3,582 (early bird), $3,942 afterwards; 18 monthly payments available |
Syllabus | Nucamp AI Essentials for Work syllabus |
Register | Register for Nucamp AI Essentials for Work |
Frequently Asked Questions
What does Washington, D.C.'s AI policy require for city agency deployments in 2025?
Mayor's Order 2024-028 requires agencies to demonstrate alignment with six AI Values (Clear Benefit to the People; Safety & Equity; Accountability; Transparency; Sustainability; Privacy & Cybersecurity) before deployment. Agencies must perform documented alignment reviews, privacy and cybersecurity assessments, create agency-specific AI strategic plans, follow the AI procurement handbook, label AI-generated content, preserve human oversight, test tools in controlled environments (including a secure testing sandbox), and report on measurable benefits to residents.
How should District teams map local requirements to federal AI guidance and procurement rules?
District teams should treat AI projects like mission-critical IT upgrades: align Mayor's Order milestones with federal timelines and OMB/Executive Order guidance (e.g., Chief AI Officers, governance boards, pre-deployment testing, documented impact assessments, monitoring, human oversight). Solicitations must demand documented values alignment, data-use restrictions, human-in-the-loop obligations, auditable data rights, cybersecurity attestations, and decommissioning/exit clauses to avoid vendor lock-in and comply with evolving federal procurement guardrails.
Which standards and tools should Washington agencies use to govern and test AI systems?
Agencies should anchor programs on NIST outputs - AI RMF 2.0 for lifecycle risk management, the Generative AI Profile for GAI-specific risks, SP 800-218A secure software development practices, and testing tools like the Dioptra security testbed and ARIA model-evaluation pilots. These voluntary but widely accepted resources provide testable steps (red-teaming, field testing, provenance, measurement) to make pilots auditable and vendors accountable.
What legal, IP, and data risks must DC agencies manage when using AI?
Agencies must manage patent inventorship and subject-matter eligibility per USPTO guidance, lock down training-data rules and confidentiality to avoid privacy, export-control, or national-security issues, and follow Copyright Office guidance on output and training set copyrightability. Contract language should require inventor attribution, confidentiality safeguards, restricted data flows, audit rights, and explicit limits on using agency or resident data with third-party models.
How can Washington agencies build workforce capacity and run responsible pilots practically in 2025?
Follow the District's workforce plan and training materials (due Aug 8, 2024), run small applied cohorts and pilots (e.g., five-person cohorts that deliver audited prompt libraries), partner with local upskilling programs (DC Public Library, Public Health Foundation), use OCTO and the Advisory Group for public listening and testing, and require pilots to demonstrate measurable service improvements while preserving human accountability and passing privacy/cybersecurity reviews.
You may be interested in the following topics as well:
Discover how Mayor's Order 2024-028 and AI values are setting the standards for ethical, cost-saving AI across Washington, D.C.
Explore how DMS question answering across iManage, SharePoint, and Drive surfaces context-aware answers with source citations.
As permit backlogs balloon in major cities, permits and licensing specialists face workflow automation that could cut processing times dramatically.
Ludo Fourrage
Founder and CEO
Ludovic (Ludo) Fourrage is an education industry veteran, named a Learning Technology Leader by Training Magazine in 2017. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space; as Senior Director of Digital Learning there, he led the development of a first-of-its-kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful corporate MOOC programs in partnership with top business schools and consulting organizations, including INSEAD, Wharton, London Business School, and Accenture. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.