The Complete Guide to Using AI in the Government Industry in Chicago in 2025
Last Updated: August 16, 2025

Too Long; Didn't Read:
Chicago should align local AI policy with the White House's July 2025 America's AI Action Plan to retain federal grants. Publish an automated‑decision‑system (ADS) inventory, run one measurable pilot, use GSA USAi tests, and train staff - 15‑week bootcamps cost about $3,582.
Chicago government leaders should pay attention to the White House's July 2025 “America's AI Action Plan” because it links rapid federal AI adoption, streamlined data‑center permitting, and federal procurement rules to state regulatory climates - meaning Illinois agencies could face new eligibility questions for federal grants and contracts if local rules diverge from federal priorities; see the White House summary for the Plan and a legal breakdown of procurement and funding impacts.
Fast‑tracked permitting for large AI data centers and new “ideological neutrality” procurement tests for LLMs can reshape local infrastructure, vendors, and workforce needs, so practical upskilling - such as Nucamp's 15‑week AI Essentials for Work bootcamp - offers a quick path for staff to manage vendor reviews, procurement compliance, and pilot deployments.
White House AI Action Plan: summary of America's AI Action Plan (July 2025), Legal analysis of federal procurement and state impacts (Orrick), Nucamp AI Essentials for Work bootcamp registration.
Program | Length | Early Bird Cost | Registration |
---|---|---|---|
AI Essentials for Work | 15 Weeks | $3,582 | Nucamp AI Essentials for Work bootcamp registration |
“America's AI Action Plan charts a decisive course to cement U.S. dominance in artificial intelligence.”
Table of Contents
- What is AI regulation in the US in 2025? A Chicago-focused summary
- Common state-level requirements and how they affect Chicago agencies
- What is AI used for in 2025? Government use cases in Chicago
- Federal resources and guidance for Chicago agencies: GSA AI Guide and more
- How to start with AI in 2025: practical steps for Chicago government beginners
- Procurement, legal, and vendor considerations for Chicago agencies
- Workforce, education, and equity in Chicago's AI government rollout
- Testing, evaluation, and governance: building trustworthy AI in Chicago government
- Conclusion: The future of AI in Chicago government - opportunities and next steps
- Frequently Asked Questions
What is AI regulation in the US in 2025? A Chicago-focused summary
In 2025 the federal approach to AI centers on the White House's “America's AI Action Plan,” which prioritizes three pillars - Accelerating Innovation, Building American AI Infrastructure, and International Diplomacy & Security - and pushes a deregulatory, procurement‑driven strategy that directly affects Illinois agencies. OMB and other agencies have been directed to weigh a state's AI regulatory climate when awarding discretionary AI funding, the Administration is updating procurement rules for LLMs to require “ideologically neutral” systems, and permitting for large data centers is being fast‑tracked to support domestic infrastructure (White House America's AI Action Plan (July 2025) - official announcement).
The Plan instructs NIST and federal agencies to revise voluntary guidance (including removing references to DEI from the NIST AI RMF) and signals that the FCC and OMB may scrutinize state AI laws for conflicts with federal policy - so Chicago should expect a mix of federal incentives for rapid adoption, stronger export/permitting support for AI infrastructure, and potential funding consequences if local rules are deemed “burdensome” (Seyfarth legal analysis of the AI Action Plan and executive orders - implications for employment and procurement).
At the same time, state legislatures remain active - NCSL's tracker shows Illinois has multiple AI bills under consideration (state AI governance, meaningful human review, video‑interview rules) - so Chicago leaders must align procurement, transparency, and impact‑assessment practices now to preserve federal funding options and avoid vendor surprises (NCSL 2025 state AI legislation tracker - Illinois AI bills and summaries); the practical takeaway: update agency AI inventories and grant‑application narratives this year to show consistency with federal priorities or risk losing competitive federal support for pilots and infrastructure.
Common state-level requirements and how they affect Chicago agencies
Illinois agencies should plan for the same state‑level requirements now surfacing across the country - public inventories of automated decision systems, documented impact assessments for high‑risk tools, clear disclosure when applicants or employees interact with AI, and statutory “meaningful human review” clauses - because Illinois lawmakers have introduced multiple bills that map directly to those categories (disclosure, oversight, impact assessment, and employment protections); see the NCSL 2025 state AI legislation tracker for Illinois for bill details and categories.
Practically, Chicago departments will need to update procurement templates to require vendor‑supplied impact assessments, add an ADS inventory entry for every pilot, and include human‑in‑the‑loop certification language while balancing limited public‑records exemptions for sensitive admin/technical data (S.2640).
A concrete milestone to target this year: publish an agency ADS inventory and a one‑page impact‑assessment checklist before issuing any AI RFPs - doing so protects residents and preserves eligibility for grants tied to transparency or auditability.
For staffing and retraining ideas to operationalize these changes in Chicago, review practical workforce augmentation and retraining strategies for government agencies in the Nucamp AI Essentials for Work syllabus.
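The ADS inventory entry and one‑page impact‑assessment checklist described above can be kept as structured data so each record is both publishable and machine‑checkable before an RFP goes out. The sketch below is a minimal illustration in Python; the field names, risk categories, and readiness rule are assumptions for this example, not an official City of Chicago schema:

```python
# Minimal sketch of a publishable ADS inventory entry; the schema,
# risk categories, and readiness rule are illustrative assumptions,
# not an official City of Chicago format.
from dataclasses import dataclass, asdict
import json

@dataclass
class ADSInventoryEntry:
    system_name: str
    department: str
    vendor: str
    purpose: str
    risk_level: str               # e.g. "high" when benefits or employment are affected
    human_review: bool            # statutory "meaningful human review" commitment
    impact_assessment_on_file: bool

    def ready_for_rfp(self) -> bool:
        # High-risk systems need both a filed impact assessment and a
        # human-in-the-loop commitment before any RFP is issued.
        if self.risk_level == "high":
            return self.impact_assessment_on_file and self.human_review
        return self.impact_assessment_on_file

entry = ADSInventoryEntry(
    system_name="Ambient clinical documentation pilot",
    department="Public Health",
    vendor="(candidate vendor)",
    purpose="Draft visit notes from recorded clinical conversations",
    risk_level="high",
    human_review=True,
    impact_assessment_on_file=True,
)
print(json.dumps(asdict(entry), indent=2))  # the publishable inventory record
```

Keeping entries in a form like this lets an agency export the full inventory to JSON for publication and run the same readiness check across every pilot.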
Bill | Focus / Category |
---|---|
H 35 / S 1425 | AI Systems and Health Insurance (Health Use; Oversight/Governance) |
H 496 / S 456 / S 971 | AI Video Interview Act amendments (Employment / Private Sector Use) |
H 3529 | AI Governance Principles and Disclosure Act (Responsible Use; Notification) |
H 3567 / H 3720 | Meaningful Human Review of AI Act / AI Act (Government Use; Impact Assessment) |
S 1366 | State Government AI Act (Government Use; Impact Assessment) |
S 2203 | Preventing Algorithmic Discrimination Act (Responsible Use) |
“There's a new world. Let's open the door and then start restricting in a narrow, detailed way, not like other states that are basically trying to ban everything.”
What is AI used for in 2025? Government use cases in Chicago
By 2025 Chicago's most tangible AI uses for government‑adjacent services are in clinical and public‑health settings: ambient AI note‑taking and transcription tools now draft visit notes, summarize conversations in multiple languages, and cut clinician after‑hours “pajama time,” freeing staff for patient care and public‑facing duties. Rush's summer 2024 pilot with Suki reported that 74% of clinicians said the tool reduced burnout and 95% wanted to continue, with clinicians using 25 non‑English languages and 35% of providers running Spanish visits in one month (Rush ambient AI pilot - Healthcare IT News).
City health systems and safety‑net partners have scaled similar tools - UChicago and Northwestern reported hundreds of clinicians on ambient platforms and documented double‑digit reductions in after‑hours charting - so municipal IT, procurement, and public‑health leaders should prioritize integration, consent/privacy review, and vendor security checks when funding pilots (Chicago Tribune report on AI note‑taking in area clinics); peer‑reviewed surveys of health systems show adoption is accelerating but that barriers remain around workflow fit and evaluation metrics (Peer‑reviewed survey: adoption of AI in healthcare).
The so‑what: practical city wins are immediate - less clinician burnout and faster documentation translate into more clinic capacity for Chicago's public clinics and lower overtime costs for municipal health staff.
Use case | Example / vendor | Reported impact |
---|---|---|
Ambient documentation | Rush (Suki); Abridge at Endeavor/UChicago | 74% reduced burnout; faster chart closure; multi‑language support (25 languages) |
Systemwide clinician rollout | Northwestern, UChicago, Advocate/Aurora | Hundreds to ~1,300 providers onboarded; 15–17% reductions in after‑hours charting reported |
AI‑powered on‑demand care | Rush Connect (Fabric partnership) | Service expansion announced July 10, 2025 |
“Our goal in embracing ambient AI was to improve provider wellness, including their work‑life balance, improved documentation and a more personalized approach to patient care.”
Federal resources and guidance for Chicago agencies: GSA AI Guide and more
Chicago agencies have two immediate federal playbooks to reduce risk and accelerate pilots: the GSA's AI Guide for Government - a living, non‑technical roadmap (printed 8/8/2025) that explains organization, workforce, data governance, MLOps, and procurement best practices - and GSA's new USAi evaluation suite, a secure, no‑cost generative‑AI sandbox (USAi.gov) where agencies can test chatbots, code generation, and summarization tools against bias, security, and performance checks before buying. Using the Guide to update ADS inventories and then running candidate models through USAi's logged, FedRAMP‑aligned tests gives Chicago a concrete way to show federal alignment on transparency and safety when competing for grants or procuring LLMs. To use these tools operationally, start by mapping one high‑priority use case, run vendor claims through USAi, and document results in the AI Guide's recommended lifecycle so procurement and grant narratives demonstrate the “responsible adoption” reviewers expect.
Resource | What it provides |
---|---|
GSA AI Guide for Government - printable roadmap for federal AI governance | Framework for organization, workforce, data governance, AI lifecycle, and procurement guidance |
GSA USAi evaluation suite - secure generative AI sandbox for testing chat, code, and summarization | Secure, shared environment to test generative AI (chat, code, summarization) with logging, bias checks, and FedRAMP compliance |
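The pattern this section describes, run logged tests and then attach the results to procurement files, can be sketched as a small local harness. USAi's real interfaces are not documented here, so the function names, metric names, and thresholds below are all assumptions for illustration:

```python
# Hypothetical local logging harness; USAi's real interfaces are not
# documented here, so function names, metric names, and thresholds
# below are all assumptions for illustration.
import hashlib
import json
from datetime import datetime, timezone

def record_eval(model_id: str, test_name: str, metrics: dict, log: list) -> dict:
    """Append one evaluation run to a log, with a digest reviewers can re-verify."""
    entry = {
        "model_id": model_id,
        "test": test_name,
        "metrics": metrics,
        "run_at": datetime.now(timezone.utc).isoformat(),
    }
    # Tamper-evident digest over the stable fields, so an auditor can
    # recompute it later and confirm the artifact was not altered.
    canonical = json.dumps(
        {k: entry[k] for k in ("model_id", "test", "metrics")}, sort_keys=True
    )
    entry["digest"] = hashlib.sha256(canonical.encode()).hexdigest()
    log.append(entry)
    return entry

def passes_thresholds(entry: dict, minimums: dict) -> bool:
    """True only when every required metric meets its minimum value."""
    return all(entry["metrics"].get(m, 0.0) >= floor for m, floor in minimums.items())

audit_log: list = []
run = record_eval(
    "candidate-llm-a", "summarization-accuracy",
    {"rouge_l": 0.41}, audit_log,
)
print(passes_thresholds(run, {"rouge_l": 0.35}))  # True
```

Attaching a log like `audit_log` to an RFP response gives procurement staff comparable, verifiable artifacts rather than vendor claims alone.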
“USAi means more than access - it's about delivering a competitive advantage to the American people.”
How to start with AI in 2025: practical steps for Chicago government beginners
Start simple, legal, and measurable: designate an AI Lead or small center of excellence (as the City recommends) to own policy, vendor vetting, and staff guidance; create a one‑page automated‑decision‑system (ADS) inventory and a short impact‑assessment checklist tied to a single, high‑value use case; then stand up an Integrated Product Team (IPT) with embedded mission staff, a technical lead, and legal/security support so decisions stay mission‑driven while tapping centralized tools.
Use the GSA AI Guide for Government to structure roles and lifecycle checkpoints, follow Chicago's AI Roadmap recommended actions on issuing clear short‑form staff guidance and vendor vetting, and leverage Cook County's Bureau of Technology workstream (they've already cataloged 49 potential AI use cases) to pick a pilot that minimizes data movement and maximizes measurable service outcomes.
The practical payoff: one scoped pilot plus an ADS entry and impact checklist protects residents, makes procurement reviews faster, and creates the evidence reviewers want when pursuing state or federal AI grants.
GSA AI Guide for Government - roles, lifecycle, and procurement checklists, City of Chicago AI Roadmap - recommended actions for municipal AI governance, Cook County Bureau of Technology Strategic Plan - generative AI roadmap and 49 potential use cases.
Step | Action | Quick resource |
---|---|---|
1. Governance | Appoint AI Lead / COE; publish staff guidelines | City of Chicago AI Roadmap - recommended actions for governance and staff guidance |
2. Inventory & select | Create ADS inventory; choose one pilot from existing use‑case list | Cook County Bureau of Technology Strategic Plan - generative AI use‑case catalog |
3. Build & test | Form IPT + IAT, follow GSA lifecycle and procure with impact tests | GSA AI Guide for Government - AI lifecycle, procurement, and impact assessment |
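The three steps in the table above can also be treated as a simple readiness gate that a pilot must clear before launch. The step names and completion criteria in this sketch are illustrative assumptions, not an official checklist:

```python
# Hypothetical pilot-readiness gate mirroring the three steps above;
# step names and completion criteria are illustrative assumptions,
# not an official checklist.
PILOT_PREREQS = [
    ("governance", "AI Lead appointed and staff guidelines published"),
    ("inventory", "ADS inventory entry filed for the chosen use case"),
    ("build_test", "IPT formed and impact tests completed"),
]

def pilot_ready(completed: set) -> tuple:
    """Return (ready, descriptions of steps still outstanding)."""
    outstanding = [desc for step, desc in PILOT_PREREQS if step not in completed]
    return (not outstanding, outstanding)

ready, todo = pilot_ready({"governance", "inventory"})
print(ready, todo)  # False ['IPT formed and impact tests completed']
```

The value of an explicit gate is less the code than the discipline: a pilot that cannot name its outstanding prerequisites is not ready for procurement review.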
Procurement, legal, and vendor considerations for Chicago agencies
Chicago procurement teams should treat any AI purchase as a legal and technical package: update RFP templates to require vendor‑supplied ADS inventories and written impact assessments, FedRAMP or SOC 2 evidence, clear IP licensing (who owns model outputs and training‑set rights), robust breach‑notification timelines, and contractual indemnities for product‑liability and consumer‑protection risks, so the city can both manage harm and preserve eligibility for federal grants tied to transparency and safety. Sidley's AI practice flags privacy, cybersecurity, IP ownership, products‑liability, national‑security, and regulatory risks as core contract issues to address up front (Sidley artificial intelligence legal guidance for government procurement).
To reduce technical risk and speed reviews, require vendors to reproduce key claims in a logged test environment (for example, run candidate models through the GSA USAi evaluation sandbox and attach the results to the RFP response) so procurement staff can compare bias, security, and performance artifacts before awarding a contract (GSA USAi evaluation sandbox and suite for AI procurement testing).
Ask for a one‑page “safety & IP snapshot” from each bidder - if it's missing, disqualify the proposal to avoid downstream legal fights and federal alignment problems.
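The disqualification rule above, no safety & IP snapshot means no award, can be enforced mechanically at intake. This is a minimal sketch under the assumption that each bidder's submission is tracked as a set of named artifacts; the artifact names are illustrative, not official RFP fields:

```python
# Intake screen enforcing the "missing snapshot disqualifies" rule;
# artifact names are illustrative assumptions, not official RFP fields.
REQUIRED_ARTIFACTS = {
    "ads_inventory",        # vendor-supplied ADS inventory
    "impact_assessment",    # written impact assessment
    "security_evidence",    # FedRAMP or SOC 2 attestation
    "ip_licensing_terms",   # output and training-data ownership
    "safety_ip_snapshot",   # the one-page safety & IP summary
}

def screen_proposal(submitted: set) -> tuple:
    """Return (qualified, missing); any missing artifact disqualifies."""
    missing = REQUIRED_ARTIFACTS - set(submitted)
    return (not missing, missing)

qualified, missing = screen_proposal(
    {"ads_inventory", "impact_assessment", "security_evidence", "ip_licensing_terms"}
)
print(qualified, missing)  # False {'safety_ip_snapshot'}
```

Running this check before any substantive review keeps incomplete proposals from consuming legal and technical evaluation time.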
Contract clause | Why it matters (legal/operational) |
---|---|
ADS inventory & vendor impact assessment | Transparency, auditability, and eligibility for grants tied to disclosure |
FedRAMP / SOC 2 / security evidence | Meets federal security expectations and reduces breach risk |
IP ownership / output licensing | Prevents downstream copyright or licensing disputes over model outputs |
Indemnity & liability limits | Allocates product‑liability and consumer‑protection risk |
Breach reporting & incident response timing | Ensures timely notice for compliance and resident protection |
National‑security / export screening | Addresses CFIUS/national security and procurement eligibility concerns |
Workforce, education, and equity in Chicago's AI government rollout
Chicago's AI rollout must pair technology pilots with an explicit workforce and equity plan: use targeted AI prompts to surface federal and state funding opportunities for public‑health and retraining programs so departments can finance upskilling without diverting operating budgets (AI prompts for public health grant funding in Chicago government), adopt workforce‑augmentation and retraining strategies that protect jobs while increasing on‑the‑job productivity and reducing overtime, and prioritize training for frontline staff whose workflows will change first (workforce augmentation and retraining strategies for Chicago government staff).
Plan concrete pathways from training to role changes - short technical certificates, shadowing with vendor pilots, and reassignment ladders informed by the “top‑5 jobs at risk” analysis - so equity goals are measurable and residents see service continuity, not disruption (preparing Chicago public servants for AI job shifts and top-5 at-risk roles).
The so‑what: tying grant‑sourced funds to retraining lets Chicago scale pilots responsibly while keeping experienced public servants in mission‑critical roles.
Testing, evaluation, and governance: building trustworthy AI in Chicago government
Testing and evaluation must be built into every step of an AI project so Chicago can deploy useful systems without sidelining resident safety or federal funding eligibility: follow the GSA's lifecycle advice to embed model‑level, system‑level, operational, and ethical test & evaluation (T&E) into DevSecOps pipelines, run iterative Plan‑Do‑Check‑Act cycles, and require vendors to supply reproducible validation artifacts (benchmarks, changelogs, and conflict‑of‑interest disclosures) before award; when tool documentation is missing, disqualify the bid.
Pair these lifecycle checks with continuous monitoring for model drift, automated alerts, and clear human accountability so a drifting recidivism score or an opaque vendor update can be caught and rolled back quickly.
Also, avoid blind reliance on off‑label governance metrics - independent reviews like the World Privacy Forum warn that popular explainability tools (SHAP, LIME) and one‑size fairness heuristics can mislead unless accompanied by context, peer review, and adversarial testing.
Practically: require a vendor “validation report” (benchmarks + test code + update log) and catalog it in procurement files to shorten audits and preserve FedRAMP/contract eligibility.
Use federal playbooks and tool catalogs to standardize tests and evidence requirements across departments for faster, safer adoption (GSA AI Guide for Government: AI lifecycle and test & evaluation guidance, World Privacy Forum report: Risky Analysis assessing AI governance tools, FedRAMP Marketplace: authorized cloud services listing).
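Continuous monitoring for model drift can start very simply before an agency adopts heavier metrics. The sketch below flags drift when a new batch of scores moves well outside the baseline's spread; the z‑score heuristic and threshold are illustrative assumptions, and production systems typically use measures such as PSI or KL divergence instead:

```python
# Deliberately simple drift alert: flag when a new batch of model
# scores moves far outside the baseline's spread. The z-score
# heuristic and threshold are illustrative assumptions; production
# systems typically use metrics such as PSI or KL divergence.
from statistics import mean, stdev

def drift_alert(baseline, current, z_threshold=3.0):
    """True when the current batch mean sits more than z_threshold
    baseline standard deviations away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(current) != mu
    return abs(mean(current) - mu) / sigma > z_threshold

baseline_scores = [0.52, 0.49, 0.51, 0.50, 0.48]   # e.g. weekly mean risk scores
print(drift_alert(baseline_scores, [0.50, 0.51]))  # False: within normal spread
print(drift_alert(baseline_scores, [0.90, 0.92]))  # True: investigate / roll back
```

Even a crude alert like this, wired to a human owner, gives the accountability loop the section calls for: a drifting recidivism score or an opaque vendor update triggers review instead of going unnoticed.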
Test type | Purpose |
---|---|
Model‑level evaluation | Verify generalization, accuracy, and robustness on holdout/benchmark datasets |
System & operational testing | Validate end‑to‑end behavior, latency, security, and drift monitoring in production |
Ethical & impact evaluation | Detect bias, privacy harms, and downstream equity impacts; require mitigation plans |
“AI Governance Tools - Socio-technical tools for mapping, measuring, or managing AI systems and their risks in a manner that operationalizes or implements trustworthy AI.”
Conclusion: The future of AI in Chicago government - opportunities and next steps
Chicago's path forward is pragmatic: align local policy with federal expectations, prove safety with one scoped pilot and a published ADS inventory, and invest in workforce upskilling so departments can both protect residents and remain competitive for federal AI funding. Use the GSA AI Guide for Government to structure lifecycle checkpoints and testing, join peers and regulators at the Chicago AI Week 2025 conference for responsible AI to learn practical governance patterns, and train staff quickly with targeted courses like Nucamp's AI Essentials for Work (15‑week AI for work bootcamp) so procurement, legal, and program teams can evaluate vendor impact reports and USAi‑style test artifacts before award.
The so‑what: publishing an ADS entry and running a single, measurable pilot this year both shortens procurement reviews and preserves eligibility for discretionary federal grants tied to transparency and safety.
Program | Length | Early Bird Cost | Registration |
---|---|---|---|
AI Essentials for Work | 15 Weeks | $3,582 | Register for Nucamp AI Essentials for Work (15 Weeks) |
“By uniting strategic leadership, groundbreaking technology, and human centered design, we're accelerating Chicago's emergence as a global leader in trustworthy AI.”
Frequently Asked Questions
How does the White House's "America's AI Action Plan" (July 2025) affect Chicago government agencies?
The Plan ties federal AI adoption, accelerated data‑center permitting, and updated procurement rules to state regulatory climates. Illinois and Chicago agencies could face new eligibility questions for discretionary federal grants and contracts if local rules (for example, transparency, impact assessments, or procurement clauses) are deemed inconsistent with federal priorities. Practically, agencies should update ADS inventories, align procurement language with federal guidance (including ideological neutrality tests for some LLM procurements), and document compliance in grant narratives to preserve funding eligibility.
What immediate policy and procurement steps should Chicago departments take to stay aligned with federal expectations?
Publish an automated decision systems (ADS) inventory and a one‑page impact‑assessment checklist before issuing AI RFPs; update RFP templates to require vendor‑supplied impact assessments, FedRAMP/SOC 2 or equivalent security evidence, IP/output licensing clarity, breach‑notification timelines, and indemnities. Require vendors to reproduce key claims in a logged test environment (for example, the GSA USAi sandbox) and attach validation artifacts to proposals to speed technical review and protect federal grant eligibility.
What practical AI use cases are highest priority for Chicago government in 2025?
High‑priority, high‑impact uses include ambient clinical documentation and transcription (reducing clinician after‑hours charting and burnout), systemwide clinician rollouts for public health partners, and AI‑powered on‑demand care expansions. These deliver immediate operational wins - more clinic capacity and lower overtime - while requiring focused attention on privacy, consent, workflow integration, and vendor security checks.
How should Chicago agencies handle workforce training and equity when adopting AI?
Pair pilots with a workforce and equity plan: designate an AI Lead or COE, provide targeted short upskilling (e.g., Nucamp's 15‑week AI Essentials for Work or short technical certificates), use shadowing and role‑reassignment ladders, and tie grant funds to retraining. Focus training on frontline staff first and create measurable pathways from training to role changes so pilots scale responsibly without displacing experienced public servants.
What testing, evaluation, and federal tools should Chicago use to demonstrate trustworthy AI adoption?
Follow the GSA's AI Guide for Government lifecycle checkpoints and use the GSA USAi evaluation sandbox to run logged, FedRAMP‑aligned tests (bias, security, performance) on candidate models. Require vendor validation reports (benchmarks, test code, update logs), embed model/system/operational/ethical testing into DevSecOps pipelines, monitor for model drift, and catalog evidence in procurement files to shorten audits and strengthen grant applications.
Ludo Fourrage
Founder and CEO
Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space. As Senior Director of Digital Learning, he led the development of a first‑of‑its‑kind "YouTube for the Enterprise". More recently, he delivered one of the most successful corporate MOOC programs in partnership with top business schools and consulting organizations such as INSEAD, Wharton, London Business School, and Accenture. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.