The Complete Guide to Using AI in the Government Industry in New York City in 2025
Last Updated: August 23, 2025

Too Long; Didn't Read:
NYC's 2025 AI roadmap mandates 37 actions (NYC AI Action Plan) amid Local Law 144 and proposed NY AI Act requiring disclosures and bias audits. $109.1B U.S. AI investment (2024) and falling inference costs (↓280×) make vendor controls, testing, procurement clauses and staff upskilling essential.
New York City in 2025 sits at the crossroads of rapid AI adoption and tightening oversight: the NYC AI Action Plan maps 37 targeted actions for governance, procurement and workforce upskilling, while Local Law 144 and state bills like the proposed NY AI Act push disclosure, independent bias audits and new remedies for harmed residents - so transparency and vendor controls matter as much as innovation.
A recent New York State Comptroller audit found agencies lack an effective AI governance framework and inconsistent agency readiness, a stark reminder that pilot projects must pair with testing, transparency and staff training to avoid costly errors.
With NYC's $2 trillion metro economy and deep AI talent base, practical public‑sector training such as Nucamp's 15‑week AI Essentials for Work can help city staff and vendors turn policy into accountable, day‑to‑day practice; learn more about the plan at the NYC Office of Technology and the Comptroller's audit and explore the bootcamp syllabus for hands‑on skills.
Bootcamp | Length | Courses Included | Early Bird Cost | Registration | Syllabus |
---|---|---|---|---|---|
AI Essentials for Work | 15 Weeks | AI at Work: Foundations; Writing AI Prompts; Job Based Practical AI Skills | $3,582 | AI Essentials for Work - Registration (Nucamp) | AI Essentials for Work - Syllabus (Nucamp) |
“an umbrella term without precise boundaries, that encompasses a range of technologies and techniques of varying sophistication that are used to, among other tasks, make predictions, inferences, recommendations, rankings, or other decisions with data, and that includes topics such as machine learning, deep learning, supervised learning, unsupervised learning, reinforcement learning, statistical inference, statistical regression, statistical classification, ranking, clustering, and expert systems.”
Table of Contents
- What is AI and how governments use it in New York City
- What is the AI regulation in the US 2025 and New York specifics
- AI industry outlook for 2025 and what it means for New York City
- How is AI used in the US government and New York City agencies
- Procurement, vendor requirements, and buying AI in New York City government
- Governance model and operational maturity for New York City agencies
- Workforce, training and public engagement in New York City
- Risks, incidents, and infrastructure considerations for New York City
- Conclusion: Practical next steps for New York City governments starting with AI in 2025
- Frequently Asked Questions
Check out next:
Learn practical AI tools and skills from industry experts in New York City with Nucamp's tailored programs.
What is AI and how governments use it in New York City
At its simplest, AI is technology that helps machines sense, reason and act, and in 2025 New York City agencies are translating that capability into practical services: generative AI can create text, images, code or summaries on demand, while machine learning powers predictions like fraud detection or maintenance needs; as Google Cloud's explanation of AI versus machine learning explains, AI is the broader concept and ML is one of its workhorse techniques.
Cities use these tools to automate routine paperwork, accelerate case review, and run always‑on customer service - think chatbots answering resident questions at 3 a.m.
and drafting routine letters so human staff focus on complex cases - benefits highlighted across industry guides. The mechanics matter for trustworthy deployment: foundation models are trained, tuned (for example with RLHF) and sometimes paired with retrieval‑augmented generation to keep outputs current, and IBM's primer on generative AI outlines the tradeoffs - efficiency and creativity versus risks like hallucinations, bias and privacy concerns that city procurement and governance must mitigate.
For NYC practitioners, practical use cases include streamlined constituent communications, automated document workflows, code generation for app modernization, and predictive analytics for infrastructure - all promising big efficiency gains if paired with clear vendor controls, testing and staff upskilling such as plain‑language prompt design and oversight trainings described in local guidance and Nucamp resources.
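To make the retrieval‑augmented generation pattern mentioned above concrete, here is a minimal sketch of the idea: retrieve the most relevant passages from an approved corpus of city content, then build a prompt that instructs the model to answer only from those passages. The corpus, scoring function, and prompt wording are illustrative stand‑ins, not any city API or official guidance.

```python
# Minimal RAG sketch: keyword-overlap retrieval over an approved corpus,
# then a grounded prompt. Real deployments would use embeddings and a
# vector store; this only illustrates the control flow.

def score(query: str, passage: str) -> int:
    """Crude relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages with the highest overlap score."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below. If they do not cover the "
        "question, say so and refer the resident to 311.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "Alternate side parking is suspended on legal holidays.",
    "Childcare subsidy applications are processed within 30 days.",
    "Dog licenses must be renewed every year.",
]
prompt = build_prompt("Is alternate side parking suspended on holidays?", corpus)
```

The key design point is that grounding instructions and the escalation path ("refer the resident to 311") live in the prompt itself, so every answer carries its guardrails.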
"You might hear people use artificial intelligence (AI) and machine learning (ML) interchangeably, especially when discussing big data, predictive analytics, and other digital transformation topics."
What is the AI regulation in the US 2025 and New York specifics
Federal AI policy in 2025 is a fast-moving patchwork that New York City agencies must track closely: a 2023 White House directive (EO 14110) set an ambitious playbook - charging NIST with standards, requiring red‑teaming and reporting for powerful models, and pushing agency AI governance - while a GAO review found agencies had completed key management and talent requirements from that order by mid‑2024, laying foundational governance expectations (GAO implementation review GAO-24-107332).
At the same time, two January 2025 presidential actions reshuffled priorities: an Executive Order on AI Infrastructure (Jan 14, 2025) emphasizes siting, grid impacts and the administration's warning that AI's compute and energy needs are rising, creating new permitting, clean‑energy and lab‑security obligations for large data‑center projects, while a separate Executive Order Removing Barriers to American Leadership in AI (Jan 23, 2025) directs a fresh AI action plan and rescinds or re‑evaluates steps taken under EO 14110 - so agencies and vendors should expect both continuity (standards, workforce and procurement guidance) and near‑term changes as federal memoranda and OMB guidance are revised.
The bottom line for NYC: federal rules now combine risk‑management expectations with infrastructure, permitting and energy provisions that will shape procurement, vendor requirements and how city agencies scale trustworthy AI.
Policy / Report | Date | Core takeaway for agencies |
---|---|---|
GAO implementation review (selected EO requirements) | Sep 2024 | Agencies implemented key management and talent requirements - foundation for agency AI governance |
EO: Advancing U.S. Leadership in AI Infrastructure | Jan 14, 2025 | Focus on AI data centers, permitting, lab security, and matching clean energy to rising electricity demand |
EO: Removing Barriers to American Leadership in AI | Jan 23, 2025 | Revokes or revises prior EO 14110 actions and directs a new AI action plan; OMB guidance to be updated |
“safe, secure, and trustworthy”
“AI's electricity and computational needs are vast, and they are set to surge”
AI industry outlook for 2025 and what it means for New York City
The 2025 industry outlook shows why New York City can no longer treat AI as an experiment: U.S. private AI investment surged to $109.1 billion in 2024 and generative AI drew $33.9 billion worldwide, while inference costs plunged (over 280‑fold between late 2022 and Oct 2024), rapidly lowering barriers to advanced capabilities - find the full data in Stanford HAI's 2025 AI Index.
At the same time builders are doubling down: many companies now allocate 10–20% of R&D to AI and prioritize cross‑functional talent, model portfolios, and cloud/inference spend, trends summarized in ICONIQ's State of AI 2025 report.
For New York City that translates into tangible pressures and opportunities: expect stronger demand for GPUs, cloud capacity and data‑center partnerships, sharper competition for AI/ML engineers, and a shift in budget mix from hiring to operational cloud and governance costs - so practical investments in workforce retraining and AI supervision matter, as outlined in local Nucamp resources on retraining city workers for AI supervision.
The upshot is clear and vivid: as core costs fall and capital concentrates, cities that pair pragmatic procurement and governance with targeted upskilling will be best positioned to turn industry momentum into reliable public services rather than costly pilots gone wrong.
Metric | Value / Trend | Source |
---|---|---|
U.S. private AI investment (2024) | $109.1 billion | Stanford HAI 2025 AI Index Report |
Generative AI private investment (global) | $33.9 billion | Stanford HAI 2025 AI Index Report |
Inference cost change (Nov 2022–Oct 2024) | ↓ over 280‑fold | Stanford HAI 2025 AI Index Report |
Share of R&D for AI (2025 builders) | 10–20% of R&D budgets | ICONIQ Capital State of AI 2025 Report |
How is AI used in the US government and New York City agencies
New York City agencies are putting AI into everyday service delivery - most visibly through the New York City MyCity Chatbot (a Microsoft Azure pilot), trained to answer official city questions, respond in the languages required by Local Law 30, and surface content from thousands of business and service pages to speed tasks like childcare signups, parking info and licensing guidance; but real‑world use has revealed the tradeoffs between convenience and accuracy, with investigative reports documenting dangerous confabulations - one high‑profile test even produced an answer saying landlords didn't have to accept Section 8 vouchers - so handoffs to human reviewers, strong vendor controls, and continuous testing are non‑negotiable, as detailed in the Ars Technica report on NYC chatbot confabulations and errors.
The city's AI Action Plan and pilot program design show the practical posture agencies are taking: pilot widely but pair deployments with governance, public verification and staff upskilling; for example, investing in plain‑language prompt design and constituent‑communication best practices helps turn faulty chat output into reliable service pathways - see the guide on plain‑language constituent communications for government AI implementations.
The bottom line for New York: chatbots and predictive tools can shrink friction for residents and businesses, but the image of a municipal bot confidently lying about tenants' rights is a vivid reminder that oversight, transparency and human review must ride shotgun on every AI rollout.
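The "plain‑language prompt design" practice referenced above can be made tangible with a reusable system‑prompt template that constrains a constituent‑facing model to simple wording, cited sources, and an explicit escalation path. The wording and the `reading_level` parameter below are assumptions for illustration, not official NYC guidance.

```python
# Sketch of a plain-language system prompt for a constituent-facing chatbot.
# Encoding reading level, citation, and escalation rules in one template
# keeps every deployment's guardrails consistent and reviewable.

def constituent_system_prompt(topic: str, reading_level: str = "8th grade") -> str:
    return (
        f"You help New York City residents with {topic}.\n"
        f"Write at a {reading_level} reading level, in short sentences.\n"
        "Cite the specific city webpage or rule you relied on.\n"
        "If you are not certain, say so and direct the resident to 311 "
        "or the responsible agency instead of guessing."
    )

prompt = constituent_system_prompt("parking rules")
```

Because the template is a plain function, legal and operations staff can review one artifact instead of auditing ad‑hoc prompts scattered across deployments.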
“While artificial intelligence presents a once-in-a-generation opportunity to more effectively deliver for New Yorkers, we must be clear-eyed about the potential pitfalls and associated risks these technologies present.”
Procurement, vendor requirements, and buying AI in New York City government
Buying AI for New York City government means navigating both New York State's well‑trod procurement rules and a new layer of AI‑specific obligations: the NYS Office of Information Technology Services requires all tech procurements to comply with state procurement laws, MWBE/SDVOB supplier diversity rules, restricted‑period lobbying guidance and even Appendix C‑AI (Standard Clauses for AI Purchases, Apr 2025) that standardizes contract terms and required forms - start with the ITS vendor resources for the checklist and required attestations (NYS ITS Procurement vendor resources and checklist).
At the federal level, the White House's “New AI Guidelines” (M‑25‑21 / M‑25‑22) push agencies to demand minimum risk‑management for high‑impact AI, stronger IP and data‑rights clauses, vendor‑lock‑in protections, and explicit prohibitions on training public models with nonpublic agency data - provisions that will shape contract language and apply to solicitations awarded or renewed after Oct 1, 2025 (White House AI procurement guidance summary (M-25-21/M-25-22) by Ropes & Gray).
Cities can turn procurement into a governance tool by requiring vendor “AI FactSheets,” auditability, human‑in‑the‑loop controls and continuous monitoring so promises become verifiable obligations; NYC's AI Action Plan even calls for AI‑specific procurement standards and shared contracting templates to streamline cross‑agency buying and oversight (Coverage of New York City AI Action Plan procurement standards on Route Fifty).
The practical upshot is simple and vivid: a well‑drafted clause can be the difference between a vendor merely promising “bias mitigation” and a binding, auditable commitment that keeps nonpublic city data out of commercial model training and guarantees ongoing testing and remedies for residents.
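One way to make a vendor "AI FactSheet" more than a promise is to treat it as a machine‑checkable artifact that procurement staff validate before award. The required fields below are illustrative, drawn from the obligations discussed above (no training on nonpublic data, audit rights, monitoring); they are not an official schema.

```python
# Sketch: validating a vendor AI FactSheet against contract requirements.
# A gap list like this can feed directly into a pre-award checklist.

REQUIRED_FIELDS = {
    "intended_use": str,
    "training_data_summary": str,
    "no_training_on_nonpublic_city_data": bool,
    "audit_rights": bool,
    "monitoring_plan": str,
    "bias_test_results": str,
}

def factsheet_gaps(factsheet: dict) -> list[str]:
    """Return contract-readiness gaps: missing fields, wrong types,
    or boolean commitments the vendor declined (set to False)."""
    gaps = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in factsheet:
            gaps.append(f"missing: {field}")
        elif not isinstance(factsheet[field], ftype):
            gaps.append(f"wrong type: {field}")
        elif factsheet[field] is False:
            gaps.append(f"not committed: {field}")
    return gaps

vendor = {
    "intended_use": "benefits-eligibility chatbot",
    "no_training_on_nonpublic_city_data": True,
    "audit_rights": False,
    "monitoring_plan": "quarterly accuracy reports",
}
gaps = factsheet_gaps(vendor)
```

Here the vendor has declined audit rights and omitted training‑data and bias‑test disclosures, so the sheet fails review before any contract language is negotiated.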
Governance model and operational maturity for New York City agencies
Operational maturity for AI in New York's public sector means moving quickly from isolated pilots to a citywide lifecycle that ties policy, procurement and people together: the Office of the State Comptroller's April 2025 audit makes this urgent - finding New York State “does not have an effective AI governance framework,” that sampled agencies vary widely, and that none required procedures to test AI outputs for accuracy or bias - clear signals that ITS must strengthen its Acceptable Use policy and lead coordinated training and oversight (New York State OSC AI governance audit (April 2025)).
Practical next steps already recommended by thought leaders include publishing and maintaining AI use‑case inventories so residents and auditors can see when and how systems are used, appointing responsible‑AI officers, creating a centralized risk registry, requiring third‑party audits, and building post‑market monitoring into contracts - practices summarized in the Responsible AI Institute's roadmap and the Center for Democracy & Technology's brief on inventories (Responsible AI Institute overview of NYC AI Action Plan 2023–2025, Center for Democracy & Technology best practices for public sector AI use-case inventories).
With over 20 algorithmic tools already in use across roughly a dozen agencies, the emphasis must be on repeatable processes - clear inventories, audit-ready contracts, staff training, and lifecycle testing - so governance becomes an operational muscle rather than an afterthought.
Issue | Key OSC Finding / Recommended Action |
---|---|
Statewide framework | Not effective; ITS should amend AI policy and provide guidance |
Agency practices | Varied maturity; agencies must implement AI-specific policies and coordinate with ITS |
Testing & training | None sampled required procedures to test outputs; develop statewide training |
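The bias testing the OSC found missing has a concrete, simple core: Local Law 144‑style audits report impact ratios, each category's selection rate divided by the rate of the most‑selected category. The sketch below shows that arithmetic with purely illustrative numbers; the 0.8 "four‑fifths" screen is a common heuristic threshold, not a legal standard from the law itself.

```python
# Sketch of impact-ratio arithmetic used in Local Law 144-style bias audits.
# outcomes maps category -> (selected count, total applicants).

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    return {cat: sel / total for cat, (sel, total) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    rates = selection_rates(outcomes)
    top = max(rates.values())  # rate of the most-selected category
    return {cat: rate / top for cat, rate in rates.items()}

ratios = impact_ratios({"group_a": (40, 100), "group_b": (25, 100)})
# group_a rate 0.40, group_b rate 0.25 -> ratios 1.0 and 0.625
flagged = [cat for cat, r in ratios.items() if r < 0.8]  # four-fifths screen
```

Even this toy version makes the audit question precise: group_b's ratio of 0.625 falls below the 0.8 screen and would warrant investigation and documentation.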
“The actions completed thus far will continue to inform our work going forward.” - Alex Foard
Workforce, training and public engagement in New York City
Building an AI-ready workforce in New York City means combining free, practical learning with state-backed investments and hands-on experiences so staff can safely run, oversee and explain AI to the public - practical training pathways include InnovateUS's no‑cost, at‑your‑own‑pace courses such as Responsible AI for Public Professionals and Building AI That Works, which teach prompt design, risk mitigation and project selection for public servants (InnovateUS Responsible AI and Building AI That Works free courses for public sector professionals); statewide programs and grants from the Office of Strategic Workforce Development back on‑ramps into high‑demand roles (a $350M investment and dedicated grant rounds that have already funneled $63M to 77 projects, training more than 14,000 New Yorkers) and help scale employer‑driven retraining (New York OSWD workforce development grants and employer-driven retraining programs).
The New York State Department of Labor's innovations - from immersive virtual‑reality career exploration to an AI‑powered Virtual Career Center - pair well with city upskilling so displaced or at‑risk staff can move into AI supervision and oversight roles; practical local guides and retraining pathways show how prompt engineering or plain‑language constituent communications can turn a risky pilot into a reliable service, and targeted investments (courses + grants + on‑the‑job practice) make that transition concrete - imagine a staffer using a VR simulator to explore a new job pathway, then applying an InnovateUS workshop lesson the next day to de‑risk a live chatbot rollout (retraining city workers for AI supervision and oversight in New York City).
Program | Metric | Value |
---|---|---|
InnovateUS | Learners / Agencies served | 90,000+ learners; 150+ agencies |
OSWD (ESD) | Investment / Awards | $350M investment; $63M awarded to 77 projects; 14,000+ trained |
NYSDOL | Training innovations | VR career exploration; AI-powered Virtual Career Center |
Risks, incidents, and infrastructure considerations for New York City
The risks New York City must plan for are no longer hypothetical: high‑profile failures - from NYC's MyCity chatbot confidently telling business owners to underpay staff or serve food “nibbled by rodents,” to chatbots fabricating legal precedents and airlines being held liable for bad advice - show how hallucinations quickly become reputational, legal and financial disasters.
Security teams also face prompt‑injection, jailbreaking and data‑exfiltration risks that can expose sensitive records or bypass access controls, so municipal projects need offensive testing, least‑privilege data access and continuous monitoring as standard practice.
Mitigation is practical: require Retrieval‑Augmented Generation (RAG) or domain‑restricted sources, build human‑in‑the‑loop review, enforce fact‑checking and post‑market monitoring, and mandate pentesting and OWASP‑style defenses before public rollouts; these operational steps help turn a “plausibility engine” into a reliable city service rather than a minefield for residents and agencies.
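The human‑in‑the‑loop review step above can be sketched as a routing gate: a draft answer goes to the resident only when it is grounded in an approved source and clears a confidence threshold; otherwise it is queued for staff review. The threshold, the answer structure, and the approved‑source list are assumptions for illustration.

```python
# Sketch of a human-in-the-loop gate for a municipal chatbot:
# ungrounded or low-confidence drafts are routed to staff review.

from dataclasses import dataclass

@dataclass
class DraftAnswer:
    text: str
    confidence: float          # model- or verifier-reported score, 0..1
    cited_sources: list[str]   # sources from the approved retrieval corpus

APPROVED = {"nyc.gov/parking", "nyc.gov/benefits"}

def route(answer: DraftAnswer, threshold: float = 0.85) -> str:
    """Return 'send' only for grounded, high-confidence answers;
    everything else goes to 'human_review'."""
    grounded = bool(answer.cited_sources) and set(answer.cited_sources) <= APPROVED
    if grounded and answer.confidence >= threshold:
        return "send"
    return "human_review"

decision = route(
    DraftAnswer("Alternate side parking is suspended on legal holidays.",
                confidence=0.95, cited_sources=["nyc.gov/parking"])
)
```

Designing the gate to fail closed (default `"human_review"`) means a prompt injection or hallucination that strips citations or deflates confidence degrades to extra staff work, not a wrong answer to a resident.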
Conclusion: Practical next steps for New York City governments starting with AI in 2025
Practical next steps for New York City agencies in 2025 are tactical and incremental: start with a focused, high‑impact pilot that has clear success metrics and a cross‑functional team (an AI champion, legal, IT and operations) so risks are caught early - see Kanerika's step‑by‑step AI pilot guide for measurable pilot design and evaluation.
Parallel to pilot selection, check infrastructure and data readiness before moving to agentic automation - Kellton's AI Agents playbook explains how to match agent types to use cases, secure compute and harden controls.
Finally, lock procurement and governance basics into contracts (auditability, no‑training clauses for nonpublic data) and invest in pragmatic staff training so humans can write prompts, supervise outputs and handle exceptions; Nucamp's 15‑week AI Essentials for Work bootcamp teaches prompt design and job‑based AI skills to turn pilots into dependable services.
Taken together - small pilots, infrastructure checks, binding vendor controls, and targeted upskilling - this sequence turns policy into practice and helps avoid the vivid worst case of a municipal bot confidently mistelling residents' rights.
Next step | Resource |
---|---|
Design a controlled AI pilot with clear KPIs | Kanerika guide: How to Launch a Successful AI Pilot Project |
Assess agent & infrastructure readiness | Kellton playbook: AI Agents and Smart Business Automation |
Train staff in prompts & supervision | Nucamp AI Essentials for Work - 15‑week syllabus |
Frequently Asked Questions
What AI rules and regulations should New York City agencies follow in 2025?
NYC agencies must comply with a mix of federal directives (including EO actions and evolving OMB/NIST guidance), New York State procurement and ITS requirements (including Appendix C‑AI clauses), and local NYC policies such as the NYC AI Action Plan and Local Law 144. Key obligations include risk‑management practices, disclosure and audit requirements, vendor restrictions on using nonpublic city data for model training, MWBE/SDVOB procurement rules, and demands for transparency, bias testing, and remedial pathways for harmed residents.
What practical governance and procurement steps should city agencies take before deploying AI?
Agencies should adopt a lifecycle approach: publish an AI use‑case inventory; require AI FactSheets and enforceable contract clauses (no‑training with nonpublic data, auditability, monitoring, vendor remedies); appoint responsible‑AI officers; maintain a centralized risk registry; mandate third‑party bias/impact audits and continuous post‑market testing; and ensure human‑in‑the‑loop review for high‑risk outputs. Procurement checklists from ITS and Appendix C‑AI templates are recommended starting points.
What are the main operational risks of municipal AI and how can they be mitigated?
Primary risks include hallucinations (false or misleading outputs), bias, privacy/data exfiltration, prompt‑injection, and model misuse. Mitigations include using retrieval‑augmented generation or domain‑restricted sources, strict least‑privilege data access, offensive/red‑team testing and pentests, human review and escalation workflows, continuous monitoring and fact‑checking, enforceable vendor obligations for incident response, and pre‑deployment accuracy/bias testing.
How should NYC agencies build workforce capacity to operate and supervise AI safely?
Combine practical short courses, hands‑on bootcamps, and state programs: provide role‑based trainings (prompt design, oversight, plain‑language constituent communications), leverage free/low‑cost courses like InnovateUS Responsible AI modules, pursue grants and OSWD/ESD programs for retraining, and create on‑the‑job practice (simulations, supervised pilots). Programs such as Nucamp's 15‑week AI Essentials for Work help frontline staff learn prompt engineering and job‑based AI skills needed for accountable deployments.
What are recommended first steps for a controlled AI pilot in NYC government?
Start small with a high‑impact use case and clear KPIs; form a cross‑functional team (AI champion, legal, IT, operations); run predeployment tests for accuracy, bias, and security; confirm infrastructure and data readiness (compute, RAG sources, least‑privilege access); embed human‑in‑the‑loop review and post‑market monitoring; and lock vendor commitments into contracts (auditability, no‑training clauses). Iterate with measured evaluations before scaling citywide.
You may be interested in the following topics as well:
Updating municipal procurement rules for responsible AI is essential to protect public accountability and labor standards.
Boost efficiency with administrative automation for city staff that drafts emails, agendas, and meeting summaries.
Learn why federal guidance shaping local AI strategy matters for every New York City agency planning pilots today.
Ludo Fourrage
Founder and CEO
Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft where he led innovation in the learning space. As the Senior Director of Digital Learning at this same company, Ludo led the development of the first of its kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful Corporate MOOC programs in partnership with top business schools and consulting organizations, i.e. INSEAD, Wharton, London Business School, and Accenture, to name a few. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.