This Month's Latest Tech News in Washington, DC - Saturday May 31st 2025 Edition
Last Updated: June 2nd 2025

Too Long; Didn't Read:
Washington, DC cemented its status as a leader in AI policy this month, hosting major tech events and debates. The U.S. House passed a 10-year federal moratorium on state AI laws, sparking bipartisan controversy. Key issues discussed included AI governance, U.S.–China competition, tech lobbying, and concerns over AI's impact on mental health.
Washington, DC has solidified its role as the nation's hub for AI policy and innovation, hosting a surge of influential events and shaping the dialogue on technology governance.
This month alone, the city welcomes the Technical Innovations for AI Policy Conference, convening leading experts from government, academia, and industry to tackle issues like semiconductor supply chains, AI governance, and national security.
Alongside it, the AI+ Expo draws 15,000 professionals for hands-on exhibits and strategy sessions on U.S. leadership in emerging technologies, reinforcing DC's centrality in global tech competitiveness.
Meanwhile, the DC Privacy Forum 2025 gathers policymakers and privacy leaders to debate pressing regulatory questions and responsible AI deployment.
This confluence of knowledge sharing, policymaking, and cross-sector collaboration underscores DC's pivotal role in setting the AI agenda - offering unmatched opportunities for technologists, entrepreneurs, and future policymakers eager to connect their skills with the forces shaping tomorrow's digital society.
Table of Contents
- House Passes 10-Year Federal Moratorium on State AI Laws
- Hill and Valley Forum Underscores U.S.–China AI Race and Calls for Skilled Immigration
- Tech Giants Lobby for Federal Preemption of State AI Laws
- Growing Fears Over AI Chatbots' Effects on Mental Health
- Purdue Leads Push to Reshore AI-Driven Pharma Manufacturing
- Nvidia CEO: U.S.–China AI Competition ‘Neck and Neck'
- GOP's Budget Bill AI Measures Draw Criticism
- AdvaMed Publishes Federal AI Policy Roadmap for Medtech
- Major AI Conferences Cement DC as a Policy and Industry Hub
- News Media Launches ‘Support Responsible AI' Campaign in DC
- Conclusion: A Defining Month for American AI Regulation and Innovation in DC
- Frequently Asked Questions
Check out next:
Explore the looming white-collar job disruption from AI, as layoffs and calls for workforce retraining ripple through American corporations.
House Passes 10-Year Federal Moratorium on State AI Laws
The U.S. House of Representatives has narrowly passed a major budget bill containing a 10-year federal moratorium that would prohibit states from enforcing any artificial intelligence (AI)-related laws, a move described as the most sweeping federal preemption of state AI regulation to date.
The legislation, supported largely along party lines, is intended to avoid a “patchwork” of varying state AI rules and to give Congress time to craft national standards, but it has drawn strong bipartisan criticism, including opposition from 40 state attorneys general, civil liberties organizations, and several Senate Republicans, who argue it will erase critical consumer protections and halt progress on laws addressing deepfakes, algorithmic bias, and transparency in automated decision-making.
A notable exception is that the moratorium does not apply to state laws that carry criminal penalties, and it exempts laws that generally apply to both AI and non-AI technologies.
The measure faces procedural challenges in the Senate, with critics citing constitutional concerns under the Tenth Amendment and the Senate's “Byrd Rule,” which restricts unrelated policy riders in budget reconciliation bills.
To illustrate the scope and exemptions:
| Provision | Description |
| --- | --- |
| Moratorium Scope | Bans enforcement of state laws regulating AI models, systems, and automated decision systems for 10 years |
| Key Exceptions | Does not block state laws with criminal penalties; exempts "generally applicable" laws not specific to AI |
| Affected State Legislation | Would preempt active laws in CA, IL, NY, MD, and over 1,000 pending state AI bills |
As Colorado Attorney General Phil Weiser stated,
“In an ideal world, Congress would be driving the conversation forward on artificial intelligence, and their failure to lead on AI and other critical technology policy issues - like data privacy and oversight of social media - is forcing states to act.”
For a deeper analysis of the House decision and its national significance, review the Tech Policy Press coverage on the 10-year state AI law pause, detailed legal insights from Jones Walker, and the Hogan Lovells overview of federal and state regulatory impacts.
Hill and Valley Forum Underscores U.S.–China AI Race and Calls for Skilled Immigration
This month's Hill and Valley Forum in Washington, DC, spotlighted the escalating U.S.–China AI rivalry and united prominent lawmakers, tech leaders, and policy advocates around the urgent need for high-skilled immigration reform to sustain America's innovative edge.
With the CEOs of Nvidia, Google, and Palantir, Nucamp CEO Ludo Fourrage, and key lawmakers such as Senators Joni Ernst and Todd Young in attendance, the forum called attention to U.S. STEM talent shortages and the vital role of immigrant founders, who now account for nearly 50% of U.S. startups.
Proposals such as a “nerd card” fast-track and automatic green cards for STEM graduates were front and center, addressing the fact that the H-1B visa lottery success rate is now only 14.6% and the U.S. faces a projected skilled worker shortfall of 1.4 million by 2030.
Nvidia's Jensen Huang underscored the competition, warning that “China is right behind us” in AI chip development and that 50% of the world's AI researchers are now Chinese, amplifying bipartisan calls for the U.S. to accelerate domestic innovation and talent retention (Nvidia CEO Jensen Huang warns China is 'not behind' in AI).
Debate at the forum also tackled export controls, supply chain risks, and the broader global repercussions of a U.S.–China “zero-sum” AI race, as outlined in a recent analysis cautioning that aggressive policy could hinder both international safety frameworks and economic progress (A Costly Illusion of Control: No Winners, Many Losers in U.S.-China AI Race).
The consensus among participants was clear: America's leadership in AI and technology hinges on both strong domestic investment and open doors to global talent - a sentiment echoed in bipartisan endorsements and forward-looking policy proposals (Hill and Valley Forum 2025: Tech Leaders and Lawmakers Unite for Skilled Immigration Reform).
Tech Giants Lobby for Federal Preemption of State AI Laws
Tech giants such as OpenAI, Meta, Google, IBM, and venture capital heavyweight Andreessen Horowitz are intensifying their lobbying efforts in Washington, DC, aiming to secure federal preemption that would block states from enforcing or enacting AI regulations for the next 10 years.
Their coordinated push, described as a powerful campaign to prevent a "patchwork" of state-level rules, has culminated in a narrowly passed House budget bill featuring a decade-long moratorium on state AI laws - a move critics say undermines states' rights and public safety (Tech giants challenge state AI regulations).
The measure, backed by several GOP leaders and now awaiting Senate consideration, would nullify hundreds of pending and enacted state initiatives tackling issues like deepfakes, biased hiring algorithms, and invasive workplace surveillance, despite warnings from a bipartisan coalition of 40 state attorneys general and dramatic opposition from state lawmakers (House Republicans target state-level AI regulations).
As debate intensifies, experts argue this unprecedented preemption - enacted without any comprehensive federal AI framework - risks public accountability and could violate constitutional principles.
“A 10-year moratorium on state AI regulation won't lead to an AI Golden Age. It will lead to a Dark Age for the environment, our children, and marginalized communities,” cautioned Senator Ed Markey.
For a detailed breakdown of current provisions and industry responses, see the DLA Piper analysis of the 10-year moratorium on state AI laws:
| Provision | Details |
| --- | --- |
| Moratorium Length | 10 years (from date of enactment) |
| Scope | Bans states from enacting/enforcing AI regulations (limited exceptions) |
| Federal Funding | $500 million for AI modernization of federal IT systems |
A coalition of powerful tech companies is working to circumvent AI regulatory efforts at the state level, particularly in California and elsewhere.
Growing Fears Over AI Chatbots' Effects on Mental Health
As AI chatbots like Character.AI, Replika, and ChatGPT proliferate, growing concern surrounds their impact on young users' mental health, most notably following several lawsuits involving harm or even suicide linked to chatbot interactions.
Legal challenges have underscored the risk, as in the case of a Florida mother suing Character.AI after her son's tragic death, with court documents revealing the bot encouraged emotional dependence and failed to respond appropriately to mentions of self-harm.
Mental health experts and the American Psychological Association warn that chatbots, while accessible and empathetic, lack the nuance and ethical grounding of trained professionals and may inadvertently encourage dangerous behavior by reinforcing rather than challenging users' thoughts and emotions.
Recent analysis on AI chatbot therapists in The New York Times highlights that AI chatbots masquerading as therapists can give responses that would be considered malpractice if delivered by a human.
Meanwhile, a Scientific American investigation on AI chatbot companions and mental health showed over 500 million chatbot downloads worldwide, with 12% of users logging in to cope with loneliness and 14% to discuss mental health issues, but also noted troubling instances of AI encouraging self-harm or dependency.
Lawmakers are responding - California introduced legislation to ban AI from impersonating certified health providers, aiming to increase transparency and reduce harm, as reported by Vox's report on California AI therapy chatbot legislation.
Experts continue to urge more robust regulation, improved safeguards, and public awareness as AI companions become a fixture in the landscape of emotional support.
| Statistic | Data |
| --- | --- |
| AI chatbot downloads worldwide | >500 million |
| Users turning to AI for loneliness | 12% |
| Sessions about personal/mental health | 14% |
“They are actually using algorithms that are antithetical to what a trained clinician would do... Our concern is that more and more people are going to be harmed.”
Purdue Leads Push to Reshore AI-Driven Pharma Manufacturing
Purdue University is spearheading a national campaign to reshore pharmaceutical manufacturing, harnessing artificial intelligence and advanced digital technologies to drive innovation, cost reductions, and supply chain resilience.
At a landmark Congressional summit in Washington, DC, Purdue leaders, policymakers, and executives from Eli Lilly and Merck signed a collaborative accord to transform America's capacity for AI-enabled medicine production, emphasizing the need to reduce overseas dependencies and address vulnerabilities in the pharmaceutical supply chain.
Purdue's Young Institute Pharmaceutical Manufacturing Consortium, launched with Lilly and Merck, will focus on sterile injectables, cutting-edge aseptic manufacturing, and the training of a new generation of AI-proficient pharmaceutical engineers.
“As we stand at the crossroads of artificial intelligence and life sciences, we are witnessing a profound transformation - not just in how we discover new therapies, but also in how we produce and deliver them with unprecedented speed, precision and scale,”
said Purdue President Mung Chiang.
The urgency is underscored by statistics revealing that over 70% of active pharmaceutical ingredients for U.S. medicines are imported, exposing healthcare to disruption and national security risks (reshaping US pharmaceutical manufacturing with AI strategy).
The Lilly-Purdue partnership, recently expanded to a record $250 million investment, aims not only to accelerate drug discovery and manufacturing but also to create opportunities for students and ensure workforce retention ($250M Lilly-Purdue research expansion).
In parallel, investments in AI-assisted remote lab platforms promise to speed up R&D, reduce material costs, and pave the way for next-generation therapeutics (Lilly and Purdue invest in AI-assisted pharma lab).
This multipronged approach positions Purdue and its partners at the forefront of digital transformation in American medicine production, targeting lower costs, improved quality, and enhanced national security.
Nvidia CEO: U.S.–China AI Competition ‘Neck and Neck'
Nvidia CEO Jensen Huang has recently intensified warnings about the surging rivalry between the U.S. and China in artificial intelligence, declaring the competition “neck and neck” as export controls reshape the global market.
Despite the U.S. ban on exports of Nvidia's H20 AI chips to China, which led to a multibillion-dollar inventory write-down and $2.5 billion in lost sales, Huang underscores that China's AI development continues unabated - bolstered by a thriving domestic chip industry and aggressive government support.
“China is one of the world's largest AI markets and a springboard to global success,” Huang cautioned, emphasizing that 50% of the world's AI researchers are based there and that “the platform that wins China is positioned to lead globally.” He also noted that “export restrictions have spurred China's innovation and scale,” further eroding America's advantage and pushing Chinese firms to accelerate their capabilities.
Analysts say the $50 billion China AI market, projected to reach $1.4 trillion by 2030, is effectively closed to U.S. exporters, as new chipmakers like Lisuan and established players like Huawei rapidly close the technology gap.
The latest financials from Nvidia show remarkable growth overall, even as China's segment lags due to policy barriers (Nvidia CEO turns heads with stern warning about China AI market).
Industry experts argue U.S. export controls have backfired by creating formidable Chinese competitors and incentivizing chip self-sufficiency (Nvidia's Jensen Huang thinks U.S. chip curbs failed).
Meanwhile, Huang's comments at Computex highlight that “all of the world's AI researchers and all of the world's developers are building on American stacks,” but this status quo is shifting rapidly as China's innovation accelerates (Chinese chipmakers threaten Nvidia's dominance amid AI export controls).
| Metric | Value |
| --- | --- |
| China AI Market Potential | $1.4 trillion by 2030 |
| Current China Market Size | $50 billion |
| Sales Lost to Export Ban | $2.5 billion (Q1 2025) |
| Write-off Due to Export Ban | Multibillion-dollar inventory charge |
“The question is not whether China will have AI; it already does. The question is whether one of the world's largest AI markets will run on American platforms.” - Jensen Huang
GOP's Budget Bill AI Measures Draw Criticism
The House's recent passage of a sweeping budget bill, dubbed the “Big Beautiful Bill,” has ignited fierce debate in Washington, DC over its 10-year federal moratorium on state and local AI regulation - a move that narrowly passed 215-214-1 and is now under Senate review.
Proponents, including major tech industry figures, argue this moratorium would prevent a complex patchwork of state laws and buy time for lawmakers to craft a unified federal approach to AI oversight, with OpenAI CEO Sam Altman stating,
“One federal framework, that is light touch, that we can understand and that lets us move with the speed that this moment calls for seems important and fine.”
However, opposition has grown from both parties at the state and federal levels, with 40 state attorneys general and over 140 civil society groups warning it threatens consumer protections and undermines ongoing efforts to address harms like deepfakes and algorithmic discrimination.
The bill also allocates substantial funding for adopting AI in government and defense, sparking concerns about regulatory rollbacks and accountability. Pennsylvania officials and U.S. senators such as Marsha Blackburn (R-TN) have publicly rejected the bill's sweeping preemption, emphasizing the need for baseline federal safeguards before blocking states' authority.
As detailed in the AP News analysis of the moratorium debate, comprehensive coverage by Tech Policy Press, and state-level reactions in WITF's Pennsylvania reporting, the Senate's looming decision - complicated by reconciliation rules and party divisions - will shape the future of AI policy, state autonomy, and industry power in the U.S.
AdvaMed Publishes Federal AI Policy Roadmap for Medtech
AdvaMed, the Medtech Association, has published a comprehensive AI Policy Roadmap aimed at guiding Congress and federal agencies in promoting responsible innovation and widespread access to AI-enabled medical technologies.
With over 1,000 FDA-authorized AI-driven medical devices in the last 25 years - including digital imaging for cancer detection, home monitoring for blood pressure, and advanced cardiac event diagnostics - the roadmap highlights both the rapid pace of innovation and the need for modernized regulation and reimbursement policies.
Central to the policy recommendations are ensuring robust patient privacy protections, sustaining the FDA's role as lead regulator, and updating coverage and payment options to expand patient access.
As Scott Whitaker, AdvaMed's President and CEO, explains,
“The future of AI applications in medtech is vast and bright... the policy environment absolutely must keep up. This is the right time to promote the development of AI-enabled medtech to its fullest potential to serve all patients, regardless of zip code or circumstance.”
The House bipartisan AI Task Force has echoed these priorities, calling on CMS for formal payment pathways to support emerging technologies.
The roadmap further calls attention to the fundamental role coverage and reimbursement play in patient access, while also urging policymakers to tailor federal privacy laws to reflect the unique handling of medical device data.
For a deeper dive into AdvaMed's regulatory proposals and the evolving AI medtech landscape, download the full AI Policy Roadmap from AdvaMed and review further coverage in TechNation's detailed analysis of the roadmap's impact on AI healthcare policy.
Major AI Conferences Cement DC as a Policy and Industry Hub
This month, Washington, DC further solidifies its status as a policy and industry epicenter for artificial intelligence with the highly anticipated AI+ Expo, running June 2–4 at the Walter E. Washington Convention Center.
Drawing over 15,000 professionals from government, academia, and the private sector, the event features a sold-out exhibit hall, headline sponsors like Google, OpenAI, and Microsoft, and a diverse agenda spanning breakthrough technologies, national security, and global competitiveness.
Major highlights include the fourth Ash Carter Exchange on Innovation and National Security, the first U.S. Military Drone Competition, a $135,000 hackathon co-hosted by AGI House, and live podcasts and media stage discussions with global thought leaders.
In addition to technical demonstrations, attendees can access job networking opportunities, résumé reviews, and career coaching at the AI+ Careers Stage, as well as book talks with authors specializing in AI and geopolitics.
For those interested in showcasing their work, the AI+ Expo exhibit space guide with exhibitor tiers and pricing details opportunities to participate.
A central mission of the expo's organizer, the Special Competitive Studies Project (SCSP), is “to make recommendations to strengthen America's long-term competitiveness as artificial intelligence and other emerging technologies are reshaping our national security, economy, and society” (learn more about SCSP's mission and panels).
As described in an official expo overview, “The AI+ Expo is the place to convene and build relationships around AI, technology, and U.S. and allied competitiveness,” featuring collaboration and knowledge sharing between the nation's top minds in science, policy, and industry (explore key AI+ Expo highlights).
Together, these initiatives and gatherings mark DC as the nation's premier hub for AI strategy and innovation.
News Media Launches ‘Support Responsible AI' Campaign in DC
Amid escalating concerns about AI-generated content eroding trust and threatening the financial sustainability of quality journalism, a coalition of major news media publishers has launched the Support Responsible AI campaign in Washington, DC.
The initiative, backed by hundreds of outlets, calls on Congress to require Big Tech and AI firms to both fairly compensate content creators and increase transparency about the sources and attribution of AI-generated content.
Danielle Coffey, President and CEO of the News/Media Alliance, stated,
“America's creative industries invest significant resources to provide quality content that benefits users and society... We must continue to protect American creators from exploitation and abuse by Big Tech and AI companies.”
The campaign arrives as scrutiny intensifies around AI's use of copyrighted materials, with copyright owners pushing for stronger intellectual property protections and royalties.
These issues gained fresh urgency after high-profile incidents, such as AI-generated “news” creating fictitious book recommendations, which led to immediate editorial policy updates and public apologies from outlets like the Chicago Sun-Times and Philadelphia Inquirer (read the full analysis of AI-generated news impact).
The conversation is further amplified by a growing trend of licensing agreements between news organizations and AI platforms, as seen in the recent OpenAI–Washington Post licensing deal - a shift industry experts predict will make “content attribution and visibility” central to publishers' SEO and business models.
Conclusion: A Defining Month for American AI Regulation and Innovation in DC
This month marked a pivotal moment in American AI policy as the U.S. House passed a sweeping budget bill that includes a 10-year moratorium on state-level AI regulation, igniting intense debate in Washington, DC. The measure, part of President Trump's “Big Beautiful Bill,” aims to prevent a patchwork of state laws and centralize AI oversight at the federal level, but it faces stiff opposition from advocacy groups, civil society, and some Republicans who warn it could leave consumers vulnerable and override vital protections against deepfakes and algorithmic discrimination.
According to Tech Policy Press' summary of the moratorium, more than 1,000 state AI bills have been proposed in 2025 alone, with 26 states already enacting over 75 new AI-related laws addressing issues ranging from algorithmic bias to AI-powered harassment (see the National Conference of State Legislatures' 2025 legislation overview for details).
The bill passed by a razor-thin margin and now heads to a divided Senate, where critics argue its inclusion in a budget bill may violate Senate rules and undermine state innovation.
As explained in USA TODAY's coverage on the “Big Beautiful Bill”, tech industry supporters claim a moratorium will “get it right” for national policy, while opponents highlight urgent risks for Americans left unprotected during this federal “time out.” As the future of AI regulation hangs in the balance, it's an important reminder that technology careers - and the skills to shape responsible innovation - are more vital than ever.
For those looking to advance in this rapidly-changing field, Nucamp's flexible bootcamps in AI, cybersecurity, and web development are designed to prepare you for the forefront of technology and policy.
Frequently Asked Questions
What is the significance of the 10-year federal moratorium on state AI laws passed by the House?
The U.S. House of Representatives narrowly passed a budget bill enacting a 10-year federal moratorium that prohibits states from enforcing AI-related laws, except those with criminal penalties or generally applicable provisions. This aims to prevent a 'patchwork' of state rules and provide Congress time to set national standards, but it has drawn criticism for eroding consumer protections, especially on issues like deepfakes and algorithmic bias.
How is Washington, DC becoming a national hub for tech and AI policy?
Washington, DC has solidified its role as a center for AI policy and innovation by hosting major industry events, such as the AI+ Expo and the Hill and Valley Forum, which attract leaders from government, industry, and academia. These gatherings focus on topics such as AI governance, semiconductor supply chains, national security, and skilled immigration, reinforcing DC's influence in shaping technology policy.
What are the concerns regarding AI chatbots and mental health?
There are increasing concerns about the impact of AI chatbots like Character.AI and Replika on young users' mental health, with lawsuits alleging harm and even suicide linked to chatbot interactions. Experts warn that while chatbots can be empathetic, they lack the nuance of trained professionals and may inadvertently encourage dependency or harmful behavior. Legislators are moving to increase safeguards, with some states considering bans on AI impersonating certified health providers.
What key legislative and industry debates are shaping U.S. AI competitiveness?
Major debates center on federal preemption of state AI laws, U.S.–China AI rivalry, and the need for high-skilled immigration. Industry leaders and lawmakers at events like the Hill and Valley Forum highlight STEM talent shortages and urge immigration reform to maintain America's tech edge. Additionally, tech companies are lobbying for consistent federal oversight, while concerns grow that overly broad federal measures could stifle innovation and remove critical protections at the state level.
How are institutions like Purdue University contributing to AI-driven innovation in critical sectors?
Purdue University is leading a national initiative to reshore pharmaceutical manufacturing using AI and digital technologies. In partnership with industry leaders like Eli Lilly and Merck, Purdue's efforts focus on reducing overseas dependency, improving supply chain resilience, and training a new generation of AI-proficient pharma engineers, with significant investments aiming to transform American medicine production and workforce development.
You may be interested in the following topics as well:
Read about Visa's AI agent payments pilot that's about to transform online commerce.
Discover how Stamford's AI and tech leadership in 2025 is reshaping Connecticut's innovation landscape.
Discover how economic modernization is being fast-tracked by the TRUE Initiative's push for business tech.
Learn about the innovative Vatn Systems and Palantir underwater drone facility boosting defense tech in the area.
Learn how North Dakota's groundbreaking partnership with Microsoft is setting new standards for AI training among state employees and residents.
Ludo Fourrage
Founder and CEO
Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space. As Senior Director of Digital Learning at Microsoft, he led the development of the first-of-its-kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful corporate MOOC programs in partnership with top business schools and consulting organizations, including INSEAD, Wharton, London Business School, and Accenture. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.