This Month's Latest Tech News in Marysville, WA - Saturday May 31st 2025 Edition

By Ludo Fourrage

Last Updated: June 1st 2025

Marysville cityscape with digital AI icons overlay, symbolizing technology and regulation.

Too Long; Didn't Read:

The May 31, 2025, tech news roundup for Marysville, WA spotlights federal efforts to impose a 10-year moratorium on state AI regulation, facing bipartisan resistance and warnings from Washington officials. Washington ranks fifth nationally in AI startups and leads in passing deepfake and AI election laws, with $4.5 billion invested since 2013.

The national debate over AI regulation is intensifying, with Marysville and Washington State at the center of critical deliberations following the U.S. House's narrow passage of a bill proposing a 10-year moratorium on state-level AI laws - a measure aimed at combatting a "confusing patchwork" of regulations and fostering national AI innovation (Tech Policy Press: US House Passes 10-Year Moratorium on State AI Laws).

However, the proposal, part of the broader “One Big Beautiful Bill,” faces strong bipartisan resistance on grounds of federal overreach and risks to consumer safety.

Washington Attorney General Nick Brown labeled the ban “dangerous,” cautioning,

“At the pace technology and AI moves, limiting state laws and regulations for 10 years is dangerous. If the federal government is taking a back seat on AI, they should not prohibit states from protecting our citizens.”

State opposition also reflects deep concerns about leaving issues like deepfakes, discrimination, and data privacy unaddressed, as highlighted by the KIRO7 coverage of state AGs' statements.

As the proposed moratorium advances to the Senate - and with Marysville's educational and civic communities, such as Maryville College's recent AI summit, actively engaging the topic - Washington's unique approach, including its AI Task Force, positions the state as a front-runner in shaping responsible AI policy (WA Attorney General's Office: Artificial Intelligence Task Force).

Table of Contents

  • Washington AG Opposes Federal Preemption of AI Regulation
  • U.S. Senate Pushes Back on Blanket AI Regulation Ban
  • Bipartisan Bill Targets AI-Generated Revenge Porn
  • Federal vs. State Control: Core Debate in AI Policy
  • Washington Leads States in Combatting Election Deepfakes
  • Tech Giants Urge Uniform National Law for AI
  • Senator Ted Cruz and Executives Suggest AI ‘Learning Period'
  • California's Failed AI Safety Bill Shows Barriers to Reform
  • Bipartisan AG Coalition Warns Against Federal Overreach
  • States Warn of the Dangers of Federal Overreach in AI Policy
  • Looking Forward: Marysville and the Ongoing AI Policy Conversation
  • Frequently Asked Questions

Washington AG Opposes Federal Preemption of AI Regulation


Washington State Attorney General Nick Brown has taken a firm stand against a proposed ten-year federal ban that would prevent states from regulating artificial intelligence, warning that such a move would be “dangerous” given the rapid pace of AI innovation.

Brown, joined by a bipartisan coalition of more than 35 state attorneys general, sent a letter to Congress arguing the amendment would strip essential consumer protections and leave critical issues like election security, exploitation, and algorithmic bias unaddressed at the local level.

As he states,

“At the pace technology and AI moves, limiting state laws and regulations for 10 years is dangerous. If the federal government is taking a back seat on AI, they should not prohibit states from protecting our citizens.”

This opposition comes as Washington continues proactive efforts, including the formation of an AI Task Force to examine ethical guidelines and safeguard rights.

The proposed federal provision contrasts sharply with recent state actions, especially as half of U.S. states have already enacted laws to counter AI-driven election misinformation.

To explore Brown's position and the bipartisan response, view the official news release from the Washington Attorney General's Office.

Context on the nationwide debate and industry perspectives, including contrasting calls for a single, light-touch federal framework, can be found at KIRO7's coverage of the AI regulation ban and Yahoo News' detailed report on the evolving regulatory landscape.


U.S. Senate Pushes Back on Blanket AI Regulation Ban


The U.S. Senate is actively deliberating a contentious 10-year moratorium that would block state-level regulation of artificial intelligence, following the House's razor-thin approval of a sweeping budget bill.

Proponents argue the moratorium - embedded in the Artificial Intelligence and Information Technology Modernization Initiative - would prevent a chaotic patchwork of over 1,000 proposed state AI bills, granting Congress time to craft comprehensive federal rules and reinforcing U.S. dominance in AI innovation.

Opponents, including a bipartisan coalition of state lawmakers and attorneys general, warn that halting local protections endangers public safety, undermines states' ability to respond to emerging harms like deepfakes and algorithmic discrimination, and delays crucial safeguards for sectors such as healthcare.

Senate debate is further complicated by the "Byrd Rule," which restricts unrelated policy measures in budget reconciliation bills, raising doubts about the moratorium's viability in this legislative vehicle.

As highlighted in Tech Policy Press's analysis of the Senate's procedural hurdles and political negotiations, “Some Republican senators, such as Marsha Blackburn (TN) and Josh Hawley (MO), have expressed reservations, particularly about protecting existing state laws like the ELVIS Act in Tennessee which targets AI deep fakes.”

The moratorium's potential reach is substantial, touching nearly all current and proposed state AI laws and prompting lawmakers such as Pennsylvania's Sen. Pennycuick to advocate for its removal in order to preserve state innovation and consumer safeguards, as seen in her detailed letter to Congress.

If Senate roadblocks persist, sponsors, including Senator Ted Cruz, have signaled readiness to introduce standalone AI legislation, ensuring the national debate over federal versus state authority in AI regulation will continue.

For an in-depth look at the House's proposal and the industry and government reactions, see this summary from the National Law Review.

Bipartisan Bill Targets AI-Generated Revenge Porn


This month, the federal government took decisive action against the proliferation of AI-generated "revenge porn" with President Trump signing the bipartisan Take It Down Act into law, marking the first national regulation specifically targeting non-consensual intimate imagery, including deepfake content.

The law - championed by Senators Ted Cruz and Amy Klobuchar and supported by First Lady Melania Trump - makes it a federal crime to share explicit images without consent, whether real or digitally manipulated, and requires online platforms to remove flagged content within 48 hours at the request of victims.

As detailed in The Guardian's coverage of the Take It Down Act, platforms must also delete duplicates, and offenders face criminal penalties of up to three years' imprisonment.

Over 120 advocacy and law enforcement organizations supported the law, which passed Congress with near-unanimous approval, as reported by the U.S. Senate Commerce Committee.

The Act narrowly defines covered materials to minimize risks to lawful speech and instructs the FTC to oversee enforcement. However, digital rights groups such as the Electronic Frontier Foundation voice concerns about potential overreach, citing the rapid takedown timeframe and risks of automated filters erroneously flagging legal content, as noted in an insightful explainer from The 19th.

As implementation begins, new federal guidance and resources - such as the Cyber Civil Rights Initiative and the National Center for Missing and Exploited Children - are poised to help victims assert their rights under the new law, while national debate continues on how best to balance privacy, free expression, and digital safety.


Federal vs. State Control: Core Debate in AI Policy


The battle over who should control AI policy in the U.S. reached a boiling point this month as the House narrowly passed the “One Big Beautiful Bill,” which includes a 10-year moratorium on state-level AI regulation, fueling intense debate over federal versus state authority.

Supporters contend a national approach would end the “patchwork” of state AI laws - over a thousand bills proposed in 2025 alone - enabling modernization, innovation, and standardized protections as agencies invest over $500 million in federal IT upgrades.

However, opponents warn the moratorium would “grant tech companies immunity” and undermine vital civil protections; states would lose power to enforce privacy, civil rights, and consumer safety laws in areas like deepfakes, discrimination, school protections, biometric privacy, and more.

As Tech Policy Press explains, the bill's exceptions are so narrowly construed that

“no state law regulating any meaningful use of a computer… may be enforced unless certain narrow exceptions apply.”

The moratorium's fate now hinges on the Senate, where some Republicans and Democrats object, with Senator Marsha Blackburn (R-TN) urging,

“Until we pass something that is federally preemptive, we can't call for a moratorium.”

Tech Policy Press and Hogan Lovells offer detailed breakdowns of the legislative dynamics and potential consequences, as stakeholders across the country anxiously await whether Congress will centralize AI control or preserve state-level innovation and protections.

Washington Leads States in Combatting Election Deepfakes


Washington is at the forefront of efforts to counter election-related deepfakes, joining 25 other states in passing laws designed to protect voters and democratic integrity from the risks posed by AI-generated synthetic media.

As detailed in Ballotpedia's analysis, Washington is one of 26 states with laws addressing deepfakes in political communications and one of 41 states targeting the creation or distribution of explicit deepfake imagery, ensuring broad protections spanning both electoral and personal harm contexts.

The surge in state-level action is a response to the lack of comprehensive federal legislation, with bipartisan momentum fueling the rapid adoption of new measures nationwide - since 2024 alone, 20 additional states have enacted deepfake election laws, compared with only 5 states before then.

A recent Public Citizen analysis on election deepfake regulations underscores the rapid growth in these legal safeguards.

For a closer look at the evolving patchwork of policies and ongoing legislative trends, visit the State Deepfake Legislation Tracker, which documents more than 100 bills considered in 2025 alone.

As deepfake technologies become increasingly sophisticated, these laws often require AI-generated political ads to carry clear disclaimers and set time-sensitive restrictions before elections, balancing free speech with voter protection.

Ballotpedia highlights the policy landscape as of May 2025 in the table below:

Category | Washington | U.S. State Totals (May 2025)
Political Deepfake Laws | Enacted (since 2019; covers election communications) | 26 states
Sexual Deepfake Laws | Enacted (prohibits nonconsensual/child sexual deepfakes) | 41 states
Recent Deepfake-Related Bills (2024 only) | Active session - bills pending | 47 bills enacted nationwide

“As deepfake technology gets more realistic by the day, the potential for a deepfake to go viral and sow widespread chaos ahead of an election only grows. Thankfully, half of U.S. states now have protections to make that nightmare less likely - the remaining half of states - and Congress - should follow suit.”

Learn more about the details and ongoing updates to Washington's legal framework in Ballotpedia's overview of AI deepfake policy in Washington.


Tech Giants Urge Uniform National Law for AI


Major tech companies are urging Congress to adopt a unified, federal approach to artificial intelligence regulation, warning that the proliferation of nearly 900 AI-related bills across 48 states risks creating a patchwork of conflicting rules that could impede innovation and business compliance.

At a recent U.S. Senate hearing, executives from OpenAI, Microsoft, AMD, and CoreWeave advocated for “light-touch” federal regulations that support American competitiveness while facilitating expansion of AI infrastructure, workforce training, and responsible use.

The push for federal preemption is led by major players like OpenAI, Google, and Microsoft, who argue that national standards would provide clarity and simplify compliance for businesses operating in multiple jurisdictions.

Microsoft estimates more than $80 billion will be invested in U.S. AI infrastructure this year alone, reinforcing the sector's economic significance. Opponents caution that preempting state laws could erode consumer protections and slow much-needed transparency reforms, but the tech sector maintains that streamlined regulation will accelerate American leadership globally.

For more details, see Big Tech Calls for National AI Regulation to Stop Patchwork of State Laws, read Brad Smith's Senate testimony in Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation, and explore industry calls for policy unity in Tech Leaders Urge Congress for ‘Light-Touch' AI Regulations.

Senator Ted Cruz and Executives Suggest AI ‘Learning Period'


Senator Ted Cruz and several top tech executives are urging Congress to consider a 10-year federal "learning period" - effectively a moratorium on state and local AI regulations - to promote U.S. innovation while lawmakers craft comprehensive federal policy.

Modeled after the internet tax moratorium of the late 1990s, Cruz's proposal, introduced during recent Senate hearings, has garnered backing from industry leaders such as OpenAI CEO Sam Altman and Microsoft President Brad Smith, who warn that a patchwork of state rules could stifle growth and set back America's global AI leadership.

The House narrowly passed a budget bill embedding this moratorium, but controversy swirls as its broad preemption of state authority would invalidate hundreds of existing and pending AI laws, especially in states like California that lead in privacy and consumer protection efforts.

Cruz argues this approach, coupled with a forthcoming “regulatory sandbox” bill, would enable rapid adoption and expansion of AI technologies, keeping pace with global competitors such as China.

As debate intensifies in the Senate - with some lawmakers worried about lost consumer safeguards and others about Senate procedural hurdles - Congress faces a pivotal decision whose outcome could define the balance between innovation and regulation for years to come.

For further details, see the Associated Press's in-depth report on the latest AI regulation learning period debate, DLA Piper's expert summary of the ten-year AI moratorium proposal, and Senator Cruz's remarks on why light-touch AI regulation is pivotal to winning the global AI race.

California's Failed AI Safety Bill Shows Barriers to Reform


The recent veto of California's SB 1047 by Governor Gavin Newsom highlights the significant hurdles to enacting comprehensive AI safety reforms, underscoring a central tension between fostering innovation and imposing risk-based accountability.

While the bill aimed to regulate large-scale, high-cost AI models with strict measures such as mandatory shutdown features and safety protocols, Newsom argued,

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data… I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Instead, the state is moving forward with a suite of targeted laws focused on transparency, election security, and AI misuse, and has launched a high-profile advisory group of industry and academic leaders to propose more nuanced solutions.

As detailed in a sector analysis, California signed 17 AI-related bills in the same legislative cycle and remains home to over 60% of the world's top AI firms.

The table below highlights some of these new AI laws:

Bill | Summary | Status
AB 2013 | Requires AI developers to disclose training data on their websites | Signed
SB 896 | Mandates risk analysis for GenAI threats to infrastructure | Signed
SB 1381 | Expands child pornography statutes to cover AI-generated material | Signed

This mixed regulatory approach positions California as a bellwether in AI governance even after the SB 1047 setback, with ongoing expert recommendations pushing for empirical, risk-based oversight and transparency.

For an in-depth review, see the analysis of SB 1047 and its implications, Governor Newsom's announcement on new AI initiatives, and a summary of California's ongoing AI policy efforts.

Bipartisan AG Coalition Warns Against Federal Overreach


A coalition of 40 bipartisan state attorneys general has raised serious concerns over a newly passed U.S. House provision imposing a decade-long moratorium on state AI regulations, warning that the move would expose consumers to unchecked risks as federal guardrails remain absent.

In a letter led by Colorado Attorney General Phil Weiser and backed by AGs from across the country, state leaders argued that the proposed moratorium strips crucial state power to tackle AI's immediate dangers - such as explicit material, election interference, and deception - despite Congress's lagging progress on meaningful AI oversight.

As summarized in coverage of the bipartisan letter, Weiser stated,

“To enact a 10-year ban on state action just as we are beginning to grasp AI's potential benefits and harms would be a huge mistake.”

The House amendment targets the enforcement of state laws on AI models, systems, and automated decisions, and, according to analysts, would preempt over 1,000 pending state bills and existing regulations in states like California, New York, and Colorado.

Key opposition groups, including Americans for Responsible Innovation and over 140 organizations, have joined AGs in urging Congress to reject what they call a “sweeping and wholly destructive” preemption.

The Senate is expected to review the moratorium amid mounting concerns about both the constitutionality and the consumer protection implications of leaving AI development unregulated at the state level.

For further details, see the official bipartisan attorney general letter to Congress on AI regulations, analysis of the moratorium's federal implications by Hogan Lovells, and a summary of opposition arguments from Americans for Responsible Innovation on state AI law preemption.

States Warn of the Dangers of Federal Overreach in AI Policy


States across the U.S. are uniting to warn of the risks posed by a proposed decade-long federal ban on state-level AI regulation, recently passed by the House and embedded within a larger budget reconciliation bill.

The moratorium, if enacted, would bar state enforcement of nearly all AI-related laws, preempting protections in critical areas such as employment, housing, healthcare, consumer transparency, and algorithmic accountability - even as over 1,000 state AI bills have emerged in 2025 alone.

A bipartisan coalition of 40 state attorneys general, with support from major organizations and labor unions, argues this would undermine vital civil rights, children's privacy, and anti-fraud safeguards developed after thorough stakeholder engagement.

As California Attorney General Rob Bonta states,

“States must be able to protect their residents by responding to emerging and evolving AI technology.”

Meanwhile, industry advocates argue such preemption prevents a costly patchwork of inconsistent rules and allows the federal government to harmonize AI oversight nationwide.

Despite these claims, organizations like EPIC and the Center for Democracy & Technology highlight the urgent need for local innovation and protections in the absence of comprehensive federal legislation.

As this debate continues, the core concern remains whether innovation and safety can be balanced if state policymakers are sidelined for the next decade. For more detailed perspectives, read the analysis of AI Regulation Ban Opposition by State Attorneys General, the law and economics case for federal preemption and AI regulation, and the Center for Democracy & Technology's opposition to federal preemption of state AI laws.

Looking Forward: Marysville and the Ongoing AI Policy Conversation


As Marysville and communities across Washington continue to drive national innovation in AI - ranking fifth nationwide in startup activity and hosting over 480 AI startups - a new federal proposal threatens to fundamentally reshape local oversight for a decade.

The “One Big Beautiful Bill” moving through Congress includes a 10-year moratorium on state and local civil AI regulations, which would preempt laws protecting civil rights, consumer privacy, protections against deepfakes, and children's online safety, while carving out only narrow exceptions for criminal statutes.

As explained in a recent policy analysis, this sweeping ban would make state-level enforcement of key technology laws nearly impossible:

“For 10 years, no state law regulating any meaningful use of a computer involved in interstate commerce may be enforced unless certain narrow exceptions apply.”

Concerns are mounting among Washington policymakers and a bipartisan coalition of 40 state attorneys general, who warn that this approach risks “unfettered abuse” by large tech firms, especially in areas with no federal privacy law or effective federal AI regulation, as detailed by Truthout's coverage of the federal AI policy debate.

Despite strong local and national opposition, some in Congress support the moratorium, arguing it would prevent a confusing patchwork of state laws and reduce regulatory burdens for startups.

Meanwhile, Washington's AI innovation and investment climate remain robust, with $4.5 billion in funding since 2013 and a strong talent pipeline from the University of Washington.

Below is a summary of Washington's AI startup landscape highlighting sectors and investments:

Industry | Investment | Description
Enterprise SaaS | $906M | AI productivity & automation tools
Life Sciences & Healthcare | $1.36B | AI-driven diagnostics, therapeutics
ICT | $1.3B | Data management, cybersecurity

For Washingtonians and Marysville's tech community, the next months will be pivotal as the Senate deliberates whether local control or a top-down approach will guide the future of AI policy.

For those looking to upskill or lead in the evolving landscape, local training options like the Solo AI Tech Entrepreneur bootcamp by Nucamp can help entrepreneurs and professionals navigate - and shape - the rapidly changing regulatory and innovation environment.

Frequently Asked Questions


What is the proposed 10-year moratorium on state-level AI regulation, and why is it controversial in Washington?

The U.S. House recently passed a bill containing a 10-year moratorium that would preempt state and local governments from enacting or enforcing AI regulations, aiming to avoid a patchwork of differing state laws and promote unified national innovation. This measure faces strong bipartisan opposition in Washington, with leaders like Attorney General Nick Brown arguing that limiting state protections threatens consumer safety, civil rights, and the ability to address issues like deepfakes, discrimination, and data privacy.

How is Washington State responding to federal efforts to limit state AI laws?

Washington is proactively advocating for state authority to regulate AI, with bipartisan coalitions and Attorney General Nick Brown vocally opposing the federal moratorium. The state has formed an AI Task Force, enacted laws against AI-driven election misinformation and deepfakes, and joined multi-state initiatives to safeguard citizens from emerging AI-related harms.

What are tech industry leaders and companies advocating for regarding AI regulation in the U.S.?

Tech giants like Microsoft, OpenAI, and Google are pushing for a unified, federal framework for AI regulation, emphasizing that a proliferation of divergent state laws could hinder innovation, increase compliance complexity, and slow American global leadership in AI. They support a federal preemption approach to create consistent national standards.

What recent bipartisan actions has Congress taken regarding AI-generated deepfakes and non-consensual explicit imagery?

Congress has enacted the bipartisan Take It Down Act, signed by President Trump, which makes it a federal crime to share non-consensual intimate images - including AI-generated deepfakes - mandates prompt removal by online platforms, and imposes criminal penalties. The law is narrowly defined to target explicit content and aims to balance privacy and free expression concerns.

How is Marysville, WA, and the broader Washington tech community positioned amid national AI regulatory debates?

Marysville and Washington's tech sector are national leaders in AI innovation, with over 480 AI startups and $4.5 billion in sector funding. As Congress debates AI policy, local stakeholders remain highly engaged, with educational initiatives and startup growth positioning the region as a front-runner in shaping responsible AI policy - despite potential new federal restrictions.


Ludo Fourrage

Founder and CEO

Ludovic (Ludo) Fourrage is an education industry veteran, named a Learning Technology Leader by Training Magazine in 2017. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space. As Senior Director of Digital Learning there, he led the development of a first-of-its-kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful corporate MOOC programs in partnership with top business schools and consulting organizations, including INSEAD, Wharton, London Business School, and Accenture. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.