AI Regulation

Trump Backs AI 'Kill Switch' Amid Cyber Threats

An AI model recently compressed a corporate network attack, estimated at 20 hours of human work, into minutes. Now, Donald Trump is calling for safeguards, specifically an AI 'kill switch.'


Here’s the thing: 32 steps. That’s how many it takes to simulate a corporate network attack, from initial reconnaissance all the way to full takeover, an exercise estimated at 20 hours of human work. The AI model Claude Mythos apparently did it in minutes. And it’s not just theoretical fluff. The demo was real enough to prompt even former President Trump to say, during a Fox Business interview, that yes, there should be safeguards, maybe even a ‘kill switch’ for AI.

Big deal? Maybe. Or maybe it’s just another day in the increasingly absurd AI arms race. We’re talking about AI that can apparently find zero-day vulnerabilities across every major operating system and web browser, flaws that have evaded human eyes for decades. The UK AI Security Institute, which ran these tests, called it a “step up” from previous models. A step up. As if AI hasn’t been stepping up at an alarming rate for the last five years, leaving the rest of us scrambling to figure out what it actually does and who’s making a buck off it.

Anthony Aguirre, President and CEO of the Future of Life Institute (FLI), chimed in, essentially saying Trump’s right. He’s calling for a “strong off-switch” at the hardware level, not just software patches that AI could likely bypass. He points to FLI’s own prototyping of these capabilities. “The same sort of hardware security measures that keep your face private and allow you to remotely shut down your iPhone exist on AI chips,” Aguirre stated, adding that NVIDIA developers could certainly implement these at scale.

The PR speak is thick here, obviously. FLI, a think tank that’s been around since 2014, positions itself as the sober voice amidst the AI frenzy. And here’s the rub: Who’s actually building these systems? Who stands to gain if we suddenly need a bunch of fancy, government-mandated hardware kill switches embedded in every AI chip? My money’s on the companies that will eventually sell those kill switches, or the companies that can afford to integrate them into their already sky-high development costs, further widening the gap between the incumbents and everyone else.

Is an AI ‘Kill Switch’ Even Possible?

Aguirre seems to think so, and he’s got FLI’s own prototypes as proof. His argument hinges on the idea that current hardware security features, the ones that let you remote-wipe your phone or secure your biometrics, can be adapted. This isn’t some far-off sci-fi concept to him; it’s a practical, albeit complex, engineering problem. He believes it’s entirely feasible and that the tech giants, particularly NVIDIA in this case, have the capability to implement these at scale. The implication is that relying solely on software safeguards for AI is akin to locking your doors with a piece of string.
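To make the hardware-level idea concrete, here is a minimal sketch of one way such an off-switch could work: the chip only keeps computing while it holds a fresh, cryptographically signed permit from an external authority, so halting permit issuance halts the chip. This is an illustration of the general concept, not FLI's or NVIDIA's actual design; all names and the use of a shared HMAC key are assumptions made for simplicity.

```python
import hmac
import hashlib
import time

# Illustrative only: a shared key standing in for keys provisioned into
# the chip at manufacture. Real designs would use asymmetric signatures
# and tamper-resistant hardware, not a plain shared secret.
SECRET_KEY = b"authority-provisioned-key"
PERMIT_LIFETIME = 60  # seconds a signed permit stays valid

def issue_permit(issued_at: float) -> bytes:
    """Signing authority: sign the issue timestamp with the key."""
    msg = str(int(issued_at)).encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()

def chip_allows_compute(issued_at: float, signature: bytes, now: float) -> bool:
    """On-chip check: the permit must verify AND still be within its lifetime."""
    msg = str(int(issued_at)).encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # forged or corrupted permit: refuse to run
    return (now - issued_at) <= PERMIT_LIFETIME  # stale permit: halt

# If the authority simply stops issuing permits, the chip shuts itself
# down once the last permit expires -- no software cooperation needed.
t0 = time.time()
permit = issue_permit(t0)
print(chip_allows_compute(t0, permit, t0 + 10))   # fresh permit
print(chip_allows_compute(t0, permit, t0 + 120))  # expired permit
```

The design choice worth noting is the expiring permit: software running on the machine never has to be trusted to obey a shutdown command, because continued operation requires active, ongoing authorization rather than the absence of a stop signal.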

As Aguirre put it: “Models like Mythos are nearly superhuman in their ability to find and exploit vulnerabilities in critical systems. If we go forward building highly superhuman AI – which FLI believes we should not – for any hope of control we cannot rely on software-based safety measures alone.”

This isn’t just about some future existential threat. The Mythos demo showcased vulnerabilities in systems we rely on now—our economy, our financial infrastructure. Imagine that attack, scaled up, against a national power grid or a major stock exchange. The “step up” the UK report mentioned could easily translate into a catastrophic leap for global stability if unchecked. It’s the kind of thing that makes even a politician known for unconventional approaches — Trump, in this instance — nod along and call for a button that makes it all stop.

Who Benefits from the AI Kill Switch Debate?

Let’s not be naive. The AI industry is a gold rush, and every stakeholder wants a piece. When you hear calls for regulation or safeguards, especially from organizations like FLI, it’s worth asking: Who profits? For FLI, the validation of their long-held concerns about AI risks is a win. For politicians, it’s a chance to appear responsible and forward-thinking. But for the actual tech companies, particularly those building the most advanced models and the underlying hardware, this debate is a double-edged sword.

On one hand, calls for a ‘kill switch’ could lead to costly mandates and development slowdowns. On the other, it could also spur a new market for AI safety hardware and consulting services. Think about it: if FLI can prototype these capabilities, you bet the big players are already years ahead, figuring out how to incorporate them — and how to charge extra for them. It’s the classic tech playbook: identify a problem (often one you helped create), then sell the solution.

The truth is, we’re in uncharted territory. The speed at which AI capabilities are advancing, especially in areas like cybersecurity, is frankly terrifying. While Trump’s endorsement of a ‘kill switch’ might be politically convenient, the underlying concerns about control and safety are legitimate. Whether hardware-level safeguards are the silver bullet remains to be seen. But one thing’s for sure: the conversations about AI safety are moving from niche academic circles to the very highest levels of global politics, and that alone is a significant development, for better or worse.


Frequently Asked Questions

What did Claude Mythos do? Claude Mythos demonstrated a 32-step corporate network attack simulation, which would take humans an estimated 20 hours, in a matter of minutes. It also autonomously discovered zero-day vulnerabilities across major operating systems and web browsers.

Is an AI kill switch feasible? According to FLI President and CEO Anthony Aguirre, yes. He argues that existing hardware security measures on AI chips can be adapted to contain AI models and discontinue their operation if needed, similar to how smartphones can be remotely shut down.

What is the Future of Life Institute (FLI)? FLI is an AI think tank founded in 2014 that focuses on steering the development of transformative technologies towards benefiting life and mitigating extreme large-scale risks.

Written by
Legal AI Beat Editorial Team

Curated insights, explainers, and analysis from the editorial team.



Originally reported by Future of Life Institute
