US AI Policy Pivots Sharply From 'Safety' To 'Security'
President Donald Trump rescinded former President Joe Biden’s AI Executive Order on day one of his term (disclosure: I served as senior counselor for AI at the Department of Homeland Security during the Biden administration). Vice President JD Vance then opened the Paris AI Action Summit, a convening originally launched to advance the field of AI safety, by firmly stating that he was not there to discuss AI safety and would instead address “AI opportunity.” Vance went on to say that the U.S. would “safeguard American AI” and stop adversaries from attaining AI capabilities that “threaten all of our people.”
Without more context, these sound like meaningless buzzwords — what’s the difference between AI safety and AI security, and what does this shift mean for the consumers and businesses that continue to adopt AI?
Simply put, AI safety is primarily focused on developing AI systems that behave ethically and reliably, especially when they are used in high-stakes contexts like hiring or healthcare. To help prevent AI systems from causing harm, AI safety legislation typically includes risk assessments, testing protocols and requirements for human oversight.
AI security, by contrast, does not fixate on developing ethical and safe AI. Rather, it assumes that America’s adversaries will inevitably use AI in malicious ways, and it seeks to defend U.S. assets from intentional threats, like AI being exploited by rival nations to target U.S. critical infrastructure. These are not hypothetical risks — U.S. intelligence agencies continue to track growing offensive cyber operations from China, Russia and North Korea. To counter these types of deliberate attacks, organizations need a strong baseline of cybersecurity practices that also account for the threats presented by AI.
Both of these fields are important and interconnected — so why does it seem like one has eclipsed the other in recent months? My guess is that prioritizing AI security aligns more naturally with today’s foreign policy climate, in which the worldviews most in vogue are realist depictions of ruthless competition among nations for geopolitical and economic advantage. A security-first posture aims to protect America from its adversaries while maintaining America’s global dominance in AI. AI safety, on the other hand, can be a lightning rod for political debates about free speech and unfair bias. The question of whether a given AI system will cause actual harm is also context dependent, as the same system deployed in different environments could produce vastly different outcomes.
Faced with so much uncertainty, and with political disagreement about what truly constitutes harm to the public, legislators have struggled to justify passing safety legislation that could hamper America’s competitive edge. News that DeepSeek, a Chinese AI company, had achieved performance competitive with U.S. AI models at substantially lower cost only reinforced this reluctance, stoking widespread fear about the steadily narrowing gap between U.S. and Chinese AI capabilities.
What happens now, when the specter of federal safety legislation no longer looms on the horizon? Public comments from OpenAI, Anthropic and others on the Trump administration’s forthcoming “AI Action Plan” provide an interesting picture of how AI priorities have shifted. For one, “safety” hardly appears in the submissions from industry, and where safety issues are mentioned, they are reframed as national security risks that could disadvantage the U.S. in its race to out-compete China. In general, these submissions lay out a series of innovation-friendly policies, from balanced copyright rules for AI training to export controls on semiconductors and other valuable AI components (e.g., model weights).
Beyond trying to meet the spirit of the Trump administration’s initial messaging on AI, these submissions also seem to reveal what companies believe the role of the U.S. government should be when it comes to AI: funding infrastructure critical to further AI development, protecting American IP, and regulating AI only to the extent that it threatens our national security. To me, this is less of a strategy shift on the part of AI companies than it is a communications shift. If anything, these comments from industry seem more mission-aligned than their previous calls for strong and comprehensive data legislation.
Even then, not everyone in the industry supports a no-holds-barred approach to U.S. AI dominance. In their paper, “Superintelligence Strategy,” three prominent AI voices, Eric Schmidt, Dan Hendrycks and Alexandr Wang, advise caution when it comes to pursuing a Manhattan Project-style push to develop superintelligent AI. The authors instead propose “Mutual Assured AI Malfunction,” or MAIM, a defensive strategy reminiscent of Cold War-era deterrence that would forcefully counter any state-led effort to achieve an AI monopoly.
If the United States were to pursue this strategy, it would need to disable threatening AI projects, restrict access to advanced AI chips and open-weight models, and strengthen domestic chip manufacturing. Doing so, according to the authors, would enable the U.S. and other countries to peacefully advance AI innovation while lowering the overall risk of rogue actors using AI to create widespread damage.
It will be interesting to see whether these proposals gain traction in the coming months as the Trump administration forms a more detailed position on AI. We should expect to see more such proposals — specifically, those that persistently focus on the geopolitical risks and opportunities of AI, only suggesting legislation to the extent that it helps prevent large-scale catastrophes, such as the creation of biological weapons or foreign attacks on critical U.S. assets.
Unfortunately, safety issues don’t disappear when you stop paying attention to them or rename a safety institute. While strengthening our security posture may help to boost our competitive edge and counter foreign attacks, it’s the safety interventions that help prevent harm to individuals or society at scale.
The reality is that AI safety and security work hand-in-hand — AI safety interventions don’t work if the systems themselves can be hacked; by the same token, securing AI systems against external threats becomes meaningless if those systems are inherently unsafe and prone to causing harm. Cambridge Analytica offers a useful illustration of this relationship: the incident revealed how Facebook’s inadequate safety protocols around data access exacerbated security vulnerabilities that were then exploited for political manipulation. Today’s AI systems face similarly interconnected challenges. When safety guardrails are dismantled, security risks inevitably follow.
For now, AI safety is in the hands of state legislatures and corporate trust and safety teams. The companies building AI know — perhaps better than anyone else — what the stakes are. A single breach of trust, whether it’s data theft or an accident, can be destructive to their brand. I predict that they will therefore continue to invest in sensible AI safety practices, but discreetly and without fanfare. Emerging initiatives like ROOST, which enables companies to collaboratively build open safety tools, may be a good preview of what’s to come: a quietly burgeoning AI safety movement, supported by the experts, labs and institutions that have pioneered this field over the past decade.
Hopefully, that will be enough.