March 28, 2026 · 52 min

Anthropic, the Pentagon, and the Future of Autonomous Weapons

Orality Model: 82%

Oral Indicators

Agonistic: 31%
huge, literally, obviously
Engagement: 51%
you'll, you, your
Memory Aids: 100%
see, right, now
Repetition: 100%
like (113x), think (92x), about (80x)
Parallelism: 86%
But by embedding AI across HR,..., And I'm Tracy Alloway...., So, Tracy, we're recording thi...
Sound Patterns: 61%
67 question(s), alliteration: "tend to", alliteration: "trying to"
Formulaic Phrases: 4%
i mean, if you will

Literate Indicators

Hedging: 10%
could, may, maybe
Passive Voice: 16%
is designed, be simplified, was used
Abstract Nouns: 16%
business, information, payment
Subordination: 7%
since, because, provided
Sentence Length: 39%
Avg: 14.9 words/sentence
Word Complexity: 50%
business, overly, complicated
Academic Markers: 3%
according to
Impersonal Style: 49%
559 personal pronouns found
Descriptive Style: 100%
overly, automatically, actually

Description

The last big story right before the war in Iran started was the collapse in the relationship between the Pentagon and Anthropic, with the latter objecting to any potential use of its models in either fully autonomous weapons or domestic surveillance. Of course, this story immediately became more relevant with the start of the war, and the reporting that Anthropic's technology was in fact utilized at the start of hostilities. But what does that mean? How are these models used? And what would a fully autonomous weapons system actually entail? On this episode, we speak with Paul Scharre, the executive vice president and director of studies at the Center for a New American Security. He has written two books on the subject of AI in warfare, and previously worked inside the Department of Defense on some of these very questions. We discuss the future of autonomous weaponry, and the various ethical and technological dimensions such weapons would entail.

Subscribe to the Odd Lots Newsletter
Join the conversation: discord.gg/oddlots
See omnystudio.com/listener for privacy information.