Anthropic Swears No Killswitch for Pentagon's Claude
Anthropic executives insist in court that the company cannot sabotage its own AI in wartime. The Pentagon is pulling the plug anyway: overkill or caution?
⚡ Key Takeaways
- Anthropic denies any ability to tamper with deployed Claude models in military use.
- Pentagon's supply-chain ban halts DoD Claude access, sparking lawsuits.
- Feud echoes past tech-gov tensions, potentially boosting open-source military AI.
Originally reported by Wired - AI