🤖 Large Language Models

Anthropic's Blunder: 512K Lines of Claude Code Dumped Online

512,000 lines. That's what Anthropic left dangling online from Claude Code, its cash-cow AI coding tool. One config glitch — and poof, the model's innards became public property.

[Image: Screenshot of the leaked Anthropic Claude Code repository, showing 512,000 exposed lines]

⚡ Key Takeaways

  • A config file glitch exposed 512K lines of Claude Code's core prompts and logic.
  • Reveals Anthropic's inner workings, from RLHF tweaks to code-specific tokenizers.
  • Undermines their safety-first brand; expect rivals to exploit and regulators to probe.
Published by

theAIcatchup

AI news that actually matters.


Originally reported by Towards AI
