⚙️ AI Hardware

Tokens: Silicon Valley's Sneaky Cash Cow in Every AI Chat

What if the 'atoms' of AI aren't advancing language understanding, but just padding your invoice? Tokens rule LLMs, but who really profits from these invisible slices?

Image: flowchart showing text breaking into tokens via BPE in LLMs.

⚡ Key Takeaways

  • Tokens are subword chunks produced by byte-pair encoding (BPE), roughly ¾ of a word each in English; every step of LLM processing is measured in them.
  • Different models use different tokenizers, so the same text can yield different token counts and costs; always check with the provider's own tokenizer.
  • Tokenization echoes 1990s compression techniques, but today it underpins AI firms' per-token pricing power.
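The BPE idea behind these subword chunks can be sketched in a few lines of standard-library Python. This is a toy illustration, not any production tokenizer: real tokenizers (e.g. those used by GPT-family models) operate on bytes and learn tens of thousands of merges from large corpora, but the core loop is the same — repeatedly merge the most frequent adjacent pair of symbols.

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs and return the most common one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with one merged symbol."""
    merged, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

# Start from individual characters and apply a few merges.
tokens = list("low lower lowest")
for _ in range(3):
    tokens = merge_pair(tokens, most_frequent_pair(tokens))
print(tokens)
```

After a few merges, frequent fragments like "low" collapse into single tokens while rarer suffixes stay split — which is why common English words cost about one token while unusual strings cost several.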


Written by Sarah Chen

AI research editor covering LLMs, benchmarks, and the race between frontier labs. Previously at MIT CSAIL.


Originally reported by Towards AI
