Most of the AI headlines today are about LLMs, short for Large Language Models.
A quick definition before we go further: “tokens” are little pieces of text, often whole words, sometimes parts of words or punctuation.
An AI language model is a token predictor. It looks at huge amounts of text and learns which pieces of text usually follow others. That’s it. There’s no built-in check for truth. It doesn’t know fact from fiction—it just knows what’s statistically common.
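To make “token predictor” concrete, here is a toy sketch in Python. The contexts, vocabulary, and probabilities are invented for illustration; a real LLM learns billions of such statistics over a huge vocabulary, but the mechanic is the same: given the text so far, pick the next token according to how likely it is.

```python
import random

# Toy "model": for a given context, the learned probability of each candidate
# next token. These numbers are made up for illustration.
NEXT_TOKEN_PROBS = {
    "The capital of France is": {" Paris": 0.92, " Lyon": 0.05, " nice": 0.03},
    "Napoleon was famously":    {" short": 0.70, " of average height": 0.05, " ambitious": 0.25},
}

def predict_next_token(context: str) -> str:
    """Sample the next token in proportion to its learned probability."""
    probs = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(predict_next_token("The capital of France is"))  # almost always " Paris"
print(predict_next_token("Napoleon was famously"))     # usually " short" -- the popular myth
```

Notice the second example: “Napoleon was short” is a myth the Internet repeats endlessly, so it is the statistically common continuation. Nothing in that sampling step ever asks whether it is true.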
Some people assume you can “train AI for truth.” But the text of the Internet doesn’t come with truth labels. Even fine-tuning with curated, correct examples only adjusts probabilities; it doesn’t change the fact that the model is, at its core, a probability machine.
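For a sense of what “only adjusts probabilities” means, here is a continuation of the toy sketch above. Suppose, hypothetically, that fine-tuning on curated corrections shifts the distribution for the Napoleon prompt; the model still samples from probabilities, so the myth still comes out some of the time.

```python
import random

# Hypothetical before/after distributions for "Napoleon was famously":
# fine-tuning nudges weight toward the curated answer, but the model still
# samples -- it has not acquired a notion of "true". Numbers are invented.
before = {" short": 0.70, " of average height": 0.05, " ambitious": 0.25}
after  = {" short": 0.25, " of average height": 0.60, " ambitious": 0.15}

def sample(probs: dict) -> str:
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Even after fine-tuning, roughly a quarter of samples still produce the myth.
myth_rate = sum(sample(after) == " short" for _ in range(10_000)) / 10_000
print(round(myth_rate, 2))  # about 0.25
```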
Because it’s trained on mountains of text, it’s great at mimicking confident, fluent language. And we humans are wired to trust smooth, confident answers. But confidence ≠ correctness.
True answers are often surprising, rare, or novel—the very kinds of things a probability-based system won’t produce by default. In other words, correctness often hides in the statistical outliers.
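Here is the flip side in the same toy setup: with greedy decoding (always take the single most probable token), the rare-but-accurate completion is never produced at all. Again, the numbers are invented for illustration.

```python
# Greedy decoding over the pre-fine-tuning distribution: the most common
# continuation wins every time, so the outlier never surfaces.
probs = {" short": 0.70, " of average height": 0.05, " ambitious": 0.25}

greedy_choice = max(probs, key=probs.get)
print(greedy_choice)  # " short" -- the likely completion, not the accurate one
```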
AI is fantastic for brainstorming, drafting, or generating things you’ll check yourself. But for facts that matter—medical, legal, financial, historical, or technical—you must verify everything.
- Check primary sources.
- Cross-verify with independent references.
- If expertise is required, ask an expert—there’s no substitute.
LLM-based AI is, by its very nature, limited to producing likely text, not true answers. That makes it impressive for writing, but dangerous when you need a correct answer. Always fact-check. Always verify. Never mistake convincing language for truth.
...and laugh in the face of anyone who verifies the output of one LLM with another LLM.