@navicore
Last active September 7, 2025 15:29
LLMs and Why Not to Trust Them

Convincing Isn’t Correct: Why You Must Fact-Check AI

Most of the AI headlines today are about LLMs, short for Large Language Models.
When we say “tokens” in this context, think of them as little pieces of text — often whole words, sometimes parts of words or punctuation.
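
To make that concrete, here’s a toy tokenizer sketch. The vocabulary and the exact splits below are made up for illustration; real models learn their own vocabularies of tens of thousands of pieces, but the effect is the same: text becomes a sequence of small, reusable chunks.

```python
# A toy tokenizer: greedily match the longest known piece at each position.
# The vocabulary here is invented for illustration; real LLM tokenizers
# learn theirs (byte-pair encodings) from huge amounts of text.
VOCAB = {"un", "happi", "ness", "the", "cat", "sat", " ", "!", ","}

def tokenize(text: str) -> list[str]:
    tokens, i = [], 0
    while i < len(text):
        # Try the longest match first; fall back to a single character.
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("unhappiness"))   # ['un', 'happi', 'ness']
print(tokenize("the cat sat!"))  # ['the', ' ', 'cat', ' ', 'sat', '!']
```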

What AI really does

An AI language model is a token predictor. It looks at huge amounts of text and learns which pieces of text usually follow others. That’s it. There’s no built-in check for truth. It doesn’t know fact from fiction—it just knows what’s statistically common.
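
Here’s a minimal sketch of that idea: a toy bigram model over a made-up corpus. It counts which word follows which and samples accordingly. Real LLMs use neural networks over far longer contexts, but the training objective is the same flavor: predict the next token.

```python
import random
from collections import Counter, defaultdict

# A tiny, made-up "training corpus". Note that it contains a falsehood;
# the model has no way to know or care.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
    "the sky is blue ."
).split()

# Count which token follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    out = [start]
    for _ in range(length):
        counts = follows[out[-1]]
        if not counts:
            break
        # Sample in proportion to how often each continuation was seen.
        tokens, weights = zip(*counts.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
# Output varies per run, but "the moon is made of cheese ." is the most
# likely completion, because "cheese" was written more often than "rock".
# Likelihood, not truth, drives the prediction.
```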

Why training can’t fix this

Some people assume you can “train AI for truth.” But the text of the Internet doesn’t come with truth labels. Even fine-tuning on curated, correct examples only adjusts the probabilities; it doesn’t change the fact that the model is, at its core, a probability machine.
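
Continuing the toy model above, here’s what “only adjusts probabilities” looks like in miniature (the counts are invented for illustration):

```python
from collections import Counter

# Continuation counts for the context "the moon is made of ...",
# before and after adding curated correct examples ("fine-tuning").
before = Counter({"cheese": 8, "rock": 2})
after = before + Counter({"rock": 20})  # curated data nudges the counts

def probs(counts: Counter) -> dict:
    total = sum(counts.values())
    return {tok: round(n / total, 2) for tok, n in counts.items()}

print(probs(before))  # {'cheese': 0.8, 'rock': 0.2}
print(probs(after))   # {'cheese': 0.27, 'rock': 0.73}
# Fine-tuning made "rock" more likely, but "cheese" never drops to zero.
# The model is still sampling from a distribution, not checking facts.
```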

Why it sounds so convincing

Because it’s trained on mountains of text, it’s great at mimicking confident, fluent language. And we humans are wired to trust smooth, confident answers. But confidence ≠ correctness.

Why real truth is often missing

True answers are often surprising, rare, or novel—the very kinds of things a probability-based system won’t produce by default. In other words, correctness often hides in the statistical outliers.
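
A made-up numeric sketch of that point: if the learned distribution puts most of its weight on a popular misconception, then picking the likeliest continuation will never surface the rare, correct answer, and sampling will surface it only occasionally.

```python
# Hypothetical probabilities for the continuations of some factual prompt.
# The numbers are invented to illustrate the shape of the problem.
distribution = {
    "popular misconception": 0.90,
    "correct but rarely written answer": 0.05,
    "something else": 0.05,
}

# Greedy decoding ("pick the most likely token") always returns the mode.
print(max(distribution, key=distribution.get))  # popular misconception

# Sampling instead of greedy decoding surfaces the correct answer only
# about 1 time in 20: the truth is sitting out in the statistical tail.
print(f"{distribution['correct but rarely written answer']:.0%} chance per sample")  # 5% chance per sample
```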

What this means for you

AI is fantastic for brainstorming, drafting, or generating things you’ll check yourself. But for facts that matter—medical, legal, financial, historical, or technical—you must verify everything.

  • Check primary sources.
  • Cross-verify with independent references.
  • If expertise is required, ask an expert—there’s no substitute.

Bottom line

LLM-based AI is constrained by its nature to producing likely text, not true answers. That makes it impressive for writing — but dangerous if you need a correct answer. Always fact-check. Always verify. Never mistake convincing language for truth.

navicore commented Sep 7, 2025

...and laugh in the face of anyone that verifies the output of one LLM with another LLM.
