PDFs and the real numbers

Here’s a bird’s-eye view of what’s going on, why it trips up a lot of people, and how you could say it in two lines:

  1. A more succinct statement

“You can’t pick a real number uniformly at random from all of ℝ: if every point got the same probability p, any p > 0 would make the total probability infinite, so p would have to be 0, and yet you still pick exactly one number. That ‘paradox’ dissolves once you use probability densities (for intervals) instead of point-probabilities.”

  2. What’s wrong with the original write-up
    1. Misusing the discrete uniform limit. Writing P(X = x) = lim_{n→∞} 1/n = 0 tacitly treats ℝ like a finite set of n equally likely points, then “lets n → ∞.” But that limit does not define a bona fide probability distribution on an infinite or unbounded set (countable additivity breaks down, the total mass goes to 0, etc.).
    2. Confusing “probability 0” with “impossible.” In any continuous distribution on ℝ, every single point has P(X = x) = 0, yet one of them still occurs. That’s not a logical contradiction; it’s just how Lebesgue-style measures work. Only intervals carry nonzero probability, via an integral of the density. (A small numeric sketch of both points follows this list.)
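A small numeric sketch of both points, in Python (the sample, interval width, and values of n below are arbitrary illustrative choices, not from the original write-up):

```python
import numpy as np

# (a) "Uniform over n equally likely points": each point gets mass 1/n,
#     which shrinks to 0 as n grows; the limit assigns every point
#     probability 0 and no longer sums to 1 over an infinite set.
for n in (10, 1_000, 1_000_000):
    print(n, 1 / n)

# (b) In a genuinely continuous distribution, every single point has
#     probability 0, yet a draw still lands on exactly one number.
x = np.random.uniform(0.0, 1.0)        # uniform density f(t) = 1 on [0, 1]
print("drew", x)                       # a specific real number, P(X == x) = 0

# An interval around x, however, has positive probability
# (the integral of the density over that interval):
lo, hi = max(x - 0.01, 0.0), min(x + 0.01, 1.0)
print("P(X in [lo, hi]) =", hi - lo)   # length of the interval, since f = 1 there
```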

  3. What they were really grappling with

They were noticing that:

  • ∞ sample space + “uniform” weighting ⇒ every point gets probability 0.
  • Yet randomness demands you pick some point.

So they suspected a clash between “infinite” and “random.” Really, the clash is with trying to force a uniform probability mass function onto an infinite set. In standard probability you instead use a probability density (a function you integrate over intervals) and accept that individual points get zero mass.
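A minimal sketch of that density view (the interval endpoints and the SciPy quadrature call are illustrative choices, not anything from the note):

```python
from scipy.integrate import quad

a, b = -5.0, 5.0                     # "uniform on [a, b]"
f = lambda t: 1.0 / (b - a)          # constant density; integrates to 1 over [a, b]

# Probability of an interval = integral of the density over that interval.
p_interval, _ = quad(f, 0.0, 2.0)    # P(0 <= X <= 2) = 2 / 10 = 0.2
p_point, _ = quad(f, 1.0, 1.0)       # a single point is a degenerate interval: mass 0
print(p_interval, p_point)
```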

  4. What to study next for coherence
    1. Measure-theoretic foundations of probability
       – σ-algebras, countable additivity vs. uncountable “sums.”
       – Why Lebesgue (length) measure on ℝ makes every point measure 0 but assigns measure to intervals.
    2. Continuous distributions & densities
       – How you describe “uniform on [a, b]” via a constant density f(x) = 1/(b − a).
       – Why you cannot extend that to (−∞, ∞) without losing normalizability (spelled out in the block below).
    3. The notion of “almost surely”
       – Events of probability 0 can nonetheless occur; we only say “with probability 1, X lies in some set of full measure.”
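For sub-item 2, the calculation behind “losing normalizability” is short; here it is written out (a standard computation, added only for reference):

```latex
% On a bounded interval the constant density integrates to 1:
\int_{a}^{b} \frac{1}{b-a}\,dx = 1.
% On all of \mathbb{R}, no constant density can be normalized:
\int_{-\infty}^{\infty} c\,dx =
\begin{cases}
  \infty & \text{if } c > 0,\\
  0      & \text{if } c = 0,
\end{cases}
\qquad\text{so no constant } c \text{ gives total mass } 1.
```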

Once you see that probability = measure of sets (not sums of point-masses), the apparent paradox evaporates.
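In symbols (just restating that last sentence; f here denotes a generic density, introduced only for notation):

```latex
P(X \in A) = \int_{A} f(x)\,dx,
\qquad\text{e.g. for the uniform density on } [a,b]:\quad
P(c \le X \le d) = \frac{d-c}{b-a} \quad (a \le c \le d \le b).
```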
