Here’s a bird’s-eye view of what’s going on, why it trips up a lot of people, and how you could say it in two lines:
⸻
- A more succinct statement
“You can’t pick a real number uniformly at random from all of ℝ: if every point got the same probability p, then p > 0 would make the total probability infinite, while p = 0 makes it zero, so it can never equal 1. And yet the draw still produces exactly one number. That ‘paradox’ dissolves once you work with probability densities (which assign probability to intervals) instead of point-probabilities.”
⸻
- What’s wrong with the original write-up
- Misusing the discrete uniform limit. Writing P(X=x) = lim_{n→∞} 1/n = 0 tacitly treats ℝ like a finite set of n equally likely points and then “lets n → ∞.” But that limit does not define a bona fide probability distribution on an infinite or unbounded set: countable additivity breaks down, the total mass goes to 0 instead of staying at 1, and so on.
- Confusing “probability 0” with “impossible.” In any continuous distribution on ℝ, every single point has P(X=x)=0 yet one of them still occurs. That’s not a logical contradiction—it’s just how Lebesgue-style measures work. Only intervals carry nonzero probability via an integral of the density.
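To make the “probability 0, yet it happens” point concrete, here is a minimal simulation sketch using only Python’s standard library (the seed, sample size, target point, and interval are arbitrary choices of mine, not from the original write-up): under Uniform(0, 1), a prespecified point is essentially never hit exactly, while an interval is hit at a rate matching its length.

```python
import random

random.seed(0)
N = 100_000
target = 0.5  # any fixed point you name in advance
draws = [random.random() for _ in range(N)]

# Point event: in a continuous model this has probability 0,
# even though every draw does land on *some* point.
hits_point = sum(1 for x in draws if x == target)

# Interval event: probability = length of the interval under Uniform(0, 1).
hits_interval = sum(1 for x in draws if 0.2 <= x <= 0.3)

print(hits_point)         # 0 with overwhelming probability
print(hits_interval / N)  # close to 0.1, the interval's length
```

Every draw “occurs,” yet no fixed point accumulates any empirical mass; only sets with length do.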
⸻
- What they were really grappling with
They were noticing that:
- an infinite sample space + “uniform” weighting ⇒ every point gets probability 0;
- yet randomness demands you pick some point.
So they suspected a clash between “infinite” and “random.” Really, the clash is with trying to force a uniform probability mass function onto an infinite set. In standard probability you instead use a probability density (a function you integrate over intervals) and accept that individual points get zero mass.
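A tiny sketch of “integrate a density over an interval” (the function name and numbers are mine, for illustration): for X ~ Uniform(a, b), the probability of an interval is just its overlap with [a, b] times the constant density 1/(b − a), and a single point, having zero length, gets zero mass.

```python
from fractions import Fraction

def uniform_prob(a, b, c, d):
    """P(c <= X <= d) for X ~ Uniform(a, b): integrate the constant
    density f(x) = 1/(b - a) over the overlap of [c, d] with [a, b].
    Uses Fraction for exact arithmetic, so endpoints are integers here."""
    density = Fraction(1, b - a)
    overlap = max(0, min(b, d) - max(a, c))
    return overlap * density

print(uniform_prob(0, 10, 2, 5))  # 3/10 -- the interval [2, 5] carries mass 0.3
print(uniform_prob(0, 10, 4, 4))  # 0    -- a single point has zero mass
```

The same function answers both questions the original write-up tangled together: intervals get positive probability from the integral; points get exactly zero.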
⸻
- What to study next for coherence
- Measure-theoretic foundations of probability
  - σ-algebras, and countable additivity vs. uncountable “sums.”
  - Why Lebesgue (length) measure on ℝ gives every point measure 0 but assigns measure to intervals.
- Continuous distributions & densities
  - How you describe “uniform on [a,b]” via a constant density f(x) = 1/(b−a).
  - Why you cannot extend that to (−∞, ∞) without losing normalizability.
- The notion of “almost surely”
  - Events of probability 0 can nonetheless occur; we only say “with probability 1, X lies in some set of full measure.”
Once you see that probability = measure of sets (not sums of point-masses), the apparent paradox evaporates.
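The normalizability point above can be seen numerically (a sketch with cutoffs L that I chose arbitrarily): the constant a density must take to integrate to 1 on [−L, L] is squeezed to 0 as L grows, so no constant density can normalize on all of ℝ.

```python
# A constant density c on [-L, L] integrates to c * 2L, so c = 1/(2L)
# is forced.  As L grows toward "all of R", c shrinks to 0 -- and a
# density that is 0 everywhere integrates to 0, never to 1.
for L in [10, 1_000, 100_000_000]:
    c = 1 / (2 * L)
    print(f"L = {L:>11}: required constant density c = {c}")
```

This is exactly the failure mode the limit P(X=x) = lim 1/n = 0 was gesturing at, restated for densities instead of point-masses.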