@bmorphism
Last active December 31, 2024 17:41
| Feature | Balanced Ternary Floating-Point | IEEE-754 Binary Floating-Point |
| --- | --- | --- |
| Base | 3 | 2 |
| Digits | {-1, 0, +1} (often denoted T, 0, P) | {0, 1} |
| Sign representation | Implicitly handled by the digits themselves | Explicit sign bit (0 for positive, 1 for negative) |
| Exponent representation | Typically balanced ternary; no bias needed | Biased representation (stored exponent = value + bias) |
| Mantissa normalization | Leading non-zero digit (+1 or -1) next to the radix point | Leading '1' (implicit or explicit, depending on the format) |
| Representation of zero | Naturally all zeros (0.0 × 3^any_exponent) | Requires special handling (all exponent and mantissa bits zero, with a sign bit giving both +0 and -0) |
| Special values | Fewer special cases needed, since sign is inherent | Special bit patterns for positive/negative infinity and NaN (Not a Number) |
| Symmetry | Inherently symmetric around zero | Asymmetric due to the sign bit |
| Normalization goal | Consistent placement of the first non-zero digit | Ensuring a leading '1' for maximum precision |
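
The "implicit sign" and "symmetry" rows above can be illustrated with a small sketch. The digit-list convention and the `bt_value` helper below are hypothetical choices for this example, not something defined in the gist:

```python
from fractions import Fraction

def bt_value(digits, exponent=0):
    # digits[0] sits immediately left of the radix point, matching the
    # normalization convention above (leading non-zero digit is +1 or -1)
    mantissa = sum(Fraction(d, 3**i) for i, d in enumerate(digits))
    return mantissa * Fraction(3) ** exponent

x = bt_value([1, -1], exponent=1)    # (1 - 1/3) * 3 = 2
y = bt_value([-1, 1], exponent=1)    # negate every digit: -2
```

Negating every digit negates the value, so no separate sign bit is needed, and positive and negative ranges are exactly symmetric.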

Why IEEE 754 Doesn't Guarantee Uniqueness for Binary Floating Point Numbers:

  • Denormalized Numbers (Subnormals): These numbers, used to represent values very close to zero, break the standard normalization rule: their exponent field is all zeros and there is no implicit leading '1' bit. Each subnormal value still has a unique bit pattern within a given binary format, but the leading-'1' normalization invariant no longer holds uniformly across the format.
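
A quick way to see the all-zeros exponent field of subnormals is to unpack a double's bit fields directly (the `fields` helper is just for this illustration):

```python
import struct
import sys

def fields(x: float):
    # Decode a float64 into (sign, biased exponent, mantissa) bit fields
    b = struct.unpack('<Q', struct.pack('<d', x))[0]
    return (b >> 63, (b >> 52) & 0x7FF, b & ((1 << 52) - 1))

tiny = 5e-324               # smallest positive subnormal double
norm = sys.float_info.min   # smallest positive normal double (2**-1022)

print(fields(tiny))  # exponent field is 0: no implicit leading '1'
print(fields(norm))  # exponent field is 1: normalized, implicit leading '1'
```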

  • Multiple Ways to Represent Zero: IEEE 754 has both positive zero (+0) and negative zero (-0). While they compare as equal, they have different bit patterns.
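
The two zeros can be observed directly: they compare equal but occupy different bit patterns. A minimal sketch using the standard `struct` module:

```python
import struct

def bits(x: float) -> str:
    # Render a float64 as its 64-bit pattern
    return format(struct.unpack('<Q', struct.pack('<d', x))[0], '064b')

assert 0.0 == -0.0            # equal under IEEE 754 comparison
assert bits(0.0) != bits(-0.0)  # but distinct encodings: sign bit differs
```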

  • Normalization Ambiguity (though less common): IEEE 754 requires normalized results for basic operations, but subtle differences in rounding during intermediate calculations could, in theory, lead to slightly different yet equivalent representations in some edge cases; well-designed implementations minimize this.
