2026-03-12

Why Floating-Point Still Surprises People

A practical walk through binary fractions, rounding modes, and why 0.1 + 0.2 is not broken.

Every few months, someone rediscovers that 0.1 + 0.2 != 0.3 in many programming languages and declares computers fundamentally cursed. I understand the emotional reaction. Decimal notation feels natural to us, but binary floating-point is built for a different purpose: fast approximate arithmetic with predictable error bounds.
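The headline surprise is easy to reproduce; in Python:

```python
# Neither 0.1 nor 0.2 is stored exactly; each becomes the nearest
# representable double, and their sum rounds to yet another double,
# which is not the double nearest to 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```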

The short version is that numbers like 0.1 cannot be represented exactly in a finite binary expansion, just as 1/3 cannot be represented exactly in finite decimal digits. What you store is the nearest representable value. Then arithmetic happens on those approximations, and the result is rounded again. The machinery is not sloppy; it is systematic.
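To see exactly which nearest value gets stored, Python's decimal and fractions modules can display the stored double without re-rounding; a quick sketch:

```python
from decimal import Decimal
from fractions import Fraction

# Decimal(float) converts the stored bits exactly, exposing the
# true value hiding behind the literal 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The same value as an exact ratio. The denominator is a power of two,
# which is why a finite binary fraction can never equal 1/10 exactly.
print(Fraction(0.1).denominator == 2**55)  # True
```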

The trap is usually not floating-point itself but our assumptions. Equality is often the wrong test. It is more robust to ask whether two values are within an acceptable tolerance, and that tolerance should be related to the scale of the quantities involved.

def nearly_equal(a, b, eps=1e-12):
    # Relative tolerance at large magnitudes, absolute tolerance near zero.
    return abs(a - b) <= eps * max(1.0, abs(a), abs(b))
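With that helper, the opening sum compares as expected (the definition is repeated here so the snippet runs on its own):

```python
def nearly_equal(a, b, eps=1e-12):
    # Relative tolerance at large magnitudes, absolute tolerance near zero.
    return abs(a - b) <= eps * max(1.0, abs(a), abs(b))

print(nearly_equal(0.1 + 0.2, 0.3))   # True
print(nearly_equal(1.0, 1.0 + 1e-6))  # False: well outside the tolerance
```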

Another source of confusion is accumulation. Summing a long list from left to right lets rounding errors pile up: each addition of a small term to a large running total discards low-order bits. Pairwise summation or Kahan (compensated) summation improves numerical stability with very little code.
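As a sketch of the compensated approach, here is a minimal Kahan summation (the name kahan_sum is mine, not from any library):

```python
def kahan_sum(values):
    """Kahan compensated summation: carry the rounding error forward."""
    total = 0.0
    compensation = 0.0  # running estimate of low-order bits lost so far
    for x in values:
        y = x - compensation  # re-inject the previously lost bits
        t = total + y         # big + small: low-order bits of y may be lost here
        compensation = (t - total) - y  # algebraically zero; numerically, the lost bits
        total = t
    return total
```

Summing ten copies of 0.1 left to right comes out one ulp short of 1.0 in double precision; the compensated version stays within an ulp or so of the true sum no matter how long the list gets.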

The bigger lesson is philosophical. Computers do not manipulate “real numbers.” They manipulate carefully engineered finite encodings. Once you accept that, floating-point stops looking mysterious and starts looking elegant.

← Back to archive