
Why Does 0.1 + 0.2 Not Equal 0.3 in Python? Understanding Floating‑Point Precision

Python’s surprising 0.1 + 0.2 = 0.30000000000000004 result stems from the limits of binary floating‑point representation defined by the IEEE 754 standard. This article explains the issue, covers rounding modes, and walks through practical remedies: the decimal module, fractions, math.isclose, careful summation order, and NumPy’s extended‑precision types.

Code Mala Tang

When using floating‑point numbers in Python you must be careful; a simple calculation can produce unexpected results.

Running print(0.1 + 0.2) yields 0.30000000000000004 instead of the expected 0.3, illustrating the precision problem.
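A minimal demonstration, runnable in any standard Python interpreter:

```python
# The classic surprise: the sum is not exactly 0.3
result = 0.1 + 0.2
print(result)         # 0.30000000000000004
print(result == 0.3)  # False
```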

Why does this happen?

The root cause is how computers store floating‑point numbers. They use binary (base‑2) representation, and many decimal fractions cannot be represented exactly in binary.

For example, 0.1 becomes an infinite repeating binary fraction:

<code>0.1 in binary ≈ 0.00011001100110011001100110011...</code>

Python stores a finite number of bits, leading to tiny rounding errors that can accumulate.
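You can reveal the exact binary value Python actually stores for 0.1 by converting the float directly to a Decimal (passing the float itself, not a string):

```python
from decimal import Decimal

# Decimal(float) reproduces the float's exact stored binary value, with no rounding
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```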

Python follows the IEEE 754 standard, which is also used by JavaScript, C, C++, Java, etc.

IEEE 754 defines floating‑point representation, rounding rules, and exception handling; it was first published in 1985 and remains the benchmark for floating‑point arithmetic.

The standard defines two main formats:

Single‑precision (32‑bit)

Double‑precision (64‑bit)

Single‑precision consists of a sign bit, an 8‑bit exponent (with a bias of 127), and a 23‑bit fraction. Its value is calculated as:

<code>(-1)^S * (1 + Fraction) * 2^(Exponent - Bias)</code>

Double‑precision expands the exponent to 11 bits and the fraction to 52 bits.
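As a sketch, the sign, exponent, and fraction fields of a double can be pulled apart with the standard struct module:

```python
import struct

# Reinterpret the 64-bit double 0.1 as an unsigned integer
bits = struct.unpack('>Q', struct.pack('>d', 0.1))[0]
sign = bits >> 63
exponent = (bits >> 52) & 0x7FF    # 11-bit biased exponent (bias 1023)
fraction = bits & ((1 << 52) - 1)  # 52-bit fraction
print(sign, exponent - 1023, hex(fraction))  # 0 -4 0x999999999999a
```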

IEEE 754 also defines several rounding modes, such as round‑to‑zero, round‑to‑nearest‑even, round‑to‑positive‑infinity, and round‑to‑negative‑infinity.
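These rounding modes have direct analogues in Python’s decimal module, which makes them easy to observe. A sketch using quantize to round 0.15 to one decimal place under each mode:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_DOWN, ROUND_CEILING, ROUND_FLOOR

x = Decimal('0.15')
for mode in (ROUND_HALF_EVEN, ROUND_DOWN, ROUND_CEILING, ROUND_FLOOR):
    # quantize rounds to the exponent of the template value under the given mode
    print(mode, x.quantize(Decimal('0.1'), rounding=mode))
# ROUND_HALF_EVEN -> 0.2, ROUND_DOWN -> 0.1, ROUND_CEILING -> 0.2, ROUND_FLOOR -> 0.1
```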

Other floating‑point formats exist (IBM hexadecimal floating point, VAX floating point), but IEEE 754 is by far the most widely adopted, including on GPU platforms such as CUDA.

How to solve this problem?

Here are common strategies:

1. Use the decimal module for high‑precision arithmetic

The decimal module performs decimal‑based calculations, avoiding binary‑precision issues.

<code>from decimal import Decimal, getcontext

# Set global precision
getcontext().prec = 28

a = Decimal('0.1')
b = Decimal('0.2')
result = a + b
print(result)  # 0.3
</code>
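One pitfall worth noting: pass strings, not floats, to the Decimal constructor. Decimal(0.1) faithfully copies the already‑imprecise binary value, so the error survives the conversion:

```python
from decimal import Decimal

print(Decimal('0.1') + Decimal('0.2'))  # 0.3  (exact decimal arithmetic)
print(Decimal(0.1) + Decimal(0.2))      # inherits the binary rounding error
```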

2. Use the fractions module for exact rational numbers

Represent numbers as fractions to eliminate rounding errors.

<code>from fractions import Fraction

a = Fraction(1, 3)
b = Fraction(2, 3)
result = a + b
print(result)  # 1
</code>
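The same fix applies directly to the motivating example; Fraction also accepts decimal strings, so 0.1 and 0.2 can be represented exactly:

```python
from fractions import Fraction

# '0.1' is parsed exactly as the rational number 1/10
result = Fraction('0.1') + Fraction('0.2')
print(result)                     # 3/10
print(result == Fraction(3, 10))  # True
```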

3. Compare floats with math.isclose

Use a tolerance‑based comparison instead of direct equality.

<code>import math

a = 0.1 + 0.2
b = 0.3
print(math.isclose(a, b, rel_tol=1e-9))  # True
</code>
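One caveat: the default relative tolerance breaks down when comparing against zero, because it scales with the larger magnitude. Pass abs_tol in that case:

```python
import math

# rel_tol is relative to the larger magnitude, which is useless when that is 0.0
print(math.isclose(1e-10, 0.0))                # False
print(math.isclose(1e-10, 0.0, abs_tol=1e-9))  # True
```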

4. Mind the order of floating‑point addition

A single a + b is commutative in IEEE 754, so swapping two operands changes nothing. Order matters when summing many values of different magnitudes: accumulating the small values together first lets them grow large enough to survive the rounding that occurs when they meet a huge value.

<code>total = 1.0e16
for _ in range(10):
    total += 1.0             # each 1.0 is absorbed by rounding; total stays 1.0e16

small_sum = 10 * 1.0         # accumulate the small terms first
better = 1.0e16 + small_sum  # 1.000000000000001e+16
</code>
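When reordering by hand is impractical, the standard library’s math.fsum computes a correctly rounded sum regardless of input order:

```python
import math

values = [1.0e16] + [1.0] * 10
print(sum(values))        # naive left-to-right sum: the 1.0s are lost
print(math.fsum(values))  # 1.000000000000001e+16
```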

5. Use NumPy’s extended‑precision types

NumPy offers np.longdouble (exposed as float128 on many Linux and macOS builds) for extended precision. Two caveats: it is platform‑dependent (on Windows it is usually identical to float64), and constructing it from a Python float inherits the double‑precision rounding error, so initialize it from strings. Even then, 0.1 still has no exact binary representation; the error shrinks considerably but does not vanish.

<code>import numpy as np

a = np.longdouble('0.1')  # parse from a string to get extended precision
b = np.longdouble('0.2')
result = a + b
print(result - np.longdouble('0.3'))  # residual error, far smaller than float64's
</code>

Choosing the appropriate technique based on the application can effectively mitigate floating‑point precision issues.

Written by Code Mala Tang

Read source code together, write articles together, and enjoy spicy hot pot together.