From The Floating-Point Guide:
Because internally, computers use a format (binary floating-point)
that cannot represent a number like 0.1, 0.2 or 0.3 exactly.
When the code is compiled or interpreted, your “0.1” is already
rounded to the nearest number in that format, which results in a small
rounding error even before the calculation happens.
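A minimal sketch of that pre-calculation rounding, assuming the question involved the usual `0.1 + 0.2` example. `Decimal` is used only to display the exact double the parser stores for the literal `0.1`:

```python
from decimal import Decimal

# The exact binary double stored for the literal "0.1" -- already
# rounded before any arithmetic happens:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Those tiny per-literal rounding errors accumulate in the sum:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004
```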
That accounts for your first example. The second one involves only integers, not fractions, and integers can be represented exactly in the binary floating-point format: a double has 53 significant bits, so every integer up to 2^53 is exact.
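A quick sketch of that integer limit: integers are exact doubles up to 2^53, and the first gap appears immediately past it.

```python
big = 2 ** 53  # 9007199254740992, the last point where doubles are gap-free

# Every integer up to 2**53 round-trips through float exactly:
print(float(big) == big)          # True

# 2**53 + 1 is the first integer a double cannot hold; it rounds
# back down to 2**53, so the comparison fails:
print(float(big + 1) == big + 1)  # False
print(float(big + 1))             # 9007199254740992.0
```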