# How Integers and Floats Work

The way your computer does math is pretty weird. 4294967295 + 1 = 0! 0.1 + 0.2 = 0.30000000000000004! `-2` in binary is `11111110`! And what’s all this jargon? Unsigned integer? Little endian? Hexadecimal? This makes math seem unpredictable, which is very rude because math is the one thing in life that should be predictable.
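You can reproduce all three oddities in a few lines of Python (Python’s integers don’t overflow on their own, so the 32-bit wraparound is simulated here with a modulo):

```python
# 32-bit unsigned integers wrap around: 4294967295 is the maximum
# (2**32 - 1), so adding 1 overflows back to 0
print((4294967295 + 1) % 2**32)   # 0

# 0.1 and 0.2 can't be represented exactly in binary,
# so their sum picks up a tiny error
print(0.1 + 0.2)                  # 0.30000000000000004

# -2 as an 8-bit two's complement value
print(format(-2 & 0xFF, '08b'))   # 11111110
```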

Of course, the way computers do math *is* predictable — it just plays by slightly different rules than you might expect. And understanding how your computer does math unlocks a lot of things! You’ll:

- know the limitations of your data types (“Oh, I should use a 64-bit integer for this, not a 32-bit int…” or “it’s fine to use a float here because…”)
- be able to reason about WHY your computer is doing weird stuff with numbers (for example: why does `echo '{ "id": 1648521499652009984 }' | jq '.'` change the number from `1648521499652009984` to `1648521499652010000`?)
- unlock a whole world of binary data and technical specifications you can read more easily (like Wireshark’s packet visualizations!)
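A quick sketch of what’s going on in that jq example: jq (like many JSON tools) stores JSON numbers as 64-bit floats, which only have 53 bits of precision, so integers this large get rounded to a nearby representable value. You can see the collision in Python:

```python
a = 1648521499652009984
b = 1648521499652010000

# A 64-bit float has a 53-bit mantissa, so at this magnitude
# (around 2**60) consecutive floats are 256 apart, and many
# nearby integers collapse onto the same float
print(float(a) == float(b))   # True
```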

And the way integers and floats are represented isn’t going to change (floating point was standardized in 1985!), so you only have to learn it once.

This zine will explain:

- the jargon: signed/unsigned, little/big endian, 32 bit, bytes, hexadecimal, and more
- why floating point math is so weird (and why a little weirdness is inevitable)
- how integers and floats are represented in memory
- exactly how floating point numbers work, down to the binary representation
- some alternatives to floating point
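As a taste of that binary representation: here’s a small sketch (using Python’s `struct` module) of how you can peek at the 64 bits of a float and split them into their IEEE 754 fields:

```python
import struct

# reinterpret the 8 bytes of a double as a 64-bit unsigned integer
bits = struct.unpack('>Q', struct.pack('>d', 0.1))[0]

# IEEE 754 double: 1 sign bit, 11 exponent bits, 52 mantissa bits
print(format(bits, '064b'))
print('sign    :', bits >> 63)
print('exponent:', (bits >> 52) & 0x7FF)   # biased by 1023
print('mantissa:', bits & ((1 << 52) - 1))
```

For 0.1 the sign bit is 0 and the biased exponent comes out to 1019 (i.e. 2**-4, since 0.1 ≈ 1.6 × 2**-4).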

The zine also comes with a playground called memory spy where you can run programs and spy on the integers and floats in their memory.