Computers store everything — numbers, text, images, code — as sequences of 0s and 1s called bits. Understanding binary is fundamental to understanding computation.
In decimal (base-10), each position is a power of 10. In binary (base-2), each position is a power of 2. For example, binary 1101 is 1·8 + 1·4 + 0·2 + 1·1 = 13.
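The place-value rule can be sketched in a few lines of Python; `binary_to_decimal` is an illustrative helper name, not a standard function:

```python
def binary_to_decimal(bits: str) -> int:
    """Expand a binary string digit by digit using the place-value rule."""
    value = 0
    for bit in bits:
        # Shift the accumulated value one place left (x2), then add the new digit.
        value = value * 2 + int(bit)
    return value

print(binary_to_decimal("1101"))  # 8 + 4 + 0 + 1 = 13
```

Python's built-in `int("1101", 2)` does the same conversion; the loop just makes the arithmetic explicit.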
Programmers use hex as a compact notation for binary. Each hex digit represents exactly 4 bits:
| Decimal | Binary | Hex |
|---|---|---|
| 0–9 | 0000–1001 | 0–9 |
| 10 | 1010 | A |
| 15 | 1111 | F |
| 255 | 11111111 | FF |
Memory addresses, color codes (#FF6600), and file offsets are all written in hexadecimal.
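Because each hex digit corresponds to exactly 4 bits, conversion between the two bases works digit by digit. A minimal sketch (`hex_to_binary` is a hypothetical helper name):

```python
def hex_to_binary(hex_str: str) -> str:
    """Convert a hex string to binary, 4 bits per hex digit."""
    # int(d, 16) parses one hex digit; format(..., "04b") pads to 4 bits.
    return "".join(format(int(d, 16), "04b") for d in hex_str)

print(hex_to_binary("A"))   # 1010
print(hex_to_binary("FF"))  # 11111111
```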
An n-bit unsigned integer can represent values from 0 to 2ⁿ−1: an 8-bit byte holds 0–255, and a 32-bit int holds 0–4,294,967,295.
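The range formula is easy to check directly; `unsigned_range` is an illustrative name for this sketch:

```python
def unsigned_range(n_bits: int) -> tuple:
    """Return the (min, max) values an n-bit unsigned integer can hold."""
    return (0, 2 ** n_bits - 1)

print(unsigned_range(8))   # (0, 255)
print(unsigned_range(32))  # (0, 4294967295)
```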
To represent negative numbers, computers use two's complement: to negate a value, flip all its bits and add 1. This lets the same addition hardware handle both positive and negative numbers.
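The flip-and-add-one rule can be sketched as follows, assuming an 8-bit width; `twos_complement` is a hypothetical helper name:

```python
def twos_complement(value: int, n_bits: int = 8) -> int:
    """Negate a value in n-bit two's complement: flip all bits, add 1."""
    mask = (1 << n_bits) - 1       # e.g. 0xFF for 8 bits
    return ((~value) + 1) & mask   # mask confines the result to n bits

# -1 in 8 bits is all ones:
print(format(twos_complement(1), "08b"))  # 11111111

# The same wrap-around addition computes 5 - 3 as 5 + (-3):
print((5 + twos_complement(3)) & 0xFF)    # 2
```

The second example illustrates the point in the text: plain unsigned addition, truncated to 8 bits, produces the correct signed result.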
Review questions:

1. How many distinct values can an 8-bit unsigned integer represent?
2. What is the decimal value of binary 1101?
3. Hexadecimal is base-16. What decimal value does the hex digit 'A' represent?
4. What kind of numbers does two's complement represent?
5. What is 0xFF in decimal?