An algorithm is a finite, unambiguous sequence of steps that solves a problem. Algorithm analysis measures how resource usage (time, memory) scales with input size.
Big-O describes how an algorithm's run time grows as input size n increases. It's the most important tool for comparing algorithms.
O(1) (constant time). What it means: the algorithm always takes the same amount of time regardless of input size. Doubling the input changes nothing.
Real examples:
array[42]: the memory address is computed directly, in one step.
Why it matters: O(1) is the gold standard. If you can design an algorithm to be constant time, it will work equally well with 10 records or 10 billion.
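As a minimal sketch (the function and variable names here are illustrative, not from the text), both array indexing and average-case hash lookup are constant-time:

```python
def constant_time_lookups(items, index, table, key):
    # Array indexing: the address is computed directly, one step.
    by_position = items[index]
    # Dict (hash table) lookup: average-case O(1), independent of size.
    by_key = table[key]
    return by_position, by_key

# The cost is the same whether `items` holds 100 elements or 10 billion.
data = list(range(100))
lookup = {"answer": 42}
print(constant_time_lookups(data, 42, lookup, "answer"))  # (42, 42)
```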
O(log n) (logarithmic time). What it means: each step eliminates half the remaining possibilities. Doubling the input adds only one more step.
Real examples:
Intuition: log₂(1,000,000,000) ≈ 30. So searching a billion sorted items takes about 30 steps. That's why sorted data + binary search is so powerful.
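The halving step can be sketched as a standard binary search (a textbook version, not tied to any particular library):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each comparison halves the remaining range, so a billion items
    need only about 30 comparisons (log2(10**9) is roughly 30).
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1  # target is in the upper half
        else:
            hi = mid - 1  # target is in the lower half
    return -1

evens = list(range(0, 100, 2))   # 0, 2, 4, ..., 98
print(binary_search(evens, 42))  # 21
```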
O(n) (linear time). What it means: you touch every element once. Doubling the input exactly doubles the time.
Real examples:
When it's the best possible: if you must read all the input (e.g., summing every element), O(n) is optimal. You can't do better than looking at each item once.
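Summing is the canonical linear-time example; a sketch with an explicit loop (Python's built-in `sum` does the same work):

```python
def total(values):
    # One pass over the input: n elements, n additions -> O(n).
    running = 0
    for v in values:
        running += v
    return running

print(total(range(1, 101)))  # 5050
```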
O(n log n) (linearithmic time). What it means: slightly worse than linear but far better than quadratic. This is the complexity of the best general-purpose sorting algorithms.
Real examples:
Python's list.sort() uses Timsort, which runs in O(n log n).
Why it matters: it has been mathematically proven that no comparison-based sort can beat O(n log n) in the worst case. This is a theoretical lower bound, not just a limit of current algorithms.
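In practice you would just call list.sort(), but a classic merge sort makes the O(n log n) shape visible: log n levels of splitting, with O(n) merge work at each level (a teaching sketch, not how Timsort is implemented):

```python
def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge two sorted halves in O(n).
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```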
O(n²) (quadratic time). What it means: doubling the input quadruples the time. It typically arises when you have a loop inside a loop.
Real examples:
When it becomes a problem: n = 10,000 means 100 million operations; at 1 billion ops/second that's 0.1 seconds, which is borderline. n = 1,000,000 means 10¹² operations, over 16 minutes. Don't use O(n²) on large data.
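The loop-inside-a-loop pattern can be sketched as a naive duplicate check (an illustrative example; a set-based check would do this in O(n)):

```python
def has_duplicate_pair(values):
    # Nested loops compare every pair: n * (n - 1) / 2 comparisons -> O(n^2).
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                return True
    return False

print(has_duplicate_pair([3, 1, 4, 1, 5]))  # True
print(has_duplicate_pair([3, 1, 4]))        # False
```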
O(2ⁿ) (exponential time). What it means: each additional input element doubles the work. These algorithms are practical only for tiny inputs (n < ~30).
Real examples:
fib(n) recalculates the same subproblems exponentially many times; fix this with memoization.
Why it matters: many important real-world problems (scheduling, packing, routing) are exponential in the worst case. This is the core of the famous P vs NP question in computer science, one of the Millennium Prize Problems with a $1 million prize.
O(2ⁿ) algorithms become completely impractical above ~60 input elements. P vs NP is the most famous unsolved problem in CS — it asks whether every problem that can be verified quickly can also be solved quickly.
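The Fibonacci example above can be sketched both ways: the naive recursion branches twice per call (O(2ⁿ)), while memoization caches each subproblem and drops the cost to O(n):

```python
from functools import lru_cache

def fib_naive(n):
    # O(2^n): each call spawns two more, recomputing the same
    # subproblems over and over.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoization: each distinct n is computed once -> O(n) total.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(20))  # 6765 (already noticeably slow around n = 35)
print(fib_memo(90))   # 2880067194370816120 (infeasible for fib_naive)
```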
Big-O notation describes what property of an algorithm?
Binary search runs in O(log n). For n=1024, roughly how many steps?
Which sorting algorithm has O(n log n) average-case complexity?
An O(n²) algorithm on n=1000 performs approximately how many operations?
P vs NP is a famous open problem. What question does it ask?