CS 101 › Lesson 6 of 8

Algorithms & Complexity

Lesson 6 · OKSTEM College · Associate of Science in Computer Science

An algorithm is a finite, unambiguous sequence of steps that solves a problem. Algorithm analysis measures how resource usage (time, memory) scales with input size.

Big-O Notation

Big-O describes how an algorithm's run time grows as input size n increases. It's the most important tool for comparing algorithms.

O(1) — Constant time (n=1000 → 1 op)

What it means: The algorithm always takes the same amount of time regardless of input size. Doubling the input changes nothing.

Real examples:

  • Looking up array[42] — memory address is computed directly, one step.
  • Pushing/popping a stack — just move a pointer.
  • Hash table lookup (average case) — hash function computes location directly.

Why it matters: O(1) is the gold standard. If you can design an algorithm to be constant time, it will work equally well with 10 or 10 billion records.

n=100: 1 op
n=1M: 1 op
n=1B: 1 op
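The examples above can be sketched in a few lines of Python; each operation below costs the same whether the collection holds ten items or a million (the dict lookup is average-case O(1)):

```python
# Constant-time operations: cost does not depend on collection size.
arr = list(range(1_000_000))
print(arr[42])            # direct indexing — the address is computed in one step

stack = []
stack.append("task")      # push: just extend the end — O(1)
top = stack.pop()         # pop: just shrink the end — O(1)

ages = {"ada": 36, "alan": 41}
print(ages["alan"])       # hash table lookup — average-case O(1)
```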
📉 O(log n) — Logarithmic time (n=1000 → ~10 ops)

What it means: Each step eliminates half the remaining possibilities. Doubling the input only adds one more step.

Real examples:

  • Binary search: Find a name in a sorted phone book by always opening to the middle — 1 billion names requires only ~30 comparisons.
  • Balanced BST lookup: Tree height grows as log₂(n).
  • Exponentiation by squaring: compute x¹⁰⁰⁰ with roughly 10 squarings instead of 999 multiplications.

Intuition: log₂(1,000,000,000) ≈ 30. So searching a billion sorted items takes about 30 steps. That's why sorted data + binary search is so powerful.

n=1K: ~10 ops
n=1M: ~20 ops
n=1B: ~30 ops
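Binary search takes only a few lines; this is a minimal iterative sketch, assuming the input list is already sorted:

```python
def binary_search(sorted_list, target):
    """Return the index of target, or -1 if absent. O(log n) comparisons."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # open the "phone book" in the middle
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1              # discard the left half
        else:
            hi = mid - 1              # discard the right half
    return -1

print(binary_search(list(range(1024)), 500))  # n=1024 → at most ~10 halvings
```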
➡️ O(n) — Linear time (n=1000 → 1,000 ops)

What it means: You touch every element once. Doubling the input exactly doubles the time.

Real examples:

  • Linear search: Find a name in an unsorted list — worst case you check every entry.
  • Finding max/min: Must check every element at least once.
  • Printing all elements: You have to visit n items to print n items.

When it's the best possible: If you must read all the input (e.g., summing every element), O(n) is optimal — you can't do better than looking at each item once.

n=1K: 1,000 ops
n=1M: 1,000,000 ops
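A quick sketch of the first two linear-time examples above; in the worst case each function must touch every element once:

```python
def linear_search(items, target):
    """Scan left to right; worst case checks all n entries. O(n)."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def find_max(items):
    """Every element must be examined at least once. O(n)."""
    best = items[0]
    for item in items[1:]:
        if item > best:
            best = item
    return best
```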
📊 O(n log n) — Linearithmic time (n=1000 → ~10,000 ops)

What it means: Slightly worse than linear but far better than quadratic. This is the complexity of the best general-purpose sorting algorithms.

Real examples:

  • Merge sort: Divide the list in half (log n levels), merge each level (n work per level) → O(n log n) total.
  • Heap sort, Timsort: Python's built-in list.sort() is Timsort — O(n log n).
  • Fast Fourier Transform (FFT): Powers audio compression (MP3) and signal processing.

Why it matters: It\'s been mathematically proven that no comparison-based sort can beat O(n log n) in the worst case. This is a theoretical lower bound — not just a limit of current algorithms.

n=1K: ~10K ops
n=1M: ~20M ops
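The op counts in the table come straight from n · log₂(n), which is easy to check in Python; the last line invokes Timsort via the built-in sorted:

```python
import math

# n * log2(n) reproduces the table: ~10K ops at n=1K, ~20M ops at n=1M.
for n in (1_000, 1_000_000):
    print(f"n={n:>9,}: ~{n * math.log2(n):,.0f} ops")

# Python's built-in sort is Timsort, an O(n log n) algorithm:
print(sorted([5, 3, 8, 1]))  # → [1, 3, 5, 8]
```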
O(n²) — Quadratic time (n=1000 → 1,000,000 ops)

What it means: Doubling the input quadruples the time. Typically arises when you have a loop inside a loop.

Real examples:

  • Bubble sort / insertion sort: For each of n elements, potentially scan n elements to find the right place.
  • Checking all pairs: "Find two numbers in this list that sum to X" — naive approach checks every pair.
  • Naive duplicate detection: Compare each element against every other element — n × n comparisons. (Naive n×n matrix multiplication is even worse: n² dot products of length n each, so O(n³).)

When it becomes a problem: n=10,000 means 100 million operations. At 1 billion ops/second that\'s 0.1 seconds — borderline. n=1,000,000 means 10¹² operations — over 16 minutes. Don\'t use O(n²) on large data.

n=1K: 1M ops
n=10K: 100M ops
n=1M: 10¹² ops ⚠️
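The "check all pairs" example above, sketched with the quadratic nested-loop version next to a linear hash-set rewrite (both function names are illustrative, not from a library):

```python
def has_pair_sum_quadratic(nums, target):
    """Nested loops check every pair — the classic O(n²) shape."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_sum_linear(nums, target):
    """One pass with a hash set of values seen so far — O(n) average."""
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```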
💣 O(2ⁿ) — Exponential time (n=50 → 10¹⁵ ops)

What it means: Each additional input element doubles the work. These algorithms are only practical for tiny inputs (n < ~30).

Real examples:

  • Brute-force password cracking: Trying all possible 8-character passwords: 26⁸ ≈ 200 billion combinations.
  • Travelling Salesman Problem (brute force): Check all n! routes between cities — factorial growth, even worse than 2ⁿ.
  • Recursive Fibonacci (naive): fib(n) recalculates subproblems exponentially — fix with memoization.

Why it matters: Many important real-world problems (scheduling, packing, routing) are exponential in the worst case. This is the core of the famous P vs NP problem in computer science — one of the Millennium Prize Problems with a $1 million prize.

n=20: 1M ops
n=50: 10¹⁵ ops
n=300: 2³⁰⁰ ops — more than the number of atoms in the observable universe

O(2ⁿ) algorithms become completely impractical above ~60 input elements. P vs NP is the most famous unsolved problem in CS — it asks whether every problem that can be verified quickly can also be solved quickly.
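The naive recursive Fibonacci from the examples above, next to its memoized fix; caching collapses the exponential call tree into O(n) distinct subproblems:

```python
from functools import lru_cache

def fib_naive(n):
    """Exponential: recomputes the same subproblems over and over."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Each fib(k) is computed once, then served from the cache. O(n)."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # → 12586269025, instant; fib_naive(50) would take hours
```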

Sorting Algorithms

```python
# Bubble sort — O(n²)
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

# Merge sort — O(n log n)
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    L = merge_sort(arr[:mid])
    R = merge_sort(arr[mid:])
    return merge(L, R)  # merge two sorted halves

def merge(L, R):
    """Merge two sorted lists into one sorted list. O(len(L) + len(R))."""
    out, i, j = [], 0, 0
    while i < len(L) and j < len(R):
        if L[i] <= R[j]:
            out.append(L[i])
            i += 1
        else:
            out.append(R[j])
            j += 1
    out.extend(L[i:])
    out.extend(R[j:])
    return out
```

Lab — Big-O Growth Visualizer

Knowledge Check

Big-O notation describes

Binary search runs in O(log n). For n=1024, roughly how many steps?

Which sorting algorithm has O(n log n) average-case complexity?

An O(n²) algorithm on n=1000 performs approximately how many operations?

P vs NP is a famous open problem. It asks whether
