CS 121 › Lesson 11 of 12

Algorithms & Complexity

Lesson 11 · OKSTEM College · AS Computer Science

Asymptotic Notation

We describe algorithm efficiency in terms of how running time grows as input size n grows, ignoring constants and lower-order terms.

| Notation | Meaning | Intuition |
| --- | --- | --- |
| O(g(n)) | Upper bound | Running time grows no faster than this in the worst case |
| Ω(g(n)) | Lower bound | Algorithm needs at least this many steps |
| Θ(g(n)) | Tight bound | Algorithm is exactly this order of growth |
Common complexities (slowest to fastest growth):
O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) < O(n!)
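The ordering above can be felt numerically. A minimal sketch (the `growth_table` helper is hypothetical, purely for illustration) evaluates each growth rate at n = 20:

```python
import math

def growth_table(n):
    # Step counts for each common growth rate at input size n,
    # listed in the same slowest-to-fastest-growth order as above.
    return {
        "O(1)": 1,
        "O(log n)": math.log2(n),
        "O(n)": n,
        "O(n log n)": n * math.log2(n),
        "O(n^2)": n ** 2,
        "O(2^n)": 2 ** n,
        "O(n!)": math.factorial(n),
    }

for name, steps in growth_table(20).items():
    print(f"{name:>10}: {steps:,.0f}")
```

Even at n = 20, the exponential and factorial rows dwarf the polynomial ones, which is why the ordering matters in practice.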

P vs NP

P: problems solvable in polynomial time (efficient). NP: problems whose solutions can be verified in polynomial time. NP-complete: hardest problems in NP — solve one efficiently and you solve them all.

Whether P = NP is the greatest open problem in computer science. Most researchers believe P ≠ NP, but no proof exists. A polynomial-time algorithm for any NP-complete problem would transform cryptography, logistics, biology, and AI.
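The P/NP asymmetry, that verifying a solution is easy while finding one may not be, can be illustrated with subset sum (an NP-complete problem). The function names below are illustrative sketches: checking a candidate subset takes polynomial time, while the brute-force search examines up to 2ⁿ subsets.

```python
from itertools import combinations

def verify(numbers, target, candidate):
    # Polynomial-time verification: membership check plus one sum.
    return all(x in numbers for x in candidate) and sum(candidate) == target

def solve_brute_force(numbers, target):
    # Exponential-time search: tries every subset until one works.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
witness = solve_brute_force(nums, 9)
print(witness, verify(nums, 9, witness))
```

If P = NP, a polynomial-time replacement for `solve_brute_force` would exist for every problem in NP.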

Sorting Algorithms Summary

| Algorithm | Best | Average | Worst | Space |
| --- | --- | --- | --- | --- |
| Bubble sort | O(n) | O(n²) | O(n²) | O(1) |
| Merge sort | O(n log n) | O(n log n) | O(n log n) | O(n) |
| Quick sort | O(n log n) | O(n log n) | O(n²) | O(log n) |
| Heap sort | O(n log n) | O(n log n) | O(n log n) | O(1) |
| Counting sort | O(n+k) | O(n+k) | O(n+k) | O(k) |
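As a sketch of the table's O(n log n)-in-all-cases entry, here is a minimal merge sort (one illustrative implementation, not the only way to write it):

```python
def merge_sort(a):
    # O(n log n) in every case; uses O(n) extra space for the merge buffers.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # → [1, 2, 5, 5, 6, 9]
```

The recursion splits the input log n times, and each level does O(n) merging work, giving the O(n log n) total.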

Practice Problems

1. An algorithm runs in 100n² + 50n + 10 steps. What is its Big-O?

O(n²). Drop constants (100) and lower-order terms (50n, 10). Big-O captures the dominant growth term.
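The dominant-term rule can be checked numerically: the ratio of 100n² + 50n + 10 to n² approaches the constant 100 as n grows, showing the lower-order terms become negligible. A small sketch:

```python
def steps(n):
    # Step count from the practice problem.
    return 100 * n**2 + 50 * n + 10

# The ratio tends to 100 because 50/n and 10/n^2 vanish as n grows.
for n in (10, 1_000, 100_000):
    print(n, steps(n) / n**2)
```

This is exactly what O(n²) asserts: for large n, the runtime is bounded by a constant multiple of n².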

2. Is searching a sorted array with binary search O(log n)? Why?

Yes. Binary search halves the search space each step: n → n/2 → n/4 → … → 1. Number of steps = log₂ n. So time complexity = O(log n).
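The halving argument can be demonstrated directly. A minimal binary search sketch that also counts its halving steps (the step counter is added purely for illustration):

```python
import math

def binary_search(a, target):
    # Returns (index of target or -1, number of halving steps taken).
    lo, hi, steps = 0, len(a) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid, steps
        if a[mid] < target:
            lo = mid + 1   # discard the left half
        else:
            hi = mid - 1   # discard the right half
    return -1, steps

a = list(range(1024))            # sorted array, n = 1024, log2(n) = 10
_, steps = binary_search(a, 0)   # element at the far edge of the array
print(steps, math.log2(1024))    # prints: 10 10.0
```

For n = 1024 the search never takes more than about log₂ 1024 = 10 comparisons, matching the O(log n) bound.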

Knowledge Check

O(g(n)) gives the

Big-O is asymptotic — it hides constants and lower-order terms.
Correct — O(n²) means the runtime grows no faster than cn² for large n.
Best case is Ω; upper bound is O.
Average case can be any notation; O specifically means upper bound.
Recap: O = upper bound (worst case). Ω = lower bound (best case). Θ = both (tight). In practice 'Big-O' usually refers to worst-case behaviour.

An algorithm that runs in O(n log n) is

n log n grows slower than n²; log n < n for large n.
Correct — n < n log n < n² for large n.
O(log n) < O(n log n).
They differ by a log n factor, which grows without bound.
Recap: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ). Merge sort at O(n log n) is optimal for comparison-based sorting.

The class P contains problems

P problems are efficiently solvable.
Correct — P = 'efficiently solvable'; examples: sorting, shortest path, primality testing.
That's NP (non-deterministic polynomial time).
NP-complete is a subset of NP; P problems are efficiently solvable.
Recap: P = can be SOLVED in poly time. NP = can be VERIFIED in poly time. NP-complete = hardest problems in NP.

Merge sort's O(n log n) worst case makes it

Quicksort's worst case is O(n²); merge sort is O(n log n) always.
Correct — merge sort never degrades; quicksort can hit O(n²) on sorted input.
O(n log n) is optimal for comparison sorts; counting sort breaks this with non-comparison approach.
Bubble sort is O(n²) — far worse.
Recap: merge sort = O(n log n) always. Quicksort = O(n log n) average but O(n²) worst. Merge sort wins when worst-case guarantee matters.

If P = NP were proven true, the most dramatic consequence would be

P=NP means polynomial time, not constant time.
Correct — RSA relies on integer factorisation being hard (NP, not known to be in P). P=NP would give a poly-time factoring algorithm.
P=NP is about algorithmic complexity, not hardware speed.
P=NP would revolutionise cryptography, optimisation, AI, and biology.
Recap: P=NP would mean every problem whose solution can be verified quickly can also be found quickly. RSA security depends on the opposite being true.
