We describe algorithm efficiency in terms of how running time grows as input size n grows, ignoring constants and lower-order terms.
| Notation | Meaning | Intuition |
|---|---|---|
| O(g(n)) | Upper bound | Running time grows no faster than g(n) |
| Ω(g(n)) | Lower bound | Algorithm needs at least this many steps |
| Θ(g(n)) | Tight bound | Algorithm is exactly this order of growth |
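The claim that constants and lower-order terms can be ignored is easy to check numerically. The sketch below (polynomial coefficients chosen here purely for illustration) shows that a function like 3n² + 5n + 7 behaves like n² for large n: the ratio to n² settles toward the leading constant.

```python
# Why constants and lower-order terms are dropped:
# f(n) = 3n^2 + 5n + 7 is Θ(n^2), because f(n)/n^2 -> 3 as n grows.

def f(n):
    return 3 * n**2 + 5 * n + 7

for n in [10, 1_000, 1_000_000]:
    print(n, f(n) / n**2)  # ratio approaches the constant 3
```

The same check works for any polynomial: only the highest-degree term survives in the limit, which is exactly what Θ-notation captures.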
P: problems solvable in polynomial time (conventionally, "efficient"). NP: problems whose proposed solutions can be verified in polynomial time. NP-complete: the hardest problems in NP — solve any one of them efficiently and you can solve them all.
Whether P = NP is the most famous open problem in computer science. Most researchers believe P ≠ NP, but no proof exists either way. A polynomial-time algorithm for any NP-complete problem would transform cryptography, logistics, biology, and AI.
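The "verified in polynomial time" idea can be made concrete with Subset Sum, a standard NP-complete problem (this example is an assumed illustration, not from the text): finding a subset that hits a target may take exponential search, but checking a proposed subset takes only linear time.

```python
# NP-style verification for Subset Sum: given a certificate
# (indices of a claimed subset), checking it is fast even though
# finding one may not be.

def verify_subset_sum(numbers, target, certificate):
    """Return True iff the elements at the certificate's indices sum to target.
    Runs in time linear in the certificate length."""
    return sum(numbers[i] for i in certificate) == target

nums = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(nums, 9, [2, 4]))  # True: 4 + 5 == 9
print(verify_subset_sum(nums, 9, [0, 1]))  # False: 3 + 34 != 9
```

This asymmetry — hard to solve, easy to check — is exactly what defines NP.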
| Algorithm | Best | Average | Worst | Space |
|---|---|---|---|---|
| Bubble sort | O(n) | O(n²) | O(n²) | O(1) |
| Merge sort | O(n log n) | O(n log n) | O(n log n) | O(n) |
| Quick sort | O(n log n) | O(n log n) | O(n²) | O(log n) |
| Heap sort | O(n log n) | O(n log n) | O(n log n) | O(1) |
| Counting sort | O(n+k) | O(n+k) | O(n+k) | O(k) |
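As a concrete instance of the table, here is a minimal merge sort sketch: its worst case is O(n log n) because each level of recursion does O(n) merging work across O(log n) levels, and the temporary merged lists account for the O(n) extra space.

```python
# Merge sort: O(n log n) in all cases, O(n) auxiliary space.

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Using `<=` in the merge keeps equal elements in their original order, making this version stable.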
1. An algorithm runs in 100n² + 50n + 10 steps. What is its Big-O?
2. Is searching a sorted array with binary search O(log n)? Why?
O(g(n)) gives the upper bound on an algorithm's order of growth, ignoring constant factors and lower-order terms.
An algorithm that runs in O(n log n) is efficient enough for large inputs, scaling only slightly worse than linear.
The class P contains problems solvable in polynomial time.
Merge sort's O(n log n) worst case makes it a dependable choice when worst-case guarantees matter, at the cost of O(n) extra space.
If P = NP were proven true, the most dramatic consequence would be that every problem whose solutions can be verified efficiently could also be solved efficiently — undermining cryptography built on problems that are easy to check but hard to solve.