# Time Complexity
Time complexity estimates how the number of operations grows as input size grows.
- Linear search in an unsorted list is usually O(n) time.
- Binary search on sorted data is O(log n) time.
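A minimal sketch of those two cases (function names and data are illustrative; binary search here leans on the standard `bisect` module):

```python
from bisect import bisect_left

def linear_search(items, target):
    """O(n): may have to look at every element before finding the target."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): halves the search range each step (requires sorted input)."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(0, 100, 2))   # sorted: 0, 2, 4, ..., 98
print(linear_search(data, 42))  # 21 -- walked 22 elements to get there
print(binary_search(data, 42))  # 21 -- found with only a handful of probes
```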
Big O is a way to describe how work grows as input gets bigger. It is less about exact milliseconds and more about growth pattern.
Big O is used for two different things: how long an algorithm takes (time complexity) and how much extra memory it needs (space complexity). You should evaluate both, because a very fast solution can still be a bad fit if it uses too much memory.
Time complexity estimates how the number of operations grows as input size grows: linear search takes O(n) time, while binary search takes O(log n) time.

Space complexity estimates how much additional memory (beyond the input itself) grows with input size: an in-place algorithm uses O(1) extra space, while merge sort needs O(n) extra space for merge buffers.

You can often trade memory for speed. A hash map can reduce repeated lookups from O(n) to about O(1) average time, but it uses extra memory to store keys and buckets.
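A small sketch of that trade-off (the order IDs are made up): checking membership in a plain list is a linear scan every time, while building a set costs O(n) extra memory once and then makes each lookup O(1) on average.

```python
orders = ["A17", "B22", "C31", "A17", "D90"]

def has_order_slow(order_id):
    """O(n) per lookup: list membership scans the list each time."""
    return order_id in orders

# Pay O(n) extra memory once...
order_set = set(orders)

def has_order_fast(order_id):
    """...and each later lookup becomes O(1) on average (hash lookup)."""
    return order_id in order_set

print(has_order_slow("C31"), has_order_fast("C31"))  # True True
print(has_order_slow("Z99"), has_order_fast("Z99"))  # False False
```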
Two solutions can both work for 100 records, but one may become painfully slow for 100,000. Big O helps you choose solutions that stay fast as usage grows.
In practice, teams compare both time and space complexity to choose the best compromise for their actual runtime and hardware constraints.
Concrete examples help Big O click faster. Use these as mental shortcuts when comparing solutions.
### O(1)
Reading the battery percentage shown on your phone screen right now.
Why this fits: it is one direct read from a fixed location. It does not require scanning a list, so the steps stay constant.
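In code, the analogous constant-time step is a direct keyed or indexed read; the device state below is made up for illustration:

```python
# Hypothetical device state; one keyed read costs the same
# no matter how many other entries the dict holds.
device_state = {"battery_percent": 87, "wifi_on": True, "volume": 40}

def battery_percent(state):
    """O(1): one direct lookup, no scanning."""
    return state["battery_percent"]

print(battery_percent(device_state))  # 87
```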
### O(log n)
Playing a higher/lower guessing game from 1 to 1,000.
Why this fits: each guess removes about half of the remaining possibilities, so steps grow slowly even when n gets large.
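The guessing game can be sketched directly (the function name is illustrative); counting guesses shows how slowly the work grows:

```python
def guesses_needed(secret, low=1, high=1000):
    """Count higher/lower guesses using the halving strategy."""
    count = 0
    while low <= high:
        count += 1
        mid = (low + high) // 2
        if mid == secret:
            return count
        if mid < secret:
            low = mid + 1   # guess was too low, discard the bottom half
        else:
            high = mid - 1  # guess was too high, discard the top half
    return count

print(guesses_needed(777))  # 8 -- never more than 10 for a 1..1000 range
```

Ten guesses suffice for 1,000 possibilities because 2^10 = 1,024; doubling the range to 2,000 adds only one more guess.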
### O(n)
Scanning every name in an attendance sheet until you find one person.
Why this fits: in the worst case you check names one by one. If the list doubles, the checks roughly double too.
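A sketch that counts the checks (roster names are made up), making the "doubling the list doubles the checks" point concrete:

```python
def checks_to_find(attendance, name):
    """O(n): return how many names are examined before finding `name`."""
    for checks, entry in enumerate(attendance, start=1):
        if entry == name:
            return checks
    return len(attendance)  # checked everyone, name not present

roster = [f"student-{i}" for i in range(1, 101)]
print(checks_to_find(roster, "student-100"))  # 100 -- worst case touches every name
```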
### O(n log n)
Organizing a big stack of cards by repeatedly splitting into smaller piles, ordering each pile, then combining.
Why this fits: you touch all n cards across each round, and the number of rounds grows like log n because piles are repeatedly halved.
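The card-pile description is essentially merge sort; a compact sketch:

```python
def merge_sort(values):
    """O(n log n): log n levels of halving, and each level merges all n items."""
    if len(values) <= 1:
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])
    right = merge_sort(values[mid:])
    # Merge the two ordered piles back together.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([7, 3, 9, 1, 4]))  # [1, 3, 4, 7, 9]
```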
### O(n²)
Comparing every student in a class with every other student to find duplicate notes.
Why this fits: each item is compared with many others, creating a nested loop effect. Doubling n makes work jump by about 4x.
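The nested-loop effect in code (the data is made up): n items produce n(n-1)/2 pair comparisons, so doubling n roughly quadruples the work.

```python
def find_duplicates(notes):
    """O(n^2): nested loops compare every pair exactly once."""
    dupes = []
    for i in range(len(notes)):
        for j in range(i + 1, len(notes)):
            if notes[i] == notes[j]:
                dupes.append(notes[i])
    return dupes

print(find_duplicates(["a", "b", "a", "c", "b"]))  # ['a', 'b']
```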
### O(2ⁿ)
Trying every possible yes/no feature combination for a product configuration.
Why this fits: each new yes/no choice doubles total combinations. The growth is exponential and becomes huge very quickly.
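Enumerating the combinations directly shows the doubling (feature names are illustrative):

```python
from itertools import product

features = ["gps", "camera", "nfc"]

# O(2^n): every yes/no choice doubles the number of combinations.
combos = list(product([False, True], repeat=len(features)))
print(len(combos))  # 8 combinations for 3 features; a 4th feature makes it 16
```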
### O(n!)
Trying every possible seating order for guests around a dinner table.
Why this fits: adding one guest creates many new orderings at once, so possibilities explode even faster than exponential growth.
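The seating orders can be enumerated with `itertools.permutations` (guest names are made up):

```python
from itertools import permutations

guests = ["Ana", "Ben", "Caro", "Dev"]

# O(n!): every possible ordering of the guests.
orders = list(permutations(guests))
print(len(orders))  # 24 orderings for 4 guests; a 5th guest makes it 120
```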
### O(n + m)
Reading two separate checklists: one with n items and one with m items, one pass each.
Why this fits: total work is the sum of both list lengths. You are not comparing every item in one list to every item in the other.
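A toy sketch of the additive pattern (the step counter is only there to make the sum visible):

```python
def combined_length_pass(items_a, items_b):
    """O(n + m): one pass over each list; work is the sum, not the product."""
    steps = 0
    for _ in items_a:   # n steps
        steps += 1
    for _ in items_b:   # m steps
        steps += 1
    return steps

print(combined_length_pass(range(1000), range(250)))  # 1250
```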
At small scale, even inefficient code can feel fast. At large scale, complexity dominates cost, latency, and hardware footprint. This is why O(n log n) sort strategies usually beat O(n²) once data gets big.
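An illustrative benchmark of that claim, comparing Python's built-in `sorted` (Timsort, O(n log n)) against a handwritten O(n²) insertion sort; exact timings depend on your machine, so no specific numbers are claimed here:

```python
import random
import time

def insertion_sort(values):
    """O(n^2) on average: each element may shift past many predecessors."""
    out = list(values)
    for i in range(1, len(out)):
        key = out[i]
        j = i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key
    return out

data = [random.random() for _ in range(3000)]

t0 = time.perf_counter()
sorted(data)
t1 = time.perf_counter()
insertion_sort(data)
t2 = time.perf_counter()

print(f"O(n log n) sorted():     {t1 - t0:.4f} s")
print(f"O(n^2) insertion sort:   {t2 - t1:.4f} s")  # typically far slower at this size
```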
Estimated operations and rough runtimes at a baseline input of 1,000 items:
| Complexity | Estimated Operations | Rough Runtime (at ~20 ns per operation) |
|---|---|---|
| O(1) | 1 | 0.000 ms ≈ 0.000 s |
| O(log n) | 10 | 0.000 ms ≈ 0.000 s |
| O(n) | 1,000 | 0.020 ms ≈ 0.000 s |
| O(n log n) | 9,966 | 0.199 ms ≈ 0.000 s |
| O(n²) | 1,000,000 | 20.00 ms ≈ 0.020 s |
| O(2ⁿ) | 478,590,233,456 | 9,571,804.67 ms ≈ 2.66 hr |
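The runtime column appears to assume roughly 20 ns per operation (the O(n) row maps 1,000 operations to 0.020 ms). A sketch that reproduces the non-exponential rows for n = 1,000 under that assumption:

```python
import math

N = 1_000
NS_PER_OP = 20  # inferred from the O(n) row: 1,000 ops -> 0.020 ms

rows = {
    "O(1)":       1,
    "O(log n)":   round(math.log2(N)),
    "O(n)":       N,
    "O(n log n)": round(N * math.log2(N)),
    "O(n^2)":     N * N,
}

for name, ops in rows.items():
    ms = ops * NS_PER_OP / 1e6  # nanoseconds -> milliseconds
    print(f"{name:<11} {ops:>9,} ops  {ms:.3f} ms")
```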