Let’s say we have an algorithm runtime function $T(n)$. $T(n)$ can be the runtime function for the worst, average, or best case scenario; note that these three scenarios may require different runtime functions. We can describe $T(n)$ with an approximation function $g(n)$ like so (in order of upper to lower bounds; formal definitions are sketched after the list):
- $T(n) = o(g(n))$: little-o notation
    - $g(n)$ is an asymptotic upper bound that is strictly not asymptotically tight
    - algorithm runtime grows strictly slower than $g(n)$
- $T(n) = O(g(n))$: big-O notation
    - $g(n)$ is an asymptotic upper bound that is not necessarily tight
    - algorithm runtime grows no faster than $g(n)$
- $T(n) = \Theta(g(n))$: big-Theta notation
    - $g(n)$ is an asymptotically tight bound
    - algorithm runtime grows as fast as $g(n)$
- $T(n) = \Omega(g(n))$: big-Omega notation
    - $g(n)$ is an asymptotic lower bound that is not necessarily tight
    - algorithm runtime grows no slower than $g(n)$
- $T(n) = \omega(g(n))$: little-omega notation
    - $g(n)$ is an asymptotic lower bound that is strictly not asymptotically tight
    - algorithm runtime grows strictly faster than $g(n)$
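For reference, here is a sketch of the standard constant-based definitions behind the list above (following the usual textbook convention, e.g. CLRS, and assuming $T(n)$ and $g(n)$ are asymptotically non-negative):

```latex
% Constant-based definitions of the five notations; each inequality
% is required to hold for all n >= n_0.
\begin{align*}
T(n) = o(g(n))      &\iff \forall c > 0 \;\exists n_0 : T(n) < c\,g(n) \\
T(n) = O(g(n))      &\iff \exists c > 0 \;\exists n_0 : T(n) \le c\,g(n) \\
T(n) = \Theta(g(n)) &\iff \exists c_1, c_2 > 0 \;\exists n_0 : c_1\,g(n) \le T(n) \le c_2\,g(n) \\
T(n) = \Omega(g(n)) &\iff \exists c > 0 \;\exists n_0 : T(n) \ge c\,g(n) \\
T(n) = \omega(g(n)) &\iff \forall c > 0 \;\exists n_0 : T(n) > c\,g(n)
\end{align*}
```

Note how $o$ and $\omega$ demand the inequality for *every* constant $c$, which is exactly what “strictly not asymptotically tight” means above.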
Why don't constant coefficients matter?
The reason why coefficients don’t matter is that the way these notations are defined throws the constants out of the window anyway: $T(n) = O(g(n))$ holds as long as *some* constant $c > 0$ satisfies $T(n) \le c \cdot g(n)$ for large enough $n$, so e.g. $2n = O(n)$ simply by picking $c = 2$. However, the coefficients (and other lower-degree polynomial terms) should still be taken into account when choosing an algorithm in real life, as $n$ may not always be large enough for the asymptotic behavior to dominate.
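To illustrate the real-life caveat, here is a minimal sketch where the constants (100 and 1) are invented purely for the example: the asymptotically faster algorithm only wins past a crossover point.

```python
# Hypothetical cost models: the coefficients are made up for
# illustration, not measured from real algorithms.
def cost_linear(n: int) -> int:
    return 100 * n      # O(n), but with a large constant factor

def cost_quadratic(n: int) -> int:
    return n * n        # O(n^2), with a small constant factor

# Find the smallest n where the asymptotically faster algorithm
# actually wins; below that, the "worse" algorithm is cheaper.
crossover = next(n for n in range(1, 10_000)
                 if cost_linear(n) < cost_quadratic(n))
print(crossover)  # 101: for n <= 100, the quadratic algorithm is cheaper
```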
Why do we use equal signs?
These equalities are not really equalities. It is easier to think of them as set memberships. For instance, $T(n) = \Theta(g(n))$ means that $T(n)$ belongs to the set of functions that grow as fast as $g(n)$.
The reason why the equal sign is used is its convenience in cases like $2n^2 + 3n + 1 = 2n^2 + \Theta(n)$, where asymptotic notation appears inside a larger expression; there the equal sign means there exists a function that satisfies the equality (source).
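Spelled out under the set-membership reading (a sketch of that convention, using the example above):

```latex
% The Theta-term on the right stands for some anonymous function
% from that set:
2n^2 + 3n + 1 = 2n^2 + \Theta(n)
\quad\text{means}\quad
\exists f(n) \in \Theta(n) : 2n^2 + 3n + 1 = 2n^2 + f(n).
% Here f(n) = 3n + 1 works, since 3n + 1 = \Theta(n).
```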
Does big-O mean worst case?
Big-O doesn’t necessarily mean worst case, and big-Theta doesn’t always mean average case. You can use big-Theta to precisely describe the worst case runtime of an algorithm, and you can also use big-O to estimate what the average runtime would be for an algorithm. See here for more explanation.
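As a small sketch of this point (using linear search as an assumed example, not one taken from the linked explanation): the worst, average, and best cases are three different runtime functions, and each one can be given its own big-O or big-Theta bound.

```python
from typing import Sequence

def linear_search(items: Sequence[int], target: int) -> int:
    """Return the index of target in items, or -1 if absent.

    Best case:    target is at index 0         -> Theta(1)
    Average case: target uniformly distributed -> Theta(n)
    Worst case:   target is last or absent     -> Theta(n)

    Saying "linear search is O(n)" is a statement about an upper
    bound; it is true of all three cases, not just the worst one.
    """
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1
```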
Be careful when comparing big-O notations
A function like $2n^2$ can count as both $O(n^2)$ and $O(n^3)$, but it only has one big-Theta notation: $\Theta(n^2)$. The big-Theta notation most closely reflects the growth of the function because it requires a tight bound. In other words, while $O(\cdot)$ and $\Omega(\cdot)$ *can* represent the growth of a function, they don’t have to; e.g. the runtime might be $\Theta(n)$, but you can also say it’s $O(n^3)$: technically not wrong, but quite off. When comparing the big-O notations of two functions, take care in determining how tight each bound is.
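One rough way to check tightness numerically (a sketch, using $T(n) = 2n^2$ as an assumed example) is to watch the ratio $T(n)/g(n)$: for a tight bound it settles near a positive constant, while for a loose upper bound it drifts toward zero.

```python
def T(n: int) -> float:
    return 2 * n * n  # example runtime function, assumed for illustration

# Candidate bounds g(n): one tight (n^2), one valid but loose (n^3).
bounds = {"n^2": lambda n: n * n, "n^3": lambda n: n ** 3}

for name, g in bounds.items():
    ratios = [T(n) / g(n) for n in (10, 100, 1000, 10_000)]
    print(name, [round(r, 4) for r in ratios])

# n^2 [2.0, 2.0, 2.0, 2.0]       -> ratio converges to a constant: tight
# n^3 [0.2, 0.02, 0.002, 0.0002] -> ratio vanishes: a valid O-bound, not tight
```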