The basic idea here is to give programmers a way to compare the performance of algorithms at a very high level, in a language-, platform-, and architecture-independent way. This becomes particularly important when dealing with large inputs.
Big O notation can describe the worst-case, average-case, or best-case running time of an algorithm.
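For instance, a simple linear search (a minimal sketch; the function name and sample data below are just illustrative) has different running times depending on where, or whether, the target appears:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, value in enumerate(items):
        if value == target:
            return i  # best case: target is the first element -> O(1)
    return -1         # worst case: target is absent -> O(n)

data = [7, 3, 9, 1, 4]
linear_search(data, 7)  # best case: a single comparison
linear_search(data, 8)  # worst case: compares against every element
```

The average case also grows linearly with the input size, so the worst and average cases are both O(n) here, while the best case is O(1).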
When describing an algorithm using Big O notation, you drop the lower-order terms. This is because as the input becomes very large, the lower-order terms contribute far less to the total running time. For example:
Original | Becomes
---|---
O(n² + n) | O(n²)
O(n + log n) | O(n)
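A quick sketch (the loop and the printed percentages are illustrative, not part of the original text) shows why the n term in n² + n quickly stops mattering as n grows:

```python
# As n grows, the lower-order term accounts for a vanishing fraction
# of the total work, which is why O(n^2 + n) is reported as O(n^2).
for n in (10, 1_000, 1_000_000):
    full = n**2 + n        # n^2 plus the lower-order term
    dominant = n**2        # the dominant term alone
    share = (full - dominant) / full
    print(f"n={n:>9}: lower-order term is {share:.4%} of the total")
```

For n = 10 the lower-order term is about 9% of the total, but by n = 1,000,000 it is roughly 0.0001%, which is why it can safely be dropped from the notation.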