The basic idea here is to give programmers a way to compare the performance of algorithms at a very high level, independent of language, platform, and architecture. This becomes particularly important when dealing with large inputs.
Big O notation can describe the worst-case, average-case, or best-case running time of an algorithm.
When describing an algorithm using Big O notation, you drop the lower-order terms and constant factors. This is because as the input grows very large, those parts contribute less and less to the total running time (the snippet after the table shows how quickly a lower-order term fades). For example:
| Original | Becomes |
| --- | --- |
| O(2n) | O(n) |
| O(log_11(n) + 3) | O(log n) |
| O(n^2 + n) | O(n^2) |
| O(2^n + n^3) | O(2^n) |
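To get a feel for why the smaller terms stop mattering, here is a quick sketch (the function name `lowerOrderShare` is just made up for illustration) that computes how much of the total work in n^2 + n comes from the lower-order n term:

function lowerOrderShare(n) {
  // Hypothetical helper for illustration: fraction of the total n^2 + n
  // operations contributed by the lower-order n term.
  return n / (n * n + n);
}

console.log(lowerOrderShare(10));      // ~0.09  (about 9% of the work)
console.log(lowerOrderShare(1000));    // ~0.001 (about 0.1%)
console.log(lowerOrderShare(1000000)); // ~0.000001 (vanishingly small)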
The idea behind O(n), linear time, is that for n inputs the algorithm does roughly one operation for each of the n inputs. A linear search through an array is a classic example:
function in_array(val, arr) {
  // Worst case we look at every one of the n elements exactly once: O(n).
  for (var i = 0; i < arr.length; i++) {
    if (arr[i] === val) return true;
  }
  return false;
}
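The amount of work grows in direct proportion to the input: searching a 10-element array takes at most 10 comparisons, a 10,000-element array at most 10,000.

in_array(3, [1, 2, 3]);        // true, found after 3 comparisons
in_array(99, [1, 2, 3, 4, 5]); // false, but only after checking all 5 elements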
The idea behind O(n log n) is that you must touch each of the n elements, and then touch most of them again roughly log n more times. An example of this would be the Quicksort algorithm: every level of recursion compares each remaining element against a pivot, and on average the list is split in half about log n times as we sort around smaller and smaller partitions.
function quicksort(list) {
  if (list.length <= 1) return list;
  // Pull the middle element out of the list to use as the pivot.
  var pivot = list.splice(Math.floor(list.length / 2), 1)[0];
  var less = [], greater = [];
  // Partition: every remaining element is compared against the pivot once.
  while (list.length) {
    var item = list.shift();
    if (item <= pivot) {
      less.push(item);
    } else {
      greater.push(item);
    }
  }
  return quicksort(less).concat(pivot).concat(quicksort(greater));
}
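A quick check (note that this version mutates its input via splice and shift, so pass a copy if you need to keep the original ordering):

quicksort([5, 3, 8, 1, 9, 2]); // [1, 2, 3, 5, 8, 9]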
The idea behind O(log n) is that for some input n, we only need to look at a small fraction of the n elements to get the result; we can skip over the vast majority of them. An example of this might be searching a balanced Binary Search Tree, where every comparison lets us discard half of the remaining nodes.
BinarySearchTree.prototype.search = function(value, start) {
  var node = start || this.root;
  while (node) {
    if (value < node.value) {
      node = node.left;   // discard everything to the right of this node
    } else if (value > node.value) {
      node = node.right;  // discard everything to the left of this node
    } else {
      break;              // found it
    }
  }
  return node;            // the matching node, or null if the value isn't in the tree
};
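The example above assumes a BinarySearchTree whose nodes have value, left, and right properties. That structure isn't shown in the original snippet, so here is one minimal sketch (the insert method is just one reasonable way to build the tree, not part of the original) that makes the search method runnable:

function BinarySearchTree() {
  this.root = null;
}

BinarySearchTree.prototype.insert = function(value) {
  var newNode = { value: value, left: null, right: null };
  if (!this.root) {
    this.root = newNode;
    return;
  }
  var node = this.root;
  while (true) {
    if (value < node.value) {
      if (!node.left) { node.left = newNode; return; }
      node = node.left;
    } else {
      if (!node.right) { node.right = newNode; return; }
      node = node.right;
    }
  }
};

var tree = new BinarySearchTree();
[8, 3, 10, 1, 6].forEach(function(v) { tree.insert(v); });
tree.search(6); // the node holding 6, found after visiting only 3 of the 5 nodes
tree.search(7); // null -- 7 isn't in the tree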
The idea behind O(1) is that the time is the same for any input. In other words, the running time of the algorithm does not depend on the size of the input.
function is_true(bool) {
  // One operation no matter what gets passed in: O(1).
  return !!bool;
}
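Another everyday O(1) operation (not from the original example, just a common illustration) is reading an element by index; it costs the same no matter how big the array is:

function get_first(arr) {
  // Hypothetical helper: array indexing takes the same time
  // whether arr has 1 element or 1,000,000.
  return arr[0];
}

var big = [];
for (var i = 0; i < 1000000; i++) big.push(i);

get_first([7]); // 7
get_first(big); // 0, and it takes the same time as the tiny array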
The idea behind O(n^2) is that for n inputs, the algorithm does roughly n operations for each of the n inputs, about n^2 in total. A couple of examples might be checking whether two arrays have a number in common, or summing every pair of numbers in an array.
function has_number_in_common(arr1, arr2) {
  // For every element of arr1, scan all of arr2: roughly n * n comparisons.
  for (var i = 0; i < arr1.length; i++) {
    for (var j = 0; j < arr2.length; j++) {
      if (arr1[i] === arr2[j]) return true;
    }
  }
  return false;
}
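For instance:

has_number_in_common([1, 2, 3], [4, 5, 3]); // true
has_number_in_common([1, 2, 3], [4, 5, 6]); // false, but only after 3 * 3 = 9 comparisons

Summing every pair of numbers in an array has the same shape: a loop nested inside a loop.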
function sum_all_pairs(arr) {
  var result = [];
  // For each of the n elements, loop over all n elements again: O(n^2).
  for (var i = 0; i < arr.length; i++) {
    for (var j = 0; j < arr.length; j++) {
      if (i !== j) result.push(arr[i] + arr[j]); // skip pairing an element with itself
    }
  }
  return result;
}
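Even for small arrays the work adds up quickly, because every element gets paired with every other element:

sum_all_pairs([1, 2, 3]);    // [3, 4, 3, 5, 4, 5] -- 3 elements, 6 sums
sum_all_pairs([1, 2, 3, 4]); // 4 elements, 12 sums; 100 elements would produce 9,900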
This is not where you want to be. O(n!), factorial complexity, means that the number of operations grows with the factorial of the input size: for an input of size n you may have to do on the order of n! operations. Algorithms with this time complexity may not be able to finish before the universe ends, even for modest inputs. An example of such an algorithm is solving the Travelling Salesman Problem by brute-force search, which tries every one of the n! possible routes. A simpler example is Bogosort.
function bogosort(arr) {
  // Keep shuffling until the array happens to come out sorted.
  // isSorted and shuffle are helper functions (sketched below).
  while (!isSorted(arr)) arr = shuffle(arr);
  return arr;
}
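The snippet above assumes isSorted and shuffle helpers that aren't shown in the original. One minimal way to sketch them (using a Fisher-Yates shuffle) is:

function isSorted(arr) {
  // O(n) scan: every element must be <= the element after it.
  for (var i = 0; i < arr.length - 1; i++) {
    if (arr[i] > arr[i + 1]) return false;
  }
  return true;
}

function shuffle(arr) {
  // Fisher-Yates shuffle: O(n) per shuffle, but bogosort may need
  // on the order of n! shuffles before one comes out sorted.
  for (var i = arr.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = arr[i];
    arr[i] = arr[j];
    arr[j] = tmp;
  }
  return arr;
}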