Algorithmic Complexity

It is important to try as much as you can to get the order of complexity of your algorithms higher up the following hierarchy rather than lower (i.e. closer to constant time).

  • O(1): constant time
  • O(log n): logarithmic
  • O(n): linear
  • O(n log n): log-linear
  • O(n^C): polynomial
  • O(C^n): exponential

N.B.: C is a constant

Algorithmic complexity

Big O measures an upper bound on the complexity as the input size increases; in other words, Big O describes the worst case.

function factorialIter(n) {
    let answer = 1; // one step
    while(n > 1) { // one step
    answer *= n; // two steps, i.e. * and =
        n -= 1; // two steps
    } // total of 5 steps performed n times
    return answer; // one step
}

As seen above, the total number of steps it takes to run this program is 1 + 5n + 1.

However, as the value of n increases, the constants, viz. the additive 1s and the multiplicative 5 in this case, become negligible. So we can call the above algorithm O(n).

Therefore, for worst-case complexity:

  • ignore additive constants

  • ignore the multiplicative constants

  • focus on dominant terms

  • combine complexity classes.

Law of Addition for big O: the time complexity of sequential statements is the sum of the complexities of each individual statement.

O(f(n)) + O(g(n)) === O(f(n) + g(n))

For example:

O(n) + O(n*n) = O(n + n^2) = O(n^2) //because of the dominant term
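
For instance, a minimal sketch of the addition law (the function and variable names here are illustrative, not from the source): a linear pass followed by a nested pass over the same array.

function scanThenPairs(arr) {
    let sum = 0;
    for (const x of arr) { // O(n)
        sum += x;
    }
    let pairs = 0;
    for (let i = 0; i < arr.length; i++) { // O(n^2) nested pass
        for (let j = i + 1; j < arr.length; j++) {
            pairs += 1;
        }
    }
    return [sum, pairs]; // O(n) + O(n^2) = O(n^2)
}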

Law of Multiplication for big O: the time complexity of nested statements/loops is the product of the complexities of each nested statement.

O(f(n)) * O(g(n)) === O(f(n) * g(n))

For example:

O(n) * O(n) = O(n * n) = O(n^2) 
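
As a minimal sketch of the multiplication law (illustrative names), one O(n) loop nested inside another:

function countAllPairs(arr) {
    let count = 0;
    for (let i = 0; i < arr.length; i++) { // O(n)
        for (let j = 0; j < arr.length; j++) { // O(n) per outer iteration
            count += 1;
        }
    }
    return count; // O(n) * O(n) = O(n^2)
}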

O(1) runs in constant time regardless of the size of the input. It can also contain loops, but those loops run independently of the input size.
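
A minimal sketch of constant time (the function name is illustrative): the loop bound is fixed at 10 and does not depend on the input size.

function sumFirstTen(arr) {
    let sum = 0;
    for (let i = 0; i < 10 && i < arr.length; i++) { // at most 10 iterations
        sum += arr[i];
    }
    return sum; // O(1)
}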

O(log n) runs in logarithmic time. In simple terms, the complexity grows as the log of the size of one of its inputs. Oftentimes, an algorithm that divides its input at each iteration is logarithmic.
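
A sketch of logarithmic growth (illustrative name), where the input is halved on each iteration:

function countHalvings(n) {
    let steps = 0;
    while (n > 1) {
        n = Math.floor(n / 2); // input divided at each iteration
        steps += 1;
    }
    return steps; // roughly log2(n) iterations, so O(log n)
}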

O(n) runs in linear time: it goes through each element once.
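
For example (a sketch with illustrative names), summing a list visits each element exactly once:

function sumAll(arr) {
    let total = 0;
    for (const value of arr) { // one pass over the input
        total += value;
    }
    return total; // O(n)
}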

O(n log n) runs in log-linear time; merge sort, covered below, is the classic example.

Searching and Sorting algorithms

Searching algorithms

  • Linear search: The list does not have to be sorted, but in the worst case we would need to go through every item in the list. Hence the complexity here is O(n)

  • Bisection search: This algorithm requires the list to be sorted. It is a classic example of a divide and conquer algorithm (see the sketch after this list). The complexity is O(log(n))
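
A minimal bisection (binary) search sketch; the function name is illustrative and the list is assumed to be sorted in ascending order.

function bisectionSearch(list, target) {
    let low = 0;
    let high = list.length - 1;
    while (low <= high) {
        const mid = Math.floor((low + high) / 2);
        if (list[mid] === target) return mid; // found
        if (list[mid] < target) low = mid + 1; // discard the left half
        else high = mid - 1; // discard the right half
    }
    return -1; // not found
}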

Sometimes you might need to sort first and then make use of bisection search. More often than not, this decision is made on a case-by-case basis, i.e. whether it is worth sorting first (O(n log n) once, then O(log n) per search) or just making use of a linear search (O(n) per search); sorting tends to pay off when the same list will be searched many times.

Sorting algorithms

  • Bubble sort: The bubble sort algorithm passes through the list several times, comparing each pair of successive elements to see which is larger and swapping them if they are out of order; larger values thus "bubble" toward the end of the list. The complexity of this algorithm in the worst case is O(n^2)

  • Selection sort: The selection sort works by repeatedly scanning the unsorted part of the list for the smallest remaining item and swapping it into place at the end of the sorted prefix on the left. The complexity of this algorithm is O(n^2)

  • Merge sort: This is an O(n log(n)) algorithm (see the sketch after this list). It makes use of a divide and conquer approach to sorting a list, by splitting the list into two halves recursively, then sorting and merging recursively until the original list is sorted. Each merge level touches all n elements and the splitting produces O(log(n)) levels, leading to a log-linear algorithm.
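
A minimal merge sort sketch (the function name is illustrative):

function mergeSort(list) {
    if (list.length <= 1) return list; // base case: already sorted

    // divide: split the list into two halves recursively
    const mid = Math.floor(list.length / 2);
    const left = mergeSort(list.slice(0, mid));
    const right = mergeSort(list.slice(mid));

    // merge: each level touches all n elements, and there are O(log n) levels
    const merged = [];
    let i = 0;
    let j = 0;
    while (i < left.length && j < right.length) {
        if (left[i] <= right[j]) merged.push(left[i++]);
        else merged.push(right[j++]);
    }
    return merged.concat(left.slice(i), right.slice(j));
}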
