the-big-o

Random access to a given element in an array or hash map is O(1), or constant time.

The notion of constant time means that the operation takes the same fixed amount of work no matter how large the data structure grows to be.

const array = [1,2,3,4,5]
const theLastElement = array[array.length - 1] // O(1)

const hashMap = {hello: "world"}
const theValue = hashMap["hello"] // O(1)

const giantArray = [1,2,3, /* ... */ 100000000000000000000]
const someElement = giantArray[200] // O(1)

List iterations are always O(n)

If you are in a situation where you need to evaluate every item in a list, it means the time complexity will be at least O(n). Sometimes we can get around that, provided we can make certain assumptions about our data. For example, if we need to sum the contiguous sequence of integers from 1 to n, we can use a trick discovered by Carl Friedrich Gauss, n(n+1) / 2, to compute the sum in constant time (see the sketch after the loop below).

const list = [1,3,4,6,2]
let sum = 0
for (const n of list) { // for...of gives us values; for...in would give string indices
  sum += n  // O(n) because we're operating on every item in the list
}
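
And here's a minimal sketch of the Gauss trick mentioned above (the function name sumOneToN is mine, not from the original): the answer comes from a formula rather than a loop, so the running time doesn't depend on n at all.

// Gauss's formula: 1 + 2 + ... + n = n(n+1) / 2
// No loop, so this is O(1) no matter how big n is
function sumOneToN(n) {
  return (n * (n + 1)) / 2
}

sumOneToN(100) // 5050, same answer a loop over 1..100 would give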

Nested loops are always at least O(n^2)

Watch out for loops within loops. They're a real pain in the derriere. Fortunately, they can usually be improved with the right data structure (see the Set-based sketch after the example below), or by using a divide and conquer algorithm inside the top-level O(n) operation.

const duplicates = [1,2,3,3,3,4,5,6]
const isDuplicate = (a, b) => a === b // simple equality check so the example runs

for (let i = 0; i < duplicates.length; i++) {    // O(n)
  const outer = duplicates[i]
  for (let j = 0; j < duplicates.length; j++) {  // O(n) - nested, so we can write O(n^2)
    if (i === j) continue // don't compare an element with itself
    const inner = duplicates[j]
    if (isDuplicate(outer, inner)) {
      // Do stuff...
    }
  }
}
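
Here's one way the right data structure helps, sketched with a Set (the helper findDuplicates is hypothetical, not part of the original): each membership check is O(1) on average, so a single pass over the list finds every duplicate in O(n).

// Single pass with a Set: O(n) instead of O(n^2)
function findDuplicates(list) {
  const seen = new Set()   // values we've encountered so far
  const dupes = new Set()  // values seen more than once
  for (const item of list) {
    if (seen.has(item)) dupes.add(item)
    else seen.add(item)
  }
  return [...dupes]
}

findDuplicates([1,2,3,3,3,4,5,6]) // [3]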

Divide and conquer is how we get O(log n)

The very act of dividing a list into smaller sublists is logarithmic. Let me elaborate.

Brute forcing your way through the world works up to a point, but it is inefficient and not exemplary.

In math, logarithms are the inverse of exponents, like the one in our O(n^2) operation above, which makes them easier to work with.

If I gave you the equation n^2 = 512, how would you solve it?

We can flip the question around using logarithms: log_n(512) = 2.

Here we're asking, "What number, multiplied by itself, gives 512?" The answer is roughly 22.6, since 22.6^2 ≈ 512.

Fundamentally, what a logarithm measures is how many times you can split a number by some base. When analyzing algorithms the base is usually 2: log_2(512) = 9, because you can halve 512 nine times before reaching 1. That idea of repeated splitting is exactly what divide and conquer does to its input, which is why halving at every step gives O(log n).

Binary search is the classic example of an O(log n) algorithm. (Merge sort and quicksort also divide and conquer, but since they touch every element too, they come out at O(n log n); more on that below.)
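
To make the halving concrete, here's a minimal binary search sketch (it assumes a sorted array): every comparison throws away half of what's left, so 512 elements take at most log_2(512) = 9 steps.

// Binary search: halve the search space on every comparison -- O(log n)
function binarySearch(sorted, target) {
  let lo = 0
  let hi = sorted.length - 1
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2)
    if (sorted[mid] === target) return mid
    if (sorted[mid] < target) lo = mid + 1  // discard the lower half
    else hi = mid - 1                       // discard the upper half
  }
  return -1 // not found
}

binarySearch([1, 2, 4, 6, 9, 13], 9) // 4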

What about hashmaps? Are they really O(1)?

Hashmaps are a type of probabilistic data structure. In the average case they operate in constant time. When many keys collide into the same bucket, lookups degrade toward O(n) in the worst case.
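
A toy sketch of why that happens (makeMap is a made-up helper, not a real API): a hash function spreads keys across buckets, and a lookup only scans one bucket. If every key hashes to the same bucket, that scan covers all n entries.

// Toy bucketed hash map. A good hashFn spreads keys out (average O(1));
// a worst-case hashFn that always returns 0 makes get() a linear scan: O(n)
function makeMap(hashFn, numBuckets = 8) {
  const buckets = Array.from({ length: numBuckets }, () => [])
  return {
    set(key, value) {
      buckets[hashFn(key) % numBuckets].push([key, value])
    },
    get(key) {
      for (const [k, v] of buckets[hashFn(key) % numBuckets]) {
        if (k === key) return v // scan within the one bucket
      }
    },
  }
}

const worstCase = makeMap(() => 0) // every key collides into bucket 0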

What about binary search trees?

Retrieving an item, inserting an item, and deleting an item are all O(log n) operations on a balanced BST. A BST is a divide and conquer data structure: each comparison lets us discard half of the remaining tree. (A badly unbalanced tree degenerates into a linked list, and those operations fall back to O(n).)
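
A sketch of that halving in action (bstSearch and the node shape are illustrative): each comparison discards an entire subtree, so a balanced tree of n nodes takes O(log n) comparisons.

// Searching a BST: every comparison throws away half the remaining tree
function bstSearch(node, value) {
  if (node === null) return null
  if (value === node.value) return node
  return value < node.value
    ? bstSearch(node.left, value)   // discard the right subtree
    : bstSearch(node.right, value)  // discard the left subtree
}

const tree = {
  value: 4,
  left:  { value: 2, left: null, right: null },
  right: { value: 6, left: null, right: null },
}

bstSearch(tree, 6) // { value: 6, left: null, right: null }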

Iterations that use divide and conquer are always O(n log n)

For example, let's say we're looping through a list and, for each item, running an O(log n) algorithm such as binary search to look up a value. Doing an O(log n) operation n times gives O(n log n). Merge sort follows the same pattern: it splits the list log n times and does an O(n) merge at each level.
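
A sketch of that pattern, reusing the binarySearch function from above (the inputs here are just illustrative): n searches at O(log n) each is O(n log n) total.

// n iterations * O(log n) per search = O(n log n)
const sorted = [1, 2, 4, 6, 9, 13]
const queries = [9, 5, 1, 13]

const positions = queries.map(q => binarySearch(sorted, q)) // [4, -1, 0, 5]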

NEVER ADD ANOTHER NESTED LOOP FOR EVERY INPUT THAT YOU HAVE. THAT IS O(n^n), EVEN WORSE THAN O(n!), AND IT'S BAD.
