Simply BIG-O

August 31, 2020 | 10 minute read

If you apply to 100 companies, maybe only 1% will actually get back to you. That can easily be the case when you're starting out as a software developer. With so few responses, you really need to be good at algorithms to take advantage of those precious opportunities.

What is good code?

It usually means Readable and Scalable. Big-O is how we measure the scalable part:

  • Scalable means speed, and it also means memory.
  • What's the memory usage of your code? You can use Big-O for this too.
  • Memory vs. speed is usually a tradeoff: you get better speed (performance), but you might use more memory.

Note: How fast code runs also depends on what CPU you have, the server, etc. So it's hard to measure good code in terms of how 'fast' it performs.

What is Big-O?

Big-O answers this question: as the input grows, how much does your function slow down? As the number of elements increases, how many more operations does your function have to do?

  • The slower your runtime grows, the better.
  • Big-O counts the number of steps and calculations, not the time it takes to run.

Big-O is important for big applications, and scalable code means thinking long term. Since we cannot pin down the exact runtime (in seconds) of an algorithm, we measure how quickly the runtime grows in relation to its input as the input gets immensely large.

Good developers make these tradeoff decisions deliberately. Big-O gives us a reliable way to talk about Time Complexity. At the end of this article we'll discuss Space Complexity as well.

As we drill down on Big-O notations, please refer to the Big-O Cheatsheet and to the graph below:

[Graph: Big-O complexity chart]

Keep in mind that we can't express speed in seconds. We usually call the size of the input n and describe the runtime in relation to it: for example, our runtime grows linearly with the size of the input, O(n), or with the square of the size of the input, O(n^2).

With that in mind, let’s go through each of these Big-O examples.

O(n) - Linear - PREFERRED

This is linear: as the number of elements increases, so does the number of operations. Example:

function findingNemo(message, array) {
  let t0 = performance.now();
  for (let i = 0; i < array.length; i++) {
    if (array[i] === 'nemo') {
      console.log(message, 'Found NEMO!');
      break;
    }
  }
  // Timing is just for demo purposes.
  let t1 = performance.now();
  console.log(t1 - t0, 'milliseconds');
}

const nemo = ['nemo'];
const large = Array.from({length: 40000}, () =>
  Math.floor(Math.random() * 40000)
);

findingNemo('With Nemo: ', nemo); 
findingNemo('Larger Array: ', large); 

If you run this you might notice that the time varies a lot. Again, there are CPU, hardware, and other considerations that make it hard to evaluate performance in terms of time. That is why we use Big-O, and for this algorithm we say it is O(n).

O(1) - Constant Time - EXCELLENT

With O(1) it will always be the same number of operations, no matter how big the input grows.

  function constantTimeFn(box) {
      console.log(box[1]); // one lookup, regardless of box.length
  }

Note that the 1 is arbitrary. Two or three constant operations would technically be O(2) or O(3), but we narrow any constant down to O(1) to represent constant time. Lookups in Objects/Hash Tables are O(1).
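
For example, here's a minimal sketch of a constant-time lookup (the inventory object is hypothetical, just for illustration):

// Object/Hash Table lookup is O(1): one step no matter how many keys exist.
const inventory = { apples: 10, bananas: 5, cherries: 42 };

function getStock(item) {
  return inventory[item]; // a single constant-time lookup
}

getStock('bananas'); // 5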

O(n^2) - Nested Loops - BAD

If you see nested loops over the same input, it is O(n^2). The number of operations grows with the square of the input, so it is not linear.

You might get interview questions asking you to turn an O(n^2) solution into O(n) or better (see the sketch after this example). Here's a classic O(n^2) case:

// Log all pairs of an array

const boxes = ['a', 'b', 'c', 'd', 'e'];

function logAllPairsArray(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      console.log(array[i], array[j]);
    }
  }
}

// Nested loops, so O(n^2)
logAllPairsArray(boxes);
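
As for turning O(n^2) into O(n), a common pattern is trading memory for speed with a hash-based lookup. A minimal sketch (the hasDuplicate helpers are my own illustration):

// O(n^2): compare every element to every other element
function hasDuplicateSlow(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = i + 1; j < array.length; j++) {
      if (array[i] === array[j]) return true;
    }
  }
  return false;
}

// O(n): a Set remembers what we have seen, at the cost of O(n) extra space
function hasDuplicateFast(array) {
  const seen = new Set();
  for (const item of array) {
    if (seen.has(item)) return true;
    seen.add(item);
  }
  return false;
}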

O(log n) - EXCELLENT - Used in Tree Data Structures

There is a simple way to calculate the number of nodes in a perfect binary tree:

  • You calculate 2 to the power of the level in question.
    • E.g. Level 0: 2^0 = 1 node. Level 2: 2^2 = 4 nodes. Etc.
    • Based on this formula, 2^(number of levels) - 1 gives the total number of nodes. E.g. a tree with 3 levels: 2^3 - 1 = 7 nodes.
    • We can simplify this as log(nodes) = treeHeight. Knowing this, you can limit your steps by going down only one branch.
    • So O(log n) is divide and conquer: the choice of the next element is one of several, and we only choose one, not all.
    • It's like looking through a phone book: you jump straight to the names you want, so you only need to explore a subset.
    • Binary search trees allow us to search efficiently this way (see the sketch below). Google uses this kind of method too.
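
To make this concrete, here is a minimal binary search sketch over a sorted array: every comparison discards half of the remaining elements, which is why the number of steps grows like log n.

function binarySearch(sortedArray, target) {
  let low = 0;
  let high = sortedArray.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArray[mid] === target) return mid;
    if (sortedArray[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1;                          // discard the right half
  }
  return -1; // not found
}

binarySearch([1, 3, 5, 7, 9, 11, 13, 15], 11); // 5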

If this sounds too complicated, don't worry, it does for me too 😅.

O(2^n) - Exponential Time - PRETTY BAD - Worse Than O(n^2)

A classic example is computing Fibonacci numbers with naive recursion: each call spawns two more calls, so the work roughly doubles every time num grows by one:

function fibonacci(num) {
  if (num <= 1) {
    return num;
  }
  return fibonacci(num - 2) + fibonacci(num - 1);
}
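
To get a feel for that growth (rough numbers, every machine will vary):

fibonacci(10); // 55, returns instantly
// fibonacci(45) makes billions of recursive calls and can take many seconds.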

O(n log n) - OK - Better than O(n^2) but worse than O(n)

  • This performs better than the quadratic options above.
  • Divide and conquer: merge sort and quick sort use this approach along with recursion.
  • These don't have nested loops over the whole input; see the merge sort sketch below.
  • It is O(n log n) because everything is still compared at least once, but not everything is compared to everything else.
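
For reference, here is a minimal merge sort sketch (a simplified illustration, not a production implementation). The array is halved until single elements remain, which takes log n levels, and each level does O(n) work while merging:

function mergeSort(array) {
  if (array.length <= 1) return array; // base case: a single element is sorted
  const mid = Math.floor(array.length / 2);
  const left = mergeSort(array.slice(0, mid));  // divide
  const right = mergeSort(array.slice(mid));
  // conquer: merge the two sorted halves, touching every element once per level
  const merged = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}

mergeSort([5, 3, 8, 1, 9, 2]); // [1, 2, 3, 5, 8, 9]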

O(n!) - Factorial Time - MOST Expensive One

It means you add a nested loop for every element of the input. You will probably never see it in practice, but the sketch below shows what it looks like.
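
For the curious, a minimal sketch of an O(n!) algorithm: generating every permutation of an array (the permutations helper is my own illustration):

function permutations(array) {
  if (array.length <= 1) return [array];
  const result = [];
  for (let i = 0; i < array.length; i++) {
    // Take one element out, permute the rest, then put it back in front.
    const rest = array.slice(0, i).concat(array.slice(i + 1));
    for (const perm of permutations(rest)) {
      result.push([array[i], ...perm]);
    }
  }
  return result;
}

permutations(['a', 'b', 'c']).length; // 6, which is 3!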

BIG-O RULES

1) Worst Case: Big-O looks at the worst case only.

  • Identify the worst case. Your function might get lucky and finish in O(1) (e.g. 'nemo' is the first element), but if the worst case scans everything, the Big-O is O(n).

2) Remove Constants.

  • Identify things that don't change with the input size and remove them.
  • E.g. you have two separate loops, so O(2n), but 2 is a constant, so it's actually O(n). See the sketch below.
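
A quick sketch of that rule (the logTwice helper is hypothetical):

// Two separate passes over the same input: O(2n), which simplifies to O(n).
function logTwice(array) {
  array.forEach(item => console.log('first pass:', item));
  array.forEach(item => console.log('second pass:', item));
}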

3) Different terms for inputs

function compressBoxesTwice(box1, box2) {
  box1.forEach(b => {
    console.log(b);
  });
  box2.forEach(b => {
    console.log(b);
  });
}

This is actually O(a + b), NOT O(n). That's because the two inputs are different terms.

  • If the loops are sequential (one after another), you add: O(a + b).
  • If the loops are nested, you multiply: O(a * b). See the sketch below.
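
A minimal sketch of the nested case (the pairBoxes helper is hypothetical):

// Nested loops over two different inputs: O(a * b), not O(n^2).
function pairBoxes(box1, box2) {
  box1.forEach(a => {
    box2.forEach(b => {
      console.log(a, b);
    });
  });
}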

4) Drop Non-Dominants

function printNumsThenAllPairSums(numbers) {
  console.log('these are the numbers:');
  numbers.forEach(function(number) {
    console.log(number);
  });

  console.log('and these are their sums:');
  numbers.forEach(function(firstNumber) {
    numbers.forEach(function(secondNumber) {
      console.log(firstNumber + secondNumber);
    });
  });
}

printNumsThenAllPairSums([1, 2, 3, 4, 5])

  • This could be written as O(n + n^2). But you need to drop the non-dominants: n^2 matters more because it grows the fastest, so drop the n and call it O(n^2).
  • Worry about the MOST important (dominant) term and drop the rest.

Space Complexity (Memory)

When a program executes, it uses the heap (to store variables) and the stack (to keep track of function calls). For space complexity you look at the total additional memory being used as the input grows.

What adds space complexity?

  • Adding variables
  • Adding data structures
  • Function calls and allocations

Space complexity refers to the additional space your algorithm needs, not the space taken up by the input itself. Example:

// Space complexity O(n): the array grows with the input
function sayHola(n) {
  let holaArray = [];
  for (let i = 0; i < n; i++) {
    holaArray[i] = 'hola👋🏼';
  }
  return holaArray;
}

sayHola(100)
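
By contrast, here is a minimal sketch that does the same amount of work but keeps only a constant amount of extra memory (assuming we just print the greetings instead of keeping them):

// Space complexity O(1): the loop counter is the only extra variable,
// no matter how large n gets.
function sayHolaConstantSpace(n) {
  for (let i = 0; i < n; i++) {
    console.log('hola👋🏼');
  }
}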

Remember the tradeoff: optimizing code for speed often means it will use more memory, increasing space complexity.

Final Notes

For Technical Interviews: Sometimes optimizing might not be the right choice at the beginning. Get the problem solved first. In interviews, most of the time you'll want O(n) and O(1)!

You also need to strike the right balance between runtime, space, and readability. Big-O is crucial for bigger apps, so keep working with it until you understand it. It will make you a better engineer.

How to Start: Focus on O(1), O(n), and O(n^2), then expand into the others.
