Beginner’s Guide to Big O Notation

Jeff Cuartas
5 min read · Jun 28, 2021

When I first heard the term Big O Notation, I’ll admit that I was a bit intimidated by what seemed at first glance a nebulous and complex topic.

However, after completing the Big O section in Colt Steele’s Algorithms and Data Structures course on Udemy, I can confidently say that this topic is less scary and cryptic than I originally thought!

In essence, Big O Notation is a way to analyze the performance of an algorithm, in terms of both time and space. While algorithms themselves can be complex, unique, and vary widely from one to another, Big O notation is concerned with the “bigger picture”: capturing the general time/space complexity of a given algorithm.

Before I delve deeper into the different types of Big O notation, you might be asking yourself: why does this matter? The significance of Big O notation boils down to the fact that the efficiency of your algorithm matters for performance. As we’ll see in a minute, some algorithms are more efficient than others, and that makes a huge difference in runtime and space. Big O gives us a way to formalize how we talk about the runtime/space of an algorithm and lets us examine the broader trends of an algorithm’s runtime.

Types of Big O Complexity

[Infographic: Big O complexity chart. Source: https://www.bigocheatsheet.com/]

At a high level, there are four common time complexities in Big O notation that you will encounter:

  • O(1) — Constant time complexity
  • O(n) — Linear time complexity
  • O(n²) — Quadratic time complexity
  • O(log n) — Logarithmic time complexity

O(1) — Constant time

Constant time algorithms will always take the same amount of time to be executed regardless of the size of the input.
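
Consider a function along these lines, a minimal sketch in which the name logAtMost15 and the cap of 15 are illustrative:

```ts
// Logs the numbers from 1 up to n, but never more than 15 of them.
function logAtMost15(n: number): void {
  for (let i = 1; i <= Math.min(15, n); i++) {
    console.log(i);
  }
}
```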

For example, in the function above, the number of times i is logged will always be either 15 or a number smaller than 15. Since we’re talking about Big O notation, we only really care about the larger trend. In this case, the time complexity of the function is O(1), because the number of times i is logged is bounded by a constant (it stays in the range 1–15), no matter how large the input grows.

O(n) — Linear Time Complexity

An algorithm has a linear time complexity if the time it takes to execute increases proportionally as the input increases.

A good example of linear time complexity is a simple loop:
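
Here is a minimal sketch of such a loop (the function name logUpTo is illustrative):

```ts
// Logs the numbers from 1 up to n; the loop body runs exactly n times.
function logUpTo(n: number): void {
  for (let i = 1; i <= n; i++) {
    console.log(i);
  }
}
```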

In the example above, as n (the input) increases, the runtime also increases, because we keep iterating in our loop until we meet the condition of reaching n. In other words, if we had the number 5 as our input, we would go through our loop 5 times; if n is 10, we would loop 10 times; and so on.

Therefore, as our input increases, so too does the runtime required to execute the code.

O(n²) — Quadratic Time Complexity

If the time to execute an algorithm is proportional to the square of the input size, then it has quadratic time complexity. The most common pattern you’ll encounter with this time complexity is a nested for loop.
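
Here is a minimal sketch of a nested for loop of this kind (the function name printAllPairs is illustrative):

```ts
// Logs every pair of elements in the array.
// The outer loop runs n times and, for each pass, the inner loop
// runs n times again: n * n = n² iterations in total.
function printAllPairs(arr: number[]): void {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      console.log(arr[i], arr[j]);
    }
  }
}
```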

For example, in the function above, we have a nested for loop: for every element the outer loop visits, the inner loop iterates over the entire array again. An array of length n therefore produces n × n = n² iterations.

O(Log n) — Logarithmic time complexity

Of all the common types of Big O notation, this time complexity is perhaps the most difficult to grasp, because it requires an understanding of logarithms.

What is a log, you might ask? The base-2 logarithm of a number is roughly the number of times you can divide that number by 2 before you get a value that’s less than or equal to one.

For example, log₂(8) = 3, because 2 x 2 x 2 = 8. In other words, 2³ is equal to 8. Another way of understanding logs is that they are the inverse of exponentiation.
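
To make that definition concrete, here is a small sketch that simply counts the halvings (the function name countHalvings is illustrative):

```ts
// Counts how many times n can be halved before it drops to 1 or below.
// This roughly computes log2(n).
function countHalvings(n: number): number {
  let count = 0;
  while (n > 1) {
    n = n / 2;
    count++;
  }
  return count;
}

console.log(countHalvings(8)); // 3, matching log2(8) = 3
```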

Let’s get back to time complexity. If the time it takes to run an algorithm is proportional to the logarithm of the input size n, then the algorithm has logarithmic time complexity.

The most common example of logarithmic time complexity is a binary search.

[Binary search example. Source: https://jarednielsen.com/big-o-logarithmic-time-complexity/]
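
Here is a minimal sketch of an iterative binary search over a sorted array (the function name and signature are assumptions):

```ts
// Returns the index of num in a sorted array, or -1 if it isn't there.
function binarySearch(arr: number[], num: number): number {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    const middle = Math.floor((left + right) / 2);
    if (arr[middle] === num) return middle; // found it
    if (arr[middle] < num) {
      left = middle + 1;  // num can only be in the right half
    } else {
      right = middle - 1; // num can only be in the left half
    }
  }
  return -1;
}
```

For example, binarySearch([1, 3, 5, 7, 9, 11], 9) first checks the middle element arr[2] (which is 5), discards the left half, and finds 9 on the very next pass.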

In the example above, we effectively halve the array on each pass. If num is less than or greater than the middle element, our algorithm will only look in the relevant half and halve that section again.

In other words, each time our algorithm looks for num, the remaining search space shrinks by half. A sorted array of a million elements needs at most about 20 comparisons, since log₂(1,000,000) ≈ 20. So as you can see, logarithmic time complexity is very efficient!

Conclusion

The most important point to keep in mind when learning Big O notation is that we’re looking at the bigger picture in order to assess the performance of an algorithm. In the original infographic detailing Big O, you may have noticed that some time complexities (such as logarithmic and linear) are more efficient than others (such as quadratic).

When looking at an algorithm for the first time, it might not always be intuitive or easy as a beginner to work out its time complexity. My best advice is to be intentional when reading an algorithm and see if you can spot the time complexity yourself. By deliberately looking for these patterns, you’ll soon become much better at spotting them!

Learning a new concept takes time, so don’t fret if you don’t understand everything 100%. Logarithmic time complexity and binary search, for example, might take some time to fully understand (and appreciate!).

I also recommend referring to other resources to expand your knowledge of this important topic!
