Big O Notation, Time Complexity, and Performance of Algorithms

Andrew G
4 min read · Nov 2, 2021

In order to prepare for coding interviews, I began to study data structures and algorithms, working through multiple sources to get ready for what I would see during technical interviews. Having worked with data returned from APIs, the problems I faced in my practice were not foreign to me. What was new was thinking about the storage and speed of my solutions. Almost every website or resource I used to practice mentioned the performance of solutions and code. I had started coding with a focus on functionality rather than on optimizing storage or speed. I wasn't building a finely tuned race car; I was building a reliable commuter car that got from point A to point B. I wanted to create code that got the job done. That said, ignoring performance and storage is not something you can do as a software engineer. So, as part of my interview preparation, I set out to learn about big O notation.

One of the biggest reasons efficiency matters is the sheer amount of data software engineers need to work with. If you need to find a page in a book, it's no problem to flip through it page by page when the book is 10 pages long. If you're reading a famous Russian novel, you will need a different approach to find the page in a reasonable amount of time. Being able to measure and optimize for efficiency matters when working with lots of data.

Big O notation is the way performance is measured when it comes to algorithms and solutions. It measures the run time of an algorithm relative to the size of its input, as the input grows very large.

Big O notation starts at a fixed amount, O(1), which can be represented on a time vs. size graph as a horizontal line, representing constant time. An example would be an algorithm that returns the first element of an array.
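As a quick sketch (in Python, since the article doesn't tie itself to a language, and with an illustrative function name of my choosing), constant time looks like this:

```python
def first_element(arr):
    # Indexing into an array takes the same amount of time
    # whether the array has 10 elements or 10 million: O(1).
    return arr[0]

print(first_element([3, 1, 2]))  # 3
```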

O(N) represents linear time. This appears as a straight diagonal line on a time vs. size graph and is akin to my earlier example of searching through a book for a particular page, one page at a time.
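The page-by-page search above can be sketched as a linear scan (Python again; the function and parameter names are just illustrative):

```python
def find_page(pages, target):
    # Flip through every page until we find the target.
    # In the worst case we look at all N pages: O(N).
    for index, page in enumerate(pages):
        if page == target:
            return index
    return -1  # target page not in the book

print(find_page([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 7))  # 6
```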

O(N²) is quadratic: the run time grows with the square of the input size, not merely twice as fast as O(N). The classic example is a function with two for loops iterating over the same array, where the second for loop is nested inside the first, so every element is processed once for every element. More generally, for O(N^k), k represents the number of nested for loops in an algorithm; the power that N is raised to represents the number of times our algorithm processes our data.
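A minimal sketch of nested iteration (the function name is my own illustration): for an array of length N, the inner loop runs N times for each of the outer loop's N passes, giving N² total iterations.

```python
def all_pairs(arr):
    # Nested loops over the same array: N * N iterations, O(N^2).
    pairs = []
    for a in arr:
        for b in arr:
            pairs.append((a, b))
    return pairs

print(len(all_pairs([1, 2, 3])))  # 9, i.e. 3^2
```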

A natural question comes up when thinking about quadratic time: what if the for loops are not nested? Two loops that run one after the other perform 2N operations, not N². Because we assume we are working with a very large data set, a constant multiple like 2 is insignificant compared to the growth rate itself, so multiples are dropped and O(2N) simplifies to O(N). The same reasoning applies to terms of N with lesser powers: O(N² + N) simplifies to O(N²). These approximations hold as the input grows very large.

O(log N) is where I started to get confused about big O notation, but it's not so hard. In order to understand O(log N), it is important to understand its real-world application. The most common example is binary search. Going back to the previous example of flipping through the pages of a book, an O(log N) approach is far more efficient than flipping through pages one by one. Instead of the O(N) approach, we split the book's pages into equal halves by finding the middle page, referred to as a pivot. If our target page number is greater than or less than our pivot, we can discard all the pages before or after the pivot, respectively. Continuing this pattern, we narrow down the range by calculating a new pivot from a new start and end point, until the search narrows to our target page. This is a much faster way to find our target page, or our target value in a sorted array, than an O(N) approach that flips through each page. Instead of N steps, we get log N steps.
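The halving procedure described above can be sketched as a standard binary search (Python, on a sorted array; names are illustrative):

```python
def binary_search(pages, target):
    # Halve the search range on every comparison: O(log N) steps.
    start, end = 0, len(pages) - 1
    while start <= end:
        pivot = (start + end) // 2  # middle of the current range
        if pages[pivot] == target:
            return pivot
        elif pages[pivot] < target:
            start = pivot + 1  # discard everything before the pivot
        else:
            end = pivot - 1    # discard everything after the pivot
    return -1  # target not present

print(binary_search(list(range(1, 101)), 42))  # 41
```

A 100-page book takes at most 7 comparisons this way, versus up to 100 page flips with the linear approach.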

Finally, there is O(N log N) time. This is best understood through sorting algorithms such as quicksort. Quicksort picks a pivot value and runs a partition operation: each remaining value is compared to the pivot, with smaller values moved to its left and larger values to its right. The values before and after the pivot then form two sub-arrays, and the same process repeats on each. Once every sub-array has been reduced, the pieces, along with their pivots, join back into one array sorted in non-descending order. Each level of partitioning touches roughly N elements, and on average the splitting produces about log N levels, which gives O(N log N).
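A compact sketch of the idea, using the simple (non-in-place) variant of quicksort for readability, with the first element as the pivot:

```python
def quicksort(arr):
    # Partition around a pivot, then recursively sort each side.
    # Average case: ~log N levels of N work each, so O(N log N).
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    smaller = [x for x in arr[1:] if x <= pivot]  # values left of pivot
    larger = [x for x in arr[1:] if x > pivot]    # values right of pivot
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Worth noting: O(N log N) is quicksort's average case; with consistently bad pivot choices it degrades to O(N²), which is why production implementations choose pivots more carefully.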

Understanding the basics of these Big O runtimes will provide a base for further understanding of the solutions and algorithms you write as a programmer. While not always a critical factor, optimizing and creating efficient algorithms is something all software engineers should strive for.
