In the field of computer science, many different algorithms are used to solve various problems, and they can take different amounts of time to process data. Big-O (spoken as "big O") is a mathematical notation that describes how the time or space requirements of an algorithm grow with the size of its input. It is conventionally used to describe the worst-case complexity of an algorithm.

One suggestion might be: why not run the different algorithms on a specific data set and pick the one with the least execution time? But there is a major problem with this method. Various algorithms may take roughly equal execution time on a small data set, but as we increase the amount of data, their execution times can change drastically. So an algorithm that performs reasonably well on a small data set may perform much better or much worse as the size of the data set changes.

**Classification of Algorithms**

Some important classifications of algorithms are as follows.

**O(n)**

In O(n), the running time of the algorithm grows linearly with n, the size of the data. For example, suppose we have an unsorted data set of numbers 2, 4, 3, 7 and 6, and we want to find the element **k** = 7. In the worst-case scenario, the algorithm will take up to **n** iterations, traversing the whole array and comparing **k** with each of the **n** elements. In other words, the search is linear.
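The linear search described above can be sketched in Python (the function name is mine, not from the original):

```python
def linear_search(data, k):
    """Scan the list from left to right; the worst case touches all n elements."""
    for i, value in enumerate(data):
        if value == k:
            return i  # index of the first match
    return -1  # k is not in the list

# Searching for k = 7 in the unsorted data set from the text:
print(linear_search([2, 4, 3, 7, 6], 7))  # → 3 (the fourth and second-to-last element)
```

Because 7 sits near the end of the list, the loop has to compare almost every element before finding it, which is exactly the worst-case behaviour O(n) describes.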

**O(1)**

In O(1) (spoken as "big O of 1"), the complexity of the algorithm remains constant. To understand this, let's take a look at hash tables. Hash tables store data in key–value pairs: each value is stored and retrieved by its key. For example, suppose we want to store the following data,

| Value | Key |
| ----- | --- |
| Ali   | 5   |
| Sam   | 2   |
| Rob   | 3   |
| Bob   | 6   |

An O(1) algorithm always takes a constant amount of time to retrieve data from the data set because it does not have to iterate/traverse through the whole data set to find a specific value. The user just enters the specific key, and the value is retrieved based on that key. Constant time means that even as the input to the algorithm gets very large, its run time stays the same. Hence, the retrieval time for the data will always be constant.
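A Python `dict` is implemented as a hash table, so the table above can be sketched like this (the variable name `names` is mine):

```python
# A Python dict is a hash table: each value is stored under its key.
names = {5: "Ali", 2: "Sam", 3: "Rob", 6: "Bob"}

# Retrieval is O(1) on average: the key is hashed to locate the value
# directly, with no traversal of the rest of the data set.
print(names[3])  # → Rob
```

Whether the dict holds four entries or four million, looking up a key takes roughly the same amount of time.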

**O(log n)**

In O(log n), the algorithm takes logarithmic time. It is important to mention that in computer science the log base is usually 2. In logarithmic behaviour, we ask ourselves: "To which power must I raise my base of 2 to get n, where n is the size of the input?"

Basically, with this time complexity we keep halving the size of n until we get the result. The binary search algorithm uses exactly this method on sorted data sets.
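Binary search can be sketched in Python as follows, reusing the example data set after sorting it (the function name is mine):

```python
def binary_search(sorted_data, k):
    """Repeatedly halve the search range; the worst case takes about log2(n) steps."""
    low, high = 0, len(sorted_data) - 1
    while low <= high:
        mid = (low + high) // 2  # middle of the current range
        if sorted_data[mid] == k:
            return mid           # found k
        elif sorted_data[mid] < k:
            low = mid + 1        # k must lie in the upper half
        else:
            high = mid - 1       # k must lie in the lower half
    return -1                    # k is not in the list

# The data set from earlier must be sorted first: [2, 3, 4, 6, 7]
print(binary_search([2, 3, 4, 6, 7], 7))  # → 4
```

Each iteration discards half of the remaining elements, so even a list of a million sorted items needs only about twenty comparisons in the worst case.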

**General Rules in Big-O Notation**

- Ignore constants

For example, if an algorithm takes 5n steps, we still write O(n), because it is the growth behaviour of the algorithm that matters to us. Thus, constants are left out.

- Certain terms dominate others

O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) < O(n!)

This means that when a complexity contains several terms, the dominant (highest-order) term determines the overall growth, and lower-order terms are ignored: for example, O(n² + n) simplifies to O(n²).