Oh. Big oh. Or rather Big-O. I thought "O" stood for "operations". And "big" because of magnitude. Man I was way off:

> In the family of Bachmann-Landau notations, the one most relevant to algorithms was the one with the Omicron symbol. That symbol can be best described textually or verbally as a 'big O'. Operations correspond to time. It takes time to do something. Algorithms are a series of steps to accomplish something.
>
> — Quora

## Complexity theory

Complexity theory is the study of the resources, chiefly time and space, that algorithms consume. An algorithm should have these three attributes:

- An algorithm should be correct.
- An algorithm should be easy to understand and implement.
- An algorithm should be efficient.

There are two sides to this last point, however. One is storage **space**: an algorithm is not useful if we run out of memory while it is running. The other is that an algorithm should be efficient with regards to **time**. Often there is a trade-off between storage space and running time.

Sometimes speed is not a benefit at all. For example, checking a password at login doesn't need to be lightning fast: if someone sets up a way to guess passwords for someone's account, we don't want to make it fast for them.

This point about **time** is where complexity theory comes in, forming the basis for Big O notation.

> Big O notation is the language we use for articulating how long an algorithm takes to run. It's how we compare the efficiency of different approaches to a problem. With big O notation we express the runtime in terms of—brace yourself—how quickly it grows relative to the input, as the input gets arbitrarily large.
>
> — Interviewcake.com

For example, an array is indexed, so we can access any element in a single operation no matter how large the array is: the runtime is 1.

*Definition*

**Asymptote**: a line that continually approaches a given curve but does not meet it at any finite distance.

*Asymptotic behaviour, then, describes how the runtime grows relative to the input.*

Data structures and algorithms are a set of tools for a software engineer, who needs to understand the task at hand and use the best tool for the job. Understanding all of these tools and their asymptotic behaviour is an essential part of software engineering.

## Notations

Big O notation looks at the worst case scenario of an algorithm. This is the safest way to look at it, considering its asymptotic behaviour.

### O(n)

If an algorithm performs n steps linearly, then the function is O(n). Looping through an array, or visiting all of the nodes of a tree structure, is of the order n.
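As a sketch, a function that sums an array has to visit each of the n elements exactly once (the function name here is just illustrative):

```javascript
// Sums every element of the array: one step per element, so O(n).
function sumArray(array) {
  var total = 0;
  for (var i = 0; i < array.length; i++) {
    total += array[i];
  }
  return total;
}
```

Doubling the length of the array doubles the number of steps, which is exactly what "linear" means.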

### O(2n)

If an algorithm performs n steps linearly, then another n steps linearly, then the function is O(2n). One example is finding both the smallest and the largest number in an array: one pass for each.

```
function findMax (array) {
  var max = -Infinity; // start lower than any element, so negatives work too
  for (var i = 0; i < array.length; i++) {
    if (array[i] > max) {
      max = array[i];
    }
  }
  return max;
}
```

In order to find the largest value we have to process the array linearly, which is one pass of n steps; finding the smallest value takes another pass, giving 2n steps in total. The array length could be any size, so each pass is of order n.

### O(1)

Some algorithms take the same time in every case. Accessing a specific element of an array, for example, always takes a single operation: O(1).

```
function getElement (array, index) {
  return array[index];
}
```

Array access is a single operation regardless of the array's size, therefore the order is of 1.

### O(n^{2})

n^{2}, or n * n, means that for each n you have to go through n again. For example, a for loop within a for loop.

```
function findDuplicates(array) {
  var duplicates = false;
  for (var i = 0; i < array.length; i++) {
    for (var j = 0; j < array.length; j++) {
      if (i !== j && array[i] === array[j]) {
        duplicates = true;
      }
    }
  }
  return duplicates;
}
```

Here we are looping through an array, and then looping through it again for each element in the array. Therefore the order is of n^{2}.

### O(log(n))

I've written a separate article on logarithms and Big O notation which explains this in more detail.

In essence, a logarithm counts the number of steps taken to reach a target value. Depending on the base (2 in binary, 10 in decimal, and so on), it is the number of times you can divide by that base before you get there. Algorithms that halve the remaining work at every step are therefore O(log(n)).
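The classic example is binary search: each comparison discards half of a sorted array, so it takes roughly log2(n) steps. A minimal sketch (the function name is illustrative):

```javascript
// Binary search on a sorted array: each iteration halves the search
// range, so the loop runs at most about log2(n) times - O(log(n)).
function binarySearch(sortedArray, target) {
  var low = 0;
  var high = sortedArray.length - 1;
  while (low <= high) {
    var mid = Math.floor((low + high) / 2);
    if (sortedArray[mid] === target) {
      return mid; // found: return the index
    } else if (sortedArray[mid] < target) {
      low = mid + 1; // target is in the upper half
    } else {
      high = mid - 1; // target is in the lower half
    }
  }
  return -1; // not found
}
```

Searching a million-element array this way takes about 20 comparisons rather than a million.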

### O(n!) - Factorial

This little chestnut below is a factorial:

```
var factorial = function(num) {
  if (num < 0) {
    return;
  }
  if (num === 0 || num === 1) {
    return 1;
  } else {
    return num * factorial(num - 1);
  }
}
```

> The factorial function is useful in computing the number of combinations or permutations that can be constructed from a set of objects.
>
> — Dr. Math

Factorials are useful when calculating the number of possibilities/permutations of a given number of items. Note that the factorial function above actually runs in O(n) (it makes n recursive calls); O(n!) describes algorithms whose work itself grows factorially, such as generating every permutation of n items.
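To see O(n!) in an actual algorithm, consider generating every permutation of an array. There are n! permutations of n items, so any algorithm that lists them all must do at least n! units of work. A rough sketch (the function name is illustrative):

```javascript
// Generates every permutation of the array. There are n! permutations
// of n items, so both the output and the work grow as O(n!).
function permutations(array) {
  if (array.length <= 1) {
    return [array]; // base case: one permutation of zero or one item
  }
  var result = [];
  for (var i = 0; i < array.length; i++) {
    // Take element i out, permute the rest, then put it back in front.
    var rest = array.slice(0, i).concat(array.slice(i + 1));
    var restPerms = permutations(rest);
    for (var j = 0; j < restPerms.length; j++) {
      result.push([array[i]].concat(restPerms[j]));
    }
  }
  return result;
}
```

Even for 10 items this produces 3,628,800 permutations, which is why factorial-time algorithms become impractical so quickly.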

### Constants

If an algorithm has two parts, where one performs n steps or n^{2} steps, and the other performs fewer or a fixed number of steps, then the smaller part can be ignored and the notation is the greater of the two. Constant factors are dropped for the same reason, which is why O(2n) simplifies to O(n). It is basically keeping only the dominant term.
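A small sketch of why the smaller part drops away (the function is contrived purely for illustration):

```javascript
// Counts all (i, j) pairs. The empty-array check is one fixed step,
// O(1); the nested loops are O(n^2). The total is n^2 + 1 steps, and
// we keep only the dominant term, so the whole function is O(n^2).
function countPairs(array) {
  if (array.length === 0) { // fixed cost: O(1)
    return 0;
  }
  var count = 0;
  for (var i = 0; i < array.length; i++) {
    for (var j = 0; j < array.length; j++) { // dominant cost: O(n^2)
      count++;
    }
  }
  return count;
}
```

At n = 1,000 the single check contributes one step against a million from the loops, so it makes no difference to the growth rate.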

## Resources

Free course on data structures and complexity theory

Plain english explanation of Big O