Time complexity is a measure of how the runtime of an algorithm grows as a function of its input size, usually in the worst-case scenario. In other words: how bad does an algorithm get if our input size becomes huge? Best-case and average-case scenarios are also sometimes used, though less commonly.
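As a concrete illustration (a sketch I'm adding, not from the course material): doubling the input size roughly doubles the runtime of the first function below, but roughly quadruples the second.

```cpp
#include <vector>

// Runtime grows linearly with n: one pass over the input, so O(n).
long long sum(const std::vector<int>& v) {
    long long total = 0;
    for (int x : v) total += x;
    return total;
}

// Runtime grows quadratically with n: roughly n^2 / 2 iterations
// of the inner loop in total, so O(n^2).
long long countDuplicatePairs(const std::vector<int>& v) {
    long long pairs = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) ++pairs;
    return pairs;
}
```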
Big O
Big O notation is the standard way of denoting time complexity in computer science. More precisely, big O notation standardizes a way of describing the asymptotic growth of a function. Grasping what that means is crucial for understanding time complexity (and CPSC 221).
To describe the growth of a function asymptotically means that the only thing we consider when describing the growth of that function is its highest-order term, such as $n^2$ or $n^3$.
Order (math)
In this context, order (kinda) just means the exponent, so the highest-order term is the term with the highest exponent. So in $3n^2 + 5n + 2$, that would be $3n^2$.
Constant factors and lower-order terms are ignored. In other words, a function that takes in a parameter $n$ and grows at a rate of $2n$ is exactly the same asymptotically as one that grows at $n$. Therefore, we can say that both of those functions are $O(n)$.
You can expand that to having any 'added' lower-order terms, like $n + 10$ being the same asymptotically as $n$.
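To see why this is reasonable in code (another added sketch): both functions below are $O(n)$, even though the second does roughly twice the work.

```cpp
#include <vector>

// One pass over the input: n iterations, so O(n).
int countEvens(const std::vector<int>& v) {
    int count = 0;
    for (int x : v)
        if (x % 2 == 0) ++count;
    return count;
}

// Two passes over the input: 2n iterations total, but constant
// factors are ignored, so this is still O(n), not "O(2n)".
int countEvensAndOdds(const std::vector<int>& v) {
    int evens = 0, odds = 0;
    for (int x : v)
        if (x % 2 == 0) ++evens;
    for (int x : v)
        if (x % 2 != 0) ++odds;
    return evens + odds;
}
```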
We do this because in computer science, a lot of algorithms are run many, many times, sometimes on millions of inputs. What we want to consider in cases like these is not exactly how much time a single run of the algorithm takes, but rather how bad it gets if we have to repeat that algorithm millions of times.
There are three mathematical symbols relevant to this, and a literal capital O is one of them. The three are:
- $O$ (big O)
- $\Omega$ (Omega)
- $\Theta$ (Theta)
Big O denotes the 'order' of a function. For a function to be 'big O' of another just means that it grows at a slower or equivalent rate. Keep in mind that in big O notation, the meaning of growth is precise: it's asymptotic growth, meaning the behaviour as the input tends to very, very large values.
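For reference, here is the standard formal definition (an addition to these notes; the text above describes it informally): $f$ is $O(g)$ if some constant multiple of $g$ eventually dominates $f$.

$$
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 0 \text{ such that } f(n) \le c \cdot g(n) \text{ for all } n \ge n_0
$$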
For example, if we have a function $f$ that grows linearly, say $f(n) = n$, and a function $g$ that grows quadratically, say $g(n) = n^2$, which of the following are true: $f = O(g)$, $g = O(f)$, or both?
Answer
Only $f = O(g)$. $f$ grows linearly, so if $n = 10$, $f(n) = 10$. For $n = 1000$, $f(n) = 1000$. But $g$ grows quadratically: when $n = 1000$, $g(n) = 1{,}000{,}000$. Therefore, since $f$ grows slower than $g$, we have $f = O(g)$.
If we have one function $f(n) = n$ and one function $g(n) = n + 10$, which of the following are true: $f = O(g)$, $g = O(f)$, or both?
Answer
Both. Keep in mind that big O notation discards lower-order terms that do not affect the dominant term of the function, in this case $n$. A function that grows at $n$ and one that grows at $n + 10$ are of the same asymptotic order in big O notation, in this case linear growth.
$\Omega$ (Omega) denotes the opposite of $O$. It represents the lower bound, as opposed to the upper bound of $O$. In other words, an algorithm is $\Omega$ of another if it grows at a faster or equivalent rate than the other. I like to think of it as 'eclipsing' the other algorithm.
For example, say we have an algorithm whose time complexity can be represented by the function $f$ where $f(n) = n^2 + 2n$, and another algorithm represented by $g$ where $g(n) = n$. Since the highest-order term of $f$ is $n^2$, and the highest-order term of $g$ is $n$, we can conclude that $f$ dominates $g$ in growth asymptotically. In other words, as we previously saw with $O$, $g = O(f)$, but the opposite is also true with $\Omega$, and we have: $f = \Omega(g)$.
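Symmetrically to $O$, the formal definition (again added here for reference): $f$ is $\Omega(g)$ if $f$ eventually dominates some constant multiple of $g$.

$$
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 0 \text{ such that } f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0
$$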
Can you think of any functions that are $O(n)$?
Answer
There are a few forms of functions that grow no faster than linear, and are thus $O(n)$:
- Linear functions, like $f(n) = 3n$ (remember $O$ is inclusive)
- Logarithmic functions, like $f(n) = \log n$ (see the binary search sketch after this list)
- Constant functions, like $f(n) = 10$
- Inverse linear functions, like $f(n) = 1/n$
- The inverse Ackermann function, $\alpha(n)$ (rarer, only really seen in disjoint sets)
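As a concrete example of logarithmic growth (a sketch I'm adding as an illustration): binary search halves its search range every iteration, so the loop runs about $\log_2 n$ times.

```cpp
#include <vector>

// Binary search over a sorted vector: the search range halves each
// iteration, so roughly log2(n) iterations run in total -> O(log n).
int binarySearch(const std::vector<int>& sorted, int target) {
    int lo = 0, hi = static_cast<int>(sorted.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  // avoids overflow of (lo + hi)
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1;
        else                      hi = mid - 1;
    }
    return -1;  // not found
}
```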
$\Theta$ (Theta) denotes a combination of $O$ and $\Omega$. It represents both the lower and upper bound of a function. In this case, if a function is $\Theta$ of another, it means they have the same asymptotic time complexity.
There are many examples, but one could be the algorithms represented by $f(n) = n^2 + n$ and $g(n) = n^2$. In this case, since they both have the same highest-order term, $n^2$, they are $\Theta$ of each other. So both $f = \Theta(g)$ and $g = \Theta(f)$ are true.
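Formally (added for reference, following the two definitions above), $\Theta$ is exactly the combination of the other two bounds:

$$
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
$$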
Amortized Time Complexity
Amortized time complexity is a measure of the time complexity averaged over a sequence of operations, as opposed to a single individual worst case. This is usually used in cases where a specific operation is occasionally very costly but cheap the rest of the time. An example of such use is insertion into dynamically sized arrays, like C++'s `vector`.
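A minimal sketch of why insertion is amortized $O(1)$, assuming the common grow-by-doubling strategy (real `std::vector` growth factors vary by implementation; this is not the standard library's actual code):

```cpp
#include <cstddef>

// A stripped-down dynamic array: push_back is O(n) only when the
// buffer is full and must be reallocated. Because the capacity
// doubles each time, a sequence of n pushes does O(n) total copying,
// which averages out to amortized O(1) per push.
class IntVector {
    int* data = nullptr;
    std::size_t size = 0;
    std::size_t capacity = 0;

public:
    void push_back(int value) {
        if (size == capacity) {
            std::size_t newCapacity = (capacity == 0) ? 1 : capacity * 2;
            int* newData = new int[newCapacity];
            for (std::size_t i = 0; i < size; ++i)  // the rare, costly O(n) step
                newData[i] = data[i];
            delete[] data;
            data = newData;
            capacity = newCapacity;
        }
        data[size++] = value;  // the common O(1) step
    }

    ~IntVector() { delete[] data; }
};
```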