freeCodeCamp
Big Omega tells us the lower bound of the runtime of a function, and Big O tells us the upper bound.
Oftentimes they are different, and then we can’t put a guarantee on the runtime - it will land somewhere between the two bounds, depending on the input. But what happens when they’re the same? Then we can give a theta (Θ) bound - our function will run in that time, no matter what input we give it.
In general, we always want to give a theta bound if possible because it is the most accurate and tightest bound. If we can’t give a theta bound, the next best thing is the tightest O bound possible.
Take, for example, a function that searches an array for the value 0:
def containsZero(arr): # assume a normal array of length n with no edge cases
    for x in arr:
        if x == 0:
            return True
    return False
What bounds can we give it? In the best case, the first element is 0 and the function returns after a single comparison - a lower bound of Ω(1). In the worst case there is no 0 at all and we check every element - an upper bound of O(n). So what about theta? We can’t give it one! Depending on the array we give it, the runtime will be somewhere in between constant and linear.
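To make the best and worst cases concrete, here is a small sketch (the step-counting helper is my own addition, not from the original article) that reports how many elements were examined before the function returned:

```python
def contains_zero_counted(arr):
    """Like containsZero, but also returns how many elements were examined."""
    steps = 0
    for x in arr:
        steps += 1
        if x == 0:
            return True, steps
    return False, steps

# Best case: the 0 is first, so we stop after one step - the Ω(1) side.
print(contains_zero_counted([0, 5, 9]))  # (True, 1)

# Worst case: no 0 at all, so we examine all n elements - the O(n) side.
print(contains_zero_counted([4, 5, 9]))  # (False, 3)
```

The same function, on the same-sized input, can do one step or n steps - which is exactly why no single Θ bound fits.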
Let’s change our code a bit.
def printNums(arr): # assume a normal array of length n with no edge cases
    for x in arr:
        print(x)
Can you think of a best case and a worst case? I can’t! No matter what array we give it, we have to iterate through every value in the array. So the function will take AT LEAST n time (Ω(n)), but we also know it won’t take any longer than n time (O(n)). What does this mean? Our function will take exactly n time: Θ(n).
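As a quick sanity check (this counted variant is my own sketch, not from the article), the number of steps is always exactly n, regardless of the array’s contents:

```python
def print_nums_counted(arr):
    # Every call touches each element exactly once, so steps == len(arr).
    # That is the Θ(n) behavior: best case and worst case are identical.
    steps = 0
    for x in arr:
        print(x)
        steps += 1
    return steps

print_nums_counted([3, 1, 4])  # prints 3 lines, returns 3
print_nums_counted([0, 0, 0])  # different contents, still 3 steps
```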
If the bounds are confusing, think about it like this. We have 2 numbers, x and y, and we are given that x ≤ y and also that x ≥ y. The only way both can be true is if x = y. In the same way, if a runtime is both O(f(n)) and Ω(f(n)), it must be Θ(f(n)).
If you’re familiar with linked lists, test yourself and think about the runtimes for each of these functions!
Things get even more interesting when you consider a doubly linked list!
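The article’s original linked-list functions aren’t reproduced here, but as an illustration (the class and method names below are my own, not from the article), here is a minimal singly linked list with one operation for each kind of bound:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        # Θ(1): we only touch the head, no matter how long the list is.
        node = Node(value)
        node.next = self.head
        self.head = node

    def contains(self, value):
        # Ω(1) / O(n): we may stop at the first node or walk the whole
        # list, so no theta bound exists - just like containsZero above.
        cur = self.head
        while cur:
            if cur.value == value:
                return True
            cur = cur.next
        return False

    def length(self):
        # Θ(n): we always visit every node, just like printNums above.
        count = 0
        cur = self.head
        while cur:
            count += 1
            cur = cur.next
        return count

nums = LinkedList()
for v in [3, 2, 1]:
    nums.push_front(v)  # list is now 1 -> 2 -> 3
```

In a doubly linked list, keeping a tail pointer also makes appending at the back Θ(1) - which is part of what makes that variant interesting.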
How do we measure the performance value of algorithms?
Consider that time is one of our most valuable resources. In computing, we can measure performance by the amount of time a process takes to complete. If two algorithms process the same data, comparing their running times lets us decide which implementation best solves the problem.
We do this by defining the mathematical limits of an algorithm: big-O, big-omega, and big-theta, the asymptotic notations of an algorithm. On a graph, big-O is the longest an algorithm could take for any given data set, or the “upper bound”. Big-omega is the opposite of big-O, the “lower bound” - that’s where the algorithm reaches its top speed for any data set. Big-theta applies when the upper and lower bounds meet; it pins down the algorithm’s exact growth rate.
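As a rough, machine-dependent illustration of measuring performance by wall-clock time (the helper names here are my own, not from the article), we can time the same linear search on a small and a large input and watch the cost grow with n:

```python
import time

def linear_scan(arr, target):
    # An O(n) search: in the worst case we touch every element.
    for x in arr:
        if x == target:
            return True
    return False

def timed(fn, *args):
    # Time a single call with a monotonic clock; exact numbers will
    # vary from machine to machine and run to run.
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

small = list(range(10_000))
large = list(range(1_000_000))

# Searching for a value that isn't there forces the worst case on both
# inputs; the 100x larger input should take noticeably longer.
_, t_small = timed(linear_scan, small, -1)
_, t_large = timed(linear_scan, large, -1)
print(f"small: {t_small:.6f}s  large: {t_large:.6f}s")
```

Timing like this is noisy and hardware-specific, which is exactly why asymptotic notation is useful: it describes how the cost scales, independent of any one machine.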