Master theorem

Analysing the complexity of recursive functions is not always easy, especially when the function does something fancy. Luckily, clever brains have provided a cookbook that makes determining the complexity of a recursive algorithm quite straightforward: the master theorem.

We may consider a recursive function as:
R(N) = A \cdot R\left(\frac{N}{B}\right) + O(N^C)
Let me explain the symbols. R is a recursive function called on a problem of size N. Being recursive, it calls itself A times, each time on a smaller problem of size N/B. In many recursive implementations, especially divide-and-conquer techniques, there is also a "merge" step (think of mergesort). The complexity of that merge step is expressed by the last term of the formula above, O(N^C).
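
To make the symbols concrete, here is a minimal Python sketch of a function with exactly this shape. The names solve and combine are hypothetical placeholders, not from any particular library:

```python
def combine(left, right):
    # Hypothetical O(N) "merge" step (C = 1). Here it just concatenates;
    # in mergesort it would interleave two sorted halves.
    return left + right

def solve(problem):
    """Divide-and-conquer skeleton matching R(N) = A * R(N/B) + O(N^C)."""
    if len(problem) <= 1:            # base case: constant-size problem
        return problem
    mid = len(problem) // 2          # split into subproblems of size N/B (B = 2)
    left = solve(problem[:mid])      # first of A = 2 recursive calls
    right = solve(problem[mid:])     # second recursive call
    return combine(left, right)      # merge step costing O(N^C)
```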

Then we can distinguish three cases that describe the complexity of the call (a small checker follows the list).
1. If A = B^C, then R(N) = O(N^C \log N)
2. If A < B^C, then R(N) = O(N^C)
3. If A > B^C, then R(N) = O(N^{\log_B A})
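
The three cases can be mechanized as a quick sanity check. This helper is a hypothetical illustration, not part of the original recipe; it simply compares A with B^C and returns the resulting bound:

```python
import math

def master_theorem(a, b, c):
    """Return the bound for R(N) = a * R(N/b) + O(N^c)."""
    if a == b ** c:                    # case 1: work is balanced across levels
        return f"O(N^{c} log N)"
    if a < b ** c:                     # case 2: the merge step dominates
        return f"O(N^{c})"
    return f"O(N^{math.log(a, b):g})"  # case 3: the recursive calls dominate

print(master_theorem(2, 2, 1))  # mergesort        -> O(N^1 log N)
print(master_theorem(1, 2, 0))  # binary search    -> O(N^0 log N), i.e. O(log N)
print(master_theorem(4, 2, 1))  # 4 calls, halving -> O(N^2)
```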

Do you remember mergesort? Mergesort makes two recursive calls, each on half of the problem. When the recursion is done, an extra "merge" step follows, which has O(N) complexity. If we plug this information into the formula, we get A = 2, B = 2, C = 1, so A = B^C (2 = 2^1). That is the first case, and we obtain a complexity of O(N \log N).
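
For reference, here is a minimal mergesort sketch in Python; the two recursive calls and the linear merge step correspond exactly to A = 2, B = 2, C = 1:

```python
def merge_sort(items):
    """Mergesort: A = 2 recursive calls, B = 2 (halving), C = 1 (linear merge)."""
    if len(items) <= 1:               # base case: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # first recursive call on N/2 elements
    right = merge_sort(items[mid:])   # second recursive call on N/2 elements

    merged, i, j = [], 0, 0           # O(N) merge of the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```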
