Recurrences in Algorithms | DSA Tutorials
YASH PAL, 6 May 2026

In algorithms, a recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs. When an algorithm contains a recursive call to itself, its running time can often be described by a recurrence equation, or recurrence, which expresses the overall running time on a problem of size n in terms of the running time on smaller inputs. For example, the recurrence for merge sort is:

\[
T(n) =
\begin{cases}
\Theta(1) & \text{if } n = 1 \\
2T(n/2) + \Theta(n) & \text{otherwise}
\end{cases}
\]

Let's look at some methods for solving recurrences so that the running time of an algorithm can be determined.

Recurrence-solving Methods
- Substitution Method
- Iteration Method
- Recursion Tree Method
- Master Method

Substitution Method

This method of solving recurrences consists of guessing the form of the solution and using mathematical induction to verify it. The guessed answer is substituted into the recurrence equation, and the values of the constants are determined. If the constants can be found using induction and basic algebra, the guessed form is a solution to the recurrence; otherwise, another form must be tried. The guessed solution establishes either an upper or a lower bound on the recurrence. The method is simple, but your guess must be close to the actual solution, and this comes easily only with experience working with recurrences. Let's take an example to understand this method.

Example 1: Determine an upper bound on the recurrence

\[ T(n) = 2T\!\left(\left\lfloor \frac{n}{2} \right\rfloor\right) + n \]

Solution: We guess that the solution is T(n) = O(n log₂ n). Then, by the definition of O-notation, we must prove that T(n) ≤ cn log₂ n for some legal choice of the constant c > 0.
Since floors and ceilings do not affect the asymptotic solution, we ignore them. Assume the bound also holds for n/2, so that

\[ T\!\left(\frac{n}{2}\right) \le c\,\frac{n}{2}\log_2\!\left(\frac{n}{2}\right). \]

Substituting into the recurrence, we have

\[
\begin{aligned}
T(n) &\le 2\left(c\,\frac{n}{2}\log_2\frac{n}{2}\right) + n \\
&\le cn\log_2\frac{n}{2} + n \\
&= cn\log_2 n - cn\log_2 2 + n \\
&= cn\log_2 n - cn + n \\
&\le cn\log_2 n \quad \text{for any constant } c \ge 1.
\end{aligned}
\]

The next step in the substitution method is to show, using mathematical induction, that the solution holds for the boundary conditions. Taking n = 1 as the base case, we would need

\[ T(1) \le c \cdot 1 \cdot \log_2 1 = 0, \]

which is impossible, since T(1) = Θ(1) is positive. Thus, taking n = 1 as the boundary condition creates a problem for our solution. But asymptotic notation only requires us to prove T(n) ≤ cn log₂ n for all n ≥ n₀, where n₀ is a constant of our choosing. Taking n₀ = 2, we show that T(2) and T(3) are good choices of boundary conditions for this recurrence. From the recurrence (with T(1) = 1), T(2) = 4 and T(3) = 5. For the bound T(n) ≤ cn log₂ n to hold at the base cases, we need T(2) ≤ 2c log₂ 2 and T(3) ≤ 3c log₂ 3. You can check for yourself that any choice of c ≥ 2 suffices for both base cases n = 2 and n = 3.
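The guessed bound can be sanity-checked numerically. The sketch below is an illustration, not part of the proof: it computes T(n) exactly for powers of two (so the floor never matters), assuming the base value T(1) = 1, and verifies T(n) ≤ 2n log₂ n:

```python
import math

def T(n):
    # Merge-sort style recurrence: T(1) = 1, T(n) = 2*T(n/2) + n,
    # restricted to powers of two so that n/2 is always exact.
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# The substitution method claims T(n) <= c * n * log2(n) for n >= 2;
# c = 2 was shown above to cover the base cases T(2) = 4 and T(3) = 5.
c = 2
for k in range(1, 13):
    n = 2 ** k
    assert T(n) <= c * n * math.log2(n), n
print("bound holds for n = 2 .. 2**12")
```

Running the loop for larger powers of two behaves the same way, which is consistent with (though of course no substitute for) the induction argument.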
Ignoring the base condition n = 1 in this example illustrates an important point: the base case of the induction can be different from the base case of the recurrence, since we only need some n₀ such that the bound holds for all n ≥ n₀.

Iteration Method

In the iterative substitution, or "plug-and-chug", technique, we repeatedly apply the recurrence equation to itself and look for a pattern. This method differs from the substitution method in that we do not have to guess the answer; instead, we iterate the recurrence on smaller and smaller inputs until a termination condition is reached. It therefore requires more algebraic work than the substitution method. Let's understand this with an example.

Example 2: Find the solution of the following recurrence

\[ T(n) = 2T\!\left(\frac{n}{2}\right) + bn \]

Solution: Using the iteration method, we substitute n/2 the first time, n/4 the second time, and so on. The recurrence is iterated as:

\[
\begin{aligned}
T(n) &= 2T\!\left(\frac{n}{2}\right) + bn \\
&= 2\left(2T\!\left(\frac{n}{2^2}\right) + b\,\frac{n}{2}\right) + bn
 = 2^2 T\!\left(\frac{n}{2^2}\right) + 2bn \\
&= 2^3 T\!\left(\frac{n}{2^3}\right) + 3bn \\
&= 2^4 T\!\left(\frac{n}{2^4}\right) + 4bn \\
&\;\;\vdots \\
&= 2^i T\!\left(\frac{n}{2^i}\right) + ibn
\end{aligned}
\]

It is clear from the last line that the terminating (boundary) condition is reached when n/2ⁱ = 1, i.e. n = 2ⁱ, or i = log₂ n. Taking T(1) to be some constant, say b, we have

T(n) = nb + bn log₂ n

Thus, the solution to the recurrence is T(n) = O(n log₂ n).

Recursion Tree Method

As the name indicates, the recursion tree method is a tree-based method for solving recurrences. It makes the iterative view of the recurrence explicit, letting us visualise each step of the expansion. Each level of the tree shows the nodes expanded at that step.
Since each expansion has two parts, a recursive term and a non-recursive term, every node of the tree carries the corresponding terms of the recurrence. We associate a cost with each level of the tree; summing the costs over all levels gives the total cost, i.e. the solution of the recurrence. This method is particularly useful for recurrences that arise from the divide-and-conquer strategy of algorithm design. Let's understand it using an example.

Example 3: Consider the following recurrence:

\[
T(n) =
\begin{cases}
b & \text{if } n < 2 \\
2T(n/2) + bn & \text{if } n \ge 2
\end{cases}
\]

Solve the above recurrence using the recursion tree method.

Solution: The recursion tree for the recurrence is shown in Figure 1: (a) simply T(n), and (b) T(n) expanded once. At the root we write the non-recursive cost, and the children of a node are its recursive terms T. Figure 1 (c) shows one more expansion, substituting the values for T(n/2). We continue expanding each node in the tree, breaking it into its constituent parts as determined by the recurrence.

[Figure 1: Recursion tree method]
[Figure 2: Complete Recursion Tree]

On the right side of each level we write the cost of that level. At the root level we have the non-recursive cost of the original recurrence, i.e. bn. At the next level the cost comes from 2 subproblems of size n/2, whose sum is bn/2 + bn/2 = bn. Similarly, at the level after that the cost is 4 · bn/4 = bn. The depth, the number of T's and the subproblem size are shown in the table below; the number of T's denotes the number of nodes at each level.

Depth | Number of T's | Size
0     | 1             | n
1     | 2             | n/2
i     | 2^i           | n/2^i
…     | …             | …

The subproblem size of a node at depth i is n/2ⁱ. Thus, the recursion terminates when n/2ⁱ = 1 or, equivalently, when i = log₂ n, so the tree has log₂ n + 1 levels (0, 1, 2, …, log₂ n).
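The per-level costs in the table can also be totalled programmatically. This sketch is my own illustration, assuming n is a power of two and taking the base cost as b: it sums bn over the internal levels plus the leaf level, and checks the sum against direct evaluation of the recurrence.

```python
import math

def T(n, b=1):
    # The recurrence of Example 3: T(n) = b if n < 2, else 2*T(n/2) + b*n.
    return b if n < 2 else 2 * T(n // 2, b) + b * n

def tree_cost(n, b=1):
    # Level i (for i = 0 .. log2(n) - 1) has 2**i nodes of size n / 2**i,
    # each contributing b * (n / 2**i), so every internal level costs b*n.
    levels = int(math.log2(n))
    internal = levels * b * n
    # The last level has n leaves (subproblem size 1), each costing b.
    leaves = n * b
    return internal + leaves

for n in (2, 4, 16, 64):
    assert tree_cost(n) == T(n)
print("level sums match the recurrence")
```

The assertion passing for each tested n reflects the fact that the tree is just a bookkeeping device: expanding the recurrence and summing level by level must reproduce its value exactly.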
The total cost of the recursion is the cost of the last level plus the cost of all previous levels. The last level has 2ⁱ = 2^(log₂ n) nodes, and since 2^(log₂ n) = n^(log₂ 2) = n, it contains n leaves, each of constant cost b, for a total of Θ(n). Each of the log₂ n previous levels costs bn. Thus, the total cost is

T(n) = (log₂ n) · bn + Θ(n) = O(n log n)

Master Method

The master method applies to a particular form of recurrence relation and gives the resulting complexity directly, without expanding the function. It is, in fact, an application of the master theorem. The master theorem states that if

\[ T(n) = aT\!\left(\frac{n}{b}\right) + f(n), \qquad \text{where } f(n) = O(n^d), \]

then, depending on the values of a, b and d,

\[
T(n) =
\begin{cases}
O(n^d) & \text{if } d > \log_b a \\
O(n^d \log_b n) & \text{if } d = \log_b a \\
O(n^{\log_b a}) & \text{if } d < \log_b a
\end{cases}
\]

This theorem relies on the same sum rule that we explained for asymptotic notations. We have two terms to compare: aT(n/b) and f(n). Whichever term has the higher order becomes the order of the whole relation, as per the sum rule. When analysing a recursive algorithm, the constants and functions take on the following significance:

- n is the size of the problem.
- a is the number of subproblems in the recursion.
- n/b is the size of each subproblem (it is assumed that all subproblems are essentially the same size).
- f(n) is the cost of the work done outside the recursive calls, including the cost of dividing the problem and of merging the solutions to the subproblems.

Figure 3 illustrates how the recurrence form used in the master theorem arises.

[Figure 3: Master Method]

Let us take all three cases in detail.

Case-1 : d > log_b a: This simply means that f(n) has a higher order than the recursive term.
In other words, the operations besides the recursive calls consume most of the time. Hence, the whole relation has the same order as f(n), that is, O(n^d).

Case-2 : d = log_b a: When both terms have the same order, we can view the algorithm as O(n^d) work repeated across the levels of recursion; an input of size n gives rise to log_b n levels of recursive calls. Using the product rule, the complexity is O(n^d log_b n).

Case-3 : d < log_b a: Here the time consumption is dominated by the recursive calls, so the final complexity depends only on their order. Each call spawns a recursive calls, and the input shrinks by a factor of b with every call. Hence, the complexity is O(n^(log_b a)).

This form of the master theorem lets us calculate the running time of such recursive algorithms in big-O notation without expanding the recurrence relation. In its generic form, the theorem is slightly different and provides tight asymptotic (Θ) bounds for the recurrence. It can be stated as follows. For recurrences of the form

\[ T(n) = aT\!\left(\frac{n}{b}\right) + f(n): \]

Case-1 : If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
Case-2 : If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) log n).
Case-3 : If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, then T(n) = Θ(f(n)).

For the last case to apply, one extra (regularity) condition must hold: a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n. This generic form is needed only for certain applications, as seen in examples.
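The simplified (d-based) classification can be written as a small helper. This is a sketch of my own; the function name `master` and the output strings are arbitrary choices, and it covers exactly the three cases listed above:

```python
import math

def master(a, b, d):
    # Classify T(n) = a*T(n/b) + O(n^d) using the simplified master theorem.
    crit = math.log(a, b)  # the critical exponent log_b(a)
    if math.isclose(d, crit):
        return f"O(n^{d} log n)"   # case 2: d == log_b(a)
    if d > crit:
        return f"O(n^{d})"         # case 1: f(n) dominates
    return f"O(n^{crit:.3g})"      # case 3: the recursion dominates

print(master(2, 2, 1))  # merge sort          -> O(n^1 log n)
print(master(1, 2, 0))  # binary search       -> O(n^0 log n), i.e. O(log n)
print(master(7, 2, 2))  # Strassen's multiply -> O(n^2.81)
```

The `math.isclose` comparison guards against floating-point error in `log_b(a)` when d and the critical exponent are equal, e.g. a = 2, b = 2, d = 1.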
The method can be better understood by applying it to various recurrence relations.

Example 4: Solve the following recurrence relation using the master method:

\[ T(n) = 3T\!\left(\frac{n}{3}\right) + kn \]

Solution: Comparing with T(n) = aT(n/b) + f(n), we get

a = 3, b = 3 ⇒ log_b a = log₃ 3 = 1
f(n) = kn = O(n) ⇒ d = 1

Since log_b a = d, the second case applies:

T(n) = O(n^d log_b n) = O(n log₃ n)
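As a quick numerical check of Example 4 (my own sketch, taking the base value T(n) = k for n < 3 and restricting n to powers of three), the ratio T(n) / (n log₃ n) should approach a constant if T(n) = Θ(n log₃ n):

```python
import math

def T(n, k=1):
    # Example 4's recurrence: T(n) = 3*T(n/3) + k*n, with T(n) = k for n < 3.
    return k if n < 3 else 3 * T(n // 3, k) + k * n

# For n = 3^p the exact solution is T(n) = p*k*n + k*n, so the ratio
# below equals (p + 1)/p and tends to 1 as n grows.
for p in (2, 4, 8, 12):
    n = 3 ** p
    ratio = T(n) / (n * math.log(n, 3))
    print(f"n = 3^{p:<2}  T(n)/(n log3 n) = {ratio:.3f}")
```

The shrinking ratio is consistent with the O(n log₃ n) bound obtained from the master method.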