Update (December 17, 2023): Added a new paragraph in the section on dynamic programming to provide a more balanced view of its trade-offs.
Update (January 18, 2024): Updated the dynamic programming section with a solution using proper memoization, and revised the explanation accordingly.
Update (January 19, 2024): Added tiny optimization for the n = 0 case in the recursive solution, which I had forgotten there.
Update (February 20, 2024): Small improvement in the dynamic programming section.
Table of Contents
- Introduction
- Recursion and Backtracking
- The Call Stack
- The Kata
- Walkthrough
- Bonus: Pure Recursion vs. Dynamic Programming
- Conclusion
Introduction
Recently, I was working on a coding kata on codewars.com. Early on, I started thinking that a potential solution might utilize recursion, a concept that involves a function calling itself. However, I quickly realized that my grasp of recursion was not as solid as it needed to be for this task. In this post, I will share the insights gained from deepening my understanding of recursion while working through the kata.
Recursion and Backtracking
Recursion and the related concept of backtracking are techniques in computer science used to solve complex problems by breaking them down into simpler subproblems.
Recursion involves a function calling itself with different arguments until it reaches a so-called base case. The base case is a condition that signals the function to stop calling itself and start returning. It's the simplest possible version of the problem, one that can be solved directly without any further recursive calls. The base case is crucial in recursion as it prevents infinite recursion and allows the function to terminate at a certain point. We will revisit this concept in the practical part of this article, which should provide a clearer understanding of what it means and how it works.
Backtracking, on the other hand, involves exploring all possible solutions to a problem and, when a dead end is reached, "backtracking" to a previous step and trying a different path. These techniques are often used in combination to solve problems in areas such as combinatorics, parsing, and artificial intelligence. They're particularly useful for problems where the solution involves exploring all possible combinations or permutations of a set of elements.
One prominent example of a task that can be effectively solved using recursion is the mathematical puzzle known as the Tower(s) of Hanoi. It involves moving a stack of disks from one peg to another following certain rules. Notably, Douglas Crockford referenced this puzzle in his book "JavaScript: The Good Parts" to explain recursion.
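Crockford's book has its own version; here is a minimal sketch of my own of a Hanoi solver in JavaScript, with illustrative function and peg names:

```javascript
// Move n disks from peg `from` to peg `to`, using `via` as a spare.
// Base case: n === 0 means there is nothing left to move.
function hanoi(n, from, to, via, moves = []) {
  if (n === 0) return moves;
  hanoi(n - 1, from, via, to, moves); // clear the way for the largest disk
  moves.push(`${from} -> ${to}`);     // move the largest disk
  hanoi(n - 1, via, to, from, moves); // restack the smaller disks on top
  return moves;
}

console.log(hanoi(3, "A", "C", "B").length); // 7 moves for 3 disks (2^3 - 1)
```

Each call delegates a smaller version of the same problem to itself, which is exactly the recursive pattern we'll use for the kata.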
The Call Stack
In JavaScript, the call stack is the mechanism the engine uses to keep track of function calls; it works hand in hand with the event loop, which drives JavaScript's single-threaded, non-blocking, asynchronous behavior. The call stack, recursion, and backtracking are all interconnected. When a function is called in JavaScript, it's added (or "pushed") onto the call stack. If that function calls another function, the new function is pushed onto the top of the stack and becomes the currently executing function. This "Last In, First Out" (LIFO) structure of the call stack is what allows recursion to work. Each recursive call adds a new layer to the stack, and when a base case is reached, the functions start returning and are "popped off" the stack one by one. This is where backtracking comes into play, allowing the program to explore different paths or solutions. The event loop only processes queued tasks once the call stack is empty, which means all currently executing functions have finished.
The Kata
To better understand how these techniques work in practice, let's take a closer look at the aforementioned kata which involves generating all possible combinations of a set of parens:
Write a function which makes a list of strings representing all of the ways you can balance n pairs of parentheses.
Examples:
balancedParens(0) => [""]
balancedParens(1) => ["()"]
balancedParens(2) => ["()()","(())"]
balancedParens(3) => ["()()()","(())()","()(())","(()())","((()))"]
As hinted at above, this is a classic example of a problem that can be solved using recursion and backtracking.
Our strategy will be to build all combinations of parentheses from scratch, starting with an empty string and adding one parenthesis at a time. We'll keep track of the number of open and close parentheses we've added so far to ensure that we only add valid combinations.
To do this, we'll use a recursive function, which we'll call generate. This function will take the current combination of parentheses and the number of open and close parentheses used so far.
The function will add an open parenthesis if we haven't used up all the open parentheses, and a close parenthesis if it doesn't exceed the number of open parentheses. This ensures that we don't end up with any unbalanced combinations like ((()).
If we've used up all the open and close parentheses, we have a valid combination, which we'll add to our list. If not, we'll continue to add parentheses by making recursive calls to generate.
Codewars provides an empty balancedParens function accepting one argument, n, representing the number of pairs of parentheses to balance. We're going to put our generate function inside of it:
function balancedParens(n) {
  if (n === 0) {
    return [""];
  }

  let result = [];

  const generate = (current, open, close) => {
    if (current.length === 2 * n) {
      result.push(current);
      return;
    }
    if (open < n) {
      generate(current + "(", open + 1, close);
    }
    if (close < open) {
      generate(current + ")", open, close + 1);
    }
  };

  generate("", 0, 0);
  return result;
}
Walkthrough
Now, let's walk through the process of how this function works in the context of the kata, using a simple visualization of the call stack. This will give us a clear, step-by-step understanding of how recursion and backtracking unfold in real time.
To avoid too many explanatory steps while still being thorough, I'm going to focus on the n = 2 case.
Step 1
The initial call to the recursive function generate is made with an empty string: generate("", 0, 0). This pushes a new frame onto the call stack. The previously empty stack now looks like this:
- generate("", 0, 0)
The result array is [].
Step 2
This leads to a recursive call with an open parenthesis added: generate("(", 1, 0). This pushes another frame onto the stack:
- generate("(", 1, 0)
- generate("", 0, 0)
The result array is still [].
Step 3
This call can lead to two recursive calls: one where another open parenthesis is added, generate("((", 2, 0), and one where a close parenthesis is added, generate("()", 1, 1). Let's follow the first call: generate("((", 2, 0). This pushes another frame onto the stack:
- generate("((", 2, 0)
- generate("(", 1, 0)
- generate("", 0, 0)
The result array is still [].
Step 4
Now, we can only add close parentheses. The next call is generate("(()", 2, 1). This pushes another frame onto the stack:
- generate("(()", 2, 1)
- generate("((", 2, 0)
- generate("(", 1, 0)
- generate("", 0, 0)
The result array is still [].
Step 5
Finally, we add another close parenthesis: generate("(())", 2, 2). This pushes another frame onto the stack:
- generate("(())", 2, 2)
- generate("(()", 2, 1)
- generate("((", 2, 0)
- generate("(", 1, 0)
- generate("", 0, 0)
We've reached a valid combination, "(())", which is added to the result array. The result array is now ["(())"].
This is the base case, where the current combination of parentheses has reached the length of 2 * n (since each pair of parentheses contributes two characters to the string). When this condition is met, we know we've added all the parentheses we can, and we add the current combination to the result array. This is why we check if current.length === 2 * n at the beginning of each call to generate. If this condition is true, we've reached the base case: we add current to result and return from the function. This is the point at which we stop making new recursive calls for the current combination and start backtracking.
Step 6
We have now explored all combinations starting with two open parentheses. The call generate("(())", 2, 2) finishes, so its frame is popped from the stack. The stack now looks like this:
- generate("(()", 2, 1)
- generate("((", 2, 0)
- generate("(", 1, 0)
- generate("", 0, 0)
The result array is still ["(())"].
Step 7
Control returns to the previous call, generate("(()", 2, 1). In this call, we've already explored the possibility of adding a close parenthesis (which led to the valid combination "(())"), so we can't do anything more here. The function finishes and its frame is popped from the stack. The stack now looks like this:
- generate("((", 2, 0)
- generate("(", 1, 0)
- generate("", 0, 0)
The result array is still ["(())"].
Step 8
Control returns to the previous call, generate("((", 2, 0). In this call, we've already explored the possibility of adding a close parenthesis (which led to the string "(()"), so we can't do anything more here. The function finishes and its frame is popped from the stack. The stack now looks like this:
- generate("(", 1, 0)
- generate("", 0, 0)
The result array is still ["(())"].
Step 9
Control returns to the previous call, generate("(", 1, 0). In this call, we've already explored the possibility of adding an open parenthesis (which led to the string "(("), so now we add a close parenthesis and call generate("()", 1, 1) to explore this new possibility. This pushes another frame onto the stack:
- generate("()", 1, 1)
- generate("(", 1, 0)
- generate("", 0, 0)
The result array is still ["(())"].
Step 10
From here, we can add an open parenthesis: generate("()(", 2, 1). This pushes another frame onto the stack:
- generate("()(", 2, 1)
- generate("()", 1, 1)
- generate("(", 1, 0)
- generate("", 0, 0)
The result array is still ["(())"].
Step 11
Finally, we add a close parenthesis: generate("()()", 2, 2). This pushes another frame onto the stack:
- generate("()()", 2, 2)
- generate("()(", 2, 1)
- generate("()", 1, 1)
- generate("(", 1, 0)
- generate("", 0, 0)
We've reached another valid combination, "()()", which is added to the result array. The result array is now ["(())", "()()"]. Once again, this is the base case, where the current combination of parentheses has reached the length of 2 * n.
Final Steps
The function continues to backtrack, popping off all remaining items from the call stack as there are no further valid combinations remaining. Each pop corresponds to a return from a recursive call, indicating that all combinations starting with the current set of parentheses have been fully explored.
After all the remaining items have been popped off the stack, the outer function has completed its task. It then returns the result array, marking the end of the process and the completion of the task.
Summary
This step-by-step walkthrough illustrates how recursion and backtracking work together to solve the problem. The recursive function generate explores all possible combinations of parentheses, while the call stack keeps track of the different combinations we're exploring. When we've explored all combinations starting with a certain prefix, we backtrack by returning from the current call and popping its frame from the stack. This allows us to explore other combinations starting with a different prefix.
There are also neat tools like latentflip.com/loupe/, www.jsv9000.app and pythontutor.com/javascript.html that allow you to step through the execution of code and provide visualizations at each step.
Bonus: Pure Recursion vs. Dynamic Programming
While this article focuses on recursion, it's important to note that recursion alone can sometimes lead to inefficiencies, especially when a function is called multiple times with the same arguments, leading to redundant computations. This happens because a purely recursive solution doesn't store the results of subproblems, so each time a subproblem is encountered, it's solved from scratch.
However, purely recursive solutions can be simpler to understand and implement, especially for problems that are naturally suited to recursion. They can also be more intuitive, as they often closely mirror the problem's definition. For example, consider a recursive function to calculate the Fibonacci sequence. The problem's definition naturally lends itself to a recursive solution: the nth Fibonacci number is the sum of the (n-1)th and (n-2)th Fibonacci numbers.
But this recursive solution can lead to inefficiencies. To calculate the 5th Fibonacci number, the function needs to calculate the 4th and 3rd Fibonacci numbers. To calculate the 4th Fibonacci number, it needs to calculate the 3rd and 2nd Fibonacci numbers. Notice that the 3rd Fibonacci number is calculated twice. As the input size increases, the number of redundant calculations grows exponentially, leading to poor performance.
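To make the redundancy concrete, here is a naive recursive Fibonacci with a call counter (the counter is my addition for illustration):

```javascript
let fibCalls = 0;

function fib(n) {
  fibCalls++;                  // count every invocation, including repeats
  if (n < 2) return n;         // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(5), fibCalls); // 5 15 — fifteen calls just to compute fib(5)
```

The count grows exponentially with n, because the same subproblems are recomputed again and again.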
Dynamic programming is a strategy that can help overcome this inefficiency. It solves complex problems by breaking them down into simpler subproblems, solving each subproblem only once, and storing the results of these subproblems to avoid redundant work. This is where memoization comes in.
Memoization is a technique often used in dynamic programming to optimize the computation time of recursive functions. It works by storing the results of (expensive) function calls and reusing them when the same inputs occur again. However, it's important to note that to avoid unexpected behavior and bugs, memoization should generally only be used with pure functions, or functions that behave like pure functions for the range of inputs you're interested in.
Here's a dynamic programming solution to the parentheses kata:
function balancedParens(n, memo = {}) {
  if (n === 0) {
    return [""];
  }
  if (memo[n]) {
    return memo[n];
  }

  let parentheses = [];

  for (let i = 0; i < n; i++) {
    for (let left of balancedParens(i, memo)) {
      for (let right of balancedParens(n - 1 - i, memo)) {
        parentheses.push(`(${left})${right}`);
      }
    }
  }

  memo[n] = parentheses;
  return parentheses;
}
This solution still utilizes recursion to break the problem down into smaller subproblems. However, it differs from a pure recursive solution in how it uses the results of these subproblems. Instead of solving the same subproblem multiple times, it solves each subproblem only once and stores the results for future use. This approach eliminates the need for backtracking and can be more efficient in terms of the call stack, as the full computation runs only once for each unique value of n.
Let's consider the parentheses task with n being 2. The initial call to the function is balancedParens(2), which checks the memo object to see if the result for n = 2 has already been computed. If not, it initializes an empty parentheses array and enters a loop that will iterate n times.
On the first iteration, the function makes two nested calls to balancedParens, with arguments 0 and 1, respectively. Since these calls have not been made before, their results are computed; the result for n = 1 is stored in the memo object (the n = 0 case returns [""] directly without touching the memo). The function then generates new combinations by wrapping a pair of parentheses around each combination from the n = 0 array and appending each combination from the n = 1 array. These new combinations are added to the parentheses array.
The function continues to iterate, generating all combinations for n = 2, until it has explored all possibilities and returns the parentheses array, which is also stored in the memo object for n = 2.
The key point here is that each unique call to balancedParens(i, memo) and balancedParens(n - 1 - i, memo) within the loop is computed only once for each unique value of i and n - 1 - i. The memoization ensures that the results of these calls are stored and then reused in the nested for loops to generate all combinations of parentheses. This is the essence of dynamic programming: solving each subproblem once, storing the result, and reusing it to avoid redundant work.
While dynamic programming can offer benefits in terms of reducing redundant computations, it's important to note that it may increase memory usage due to the need to store intermediate results. For the problem at hand, the dynamic programming solution with memoization should outperform the pure recursive solution in terms of speed of execution, as it avoids recalculating the same subproblems. But this likely comes at the cost of higher memory usage. I've chosen to include the dynamic programming solution to provide a fuller picture of the different strategies that can be used to tackle this problem and to illustrate the concept of dynamic programming in a practical context.
I also openly admit that, for me, comprehending the dynamic programming solution presents a greater challenge compared to the purely recursive solution. The complexity introduced by the nested loops, along with the need to correctly handle the left and right conditions, adds layers of difficulty that make the solution harder to grasp.
I recommend using the tools mentioned earlier to get a similar step-by-step walkthrough of this solution, as it provides valuable insights into how the function builds up the solutions for values of n to generate all combinations of well-formed parentheses.
Conclusion
Recursion and backtracking are powerful tools in a programmer's toolkit, capable of tackling complex problems with elegant solutions. Understanding these concepts, along with related concepts like the call stack and dynamic programming, can help you write more efficient and effective code. Whether you're generating combinations of parentheses, solving a puzzle, or traversing a tree, recursion and backtracking can often provide the solution you need.
(Cover image on top by Tine Ivanič on Unsplash)