Dynamic programming (DP) is a core technique in computer science and a staple of competitive programming. It solves complex problems by breaking them into overlapping subproblems, solving each subproblem only once, and storing the results so they can be reused. In this blog, we will walk through the dynamic programming algorithms every competitive programmer should know.
Fibonacci Numbers
The Fibonacci sequence is defined by the recurrence relation F(n) = F(n-1) + F(n-2), with base cases F(0) = 0 and F(1) = 1. A naive recursive implementation of this recurrence runs in exponential time because it recomputes the same values over and over. With memoization, which stores the result of each subproblem the first time it is computed, the running time drops to O(n).
def fibonacci(n, memo=None):
    # Top-down DP: cache each F(k) the first time it is computed.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        memo[n] = n
    else:
        memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]
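If recursion depth is a concern for large n, the same recurrence can be filled in bottom-up with two rolling variables. A minimal iterative sketch (the function name is illustrative):

def fibonacci_iterative(n):
    # Bottom-up DP with O(1) extra space: keep only the last two values.
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev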
Longest Common Subsequence
The Longest Common Subsequence (LCS) problem asks for the longest sequence of characters that appears in both of two given strings in the same relative order, though not necessarily contiguously. The standard DP builds a table dp[i][j] holding the LCS length of the first i characters of one string and the first j characters of the other, giving an O(mn) solution.
def lcs(s1, s2):
    # dp[i][j] = length of the LCS of s1[:i] and s2[:j]
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
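The table above only gives the length. If you also want the subsequence itself, one common approach is to retrace the table from the bottom-right corner. A sketch of that idea (the helper name is illustrative):

def lcs_string(s1, s2):
    # Rebuild the actual subsequence by retracing the dp table.
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if s1[i - 1] == s2[j - 1]:
            out.append(s1[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))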
Knapsack Problem
The 0/1 Knapsack problem asks for the most valuable subset of items that fits into a knapsack of capacity W, where each item can be taken at most once. The DP table dp[i][w] stores the best value achievable using the first i items with capacity w, which yields an O(nW) pseudo-polynomial algorithm.
def knapsack(W, wt, val, n):
    # dp[i][w] = best value using the first i items with capacity w
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if wt[i - 1] <= w:
                dp[i][w] = max(val[i - 1] + dp[i - 1][w - wt[i - 1]], dp[i - 1][w])
            else:
                dp[i][w] = dp[i - 1][w]
    return dp[n][W]
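Because each row of the table depends only on the previous row, the 2D table can be collapsed to a single array by iterating capacities from high to low. A space-optimized sketch (the function name is illustrative):

def knapsack_1d(W, wt, val):
    # 1D DP: iterate capacity downwards so each item is used at most once.
    dp = [0] * (W + 1)
    for weight, value in zip(wt, val):
        for w in range(W, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[W]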
Edit Distance
The Edit Distance (Levenshtein distance) problem asks for the minimum number of insertions, deletions, and substitutions needed to transform one string into another. The DP table dp[i][j] holds the edit distance between the first i characters of s1 and the first j characters of s2, giving an O(mn) algorithm.
def edit_distance(s1, s2):
    # dp[i][j] = minimum edits to turn s1[:i] into s2[:j]
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0:
                dp[i][j] = j          # insert all of s2[:j]
            elif j == 0:
                dp[i][j] = i          # delete all of s1[:i]
            elif s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],      # delete
                                   dp[i][j - 1],      # insert
                                   dp[i - 1][j - 1])  # substitute
    return dp[m][n]
Maximum Subarray
The Maximum Subarray problem asks for the contiguous subarray with the largest sum. Kadane's algorithm is the classic DP solution: it scans the array once, maintaining the best sum of a subarray ending at the current position, and runs in O(n) time.
def max_subarray(arr):
    # Kadane's algorithm: track the best sum of a subarray ending at each position.
    max_sum = float('-inf')
    current_sum = 0
    for x in arr:
        current_sum += x
        max_sum = max(max_sum, current_sum)
        current_sum = max(current_sum, 0)  # drop a negative running prefix
    return max_sum
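If you also need to know where the best subarray starts and ends, the same scan can carry indices along. A sketch under that assumption (the function name is illustrative):

def max_subarray_with_indices(arr):
    # Same recurrence, but also remember where the best subarray starts and ends.
    best_sum = float('-inf')
    current_sum = 0
    best_start = best_end = start = 0
    for i, x in enumerate(arr):
        if current_sum <= 0:
            start = i
            current_sum = x
        else:
            current_sum += x
        if current_sum > best_sum:
            best_sum = current_sum
            best_start, best_end = start, i
    return best_sum, best_start, best_end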
Coin Change
The Coin Change problem comes in two classic flavours: counting the number of ways to make a given amount from a set of coin denominations, and finding the minimum number of coins needed to make that amount. The code below solves the second variant with a 1D table where dp[i] is the fewest coins summing to i, returning -1 if the amount cannot be made.
def coin_change(coins, amount):
    # dp[i] = minimum number of coins that sum to i (inf if unreachable)
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    return dp[amount] if dp[amount] != float('inf') else -1
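For the counting variant, the transition adds counts instead of taking a minimum, and putting the coin loop on the outside counts each combination once regardless of order. A sketch (the function name is illustrative):

def coin_change_ways(coins, amount):
    # dp[i] = number of ways to make amount i (order of coins does not matter)
    dp = [0] * (amount + 1)
    dp[0] = 1
    for coin in coins:
        for i in range(coin, amount + 1):
            dp[i] += dp[i - coin]
    return dp[amount]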
Matrix Chain Multiplication
The Matrix Chain Multiplication problem asks for the cheapest order in which to multiply a chain of matrices, measured by the number of scalar multiplications. It is a textbook interval DP: m[i][j] stores the minimum cost of multiplying matrices i through j, and the best split point is recorded in s[i][j]. The dimensions are given as an array p, where matrix i has shape p[i] x p[i+1]. The problem appears in computer graphics, numerical analysis, and scientific computing.
def matrix_chain_order(p):
    # p holds dimensions: matrix i is p[i] x p[i+1], so there are len(p) - 1 matrices.
    n = len(p) - 1
    m = [[float('inf')] * n for _ in range(n)]  # m[i][j] = min cost for chain i..j
    s = [[0] * n for _ in range(n)]             # s[i][j] = best split point
    for i in range(n):
        m[i][i] = 0
    for l in range(2, n + 1):          # chain length
        for i in range(n - l + 1):
            j = i + l - 1
            for k in range(i, j):      # try every split point
                q = m[i][k] + m[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s
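The split table s is what lets you recover the actual parenthesization rather than just its cost. One possible reconstruction helper (the name and example dimensions are illustrative):

def print_optimal_parens(s, i, j):
    # Recursively rebuild the parenthesization recorded in the split table s.
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + print_optimal_parens(s, i, k) + print_optimal_parens(s, k + 1, j) + ")"

# Example usage (dimensions are made up for illustration):
# m, s = matrix_chain_order([10, 20, 30, 40])
# print(m[0][2], print_optimal_parens(s, 0, 2))   # 18000 ((A0A1)A2)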
Longest Increasing Subsequence
The Longest Increasing Subsequence (LIS) problem asks for the longest strictly increasing subsequence of a given sequence. The straightforward DP below uses dp[i] = length of the longest increasing subsequence ending at index i and runs in O(n^2); the problem shows up in data compression, pattern recognition, and bioinformatics.
def lis(arr):
    # dp[i] = length of the longest strictly increasing subsequence ending at i
    n = len(arr)
    dp = [1] * n
    for i in range(1, n):
        for j in range(i):
            if arr[i] > arr[j]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)
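For large inputs the O(n^2) loop is often too slow; a standard alternative keeps, for each length, the smallest possible tail of an increasing subsequence of that length, giving O(n log n). A sketch using Python's bisect module (the function name is illustrative):

import bisect

def lis_fast(arr):
    # tails[k] = smallest possible tail of an increasing subsequence of length k + 1
    tails = []
    for x in arr:
        pos = bisect.bisect_left(tails, x)  # bisect_left keeps the subsequence strictly increasing
        if pos == len(tails):
            tails.append(x)
        else:
            tails[pos] = x
    return len(tails)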
Traveling Salesman Problem
The Traveling Salesman Problem (TSP) asks for the shortest tour that visits every city exactly once and returns to the starting city. The Held-Karp dynamic programming approach uses a bitmask of visited cities as part of the state and runs in O(n^2 * 2^n) time, which is practical for roughly 20 cities or fewer. TSP appears in logistics, transportation, and network optimization.
def tsp(graph, start):
    # Held-Karp style top-down DP: 'remaining' is a bitmask of cities left to visit.
    n = len(graph)
    memo = {}

    def dfs(node, remaining):
        if remaining == 0:
            return graph[node][start]  # return to the starting city
        if (node, remaining) in memo:
            return memo[(node, remaining)]
        ans = float('inf')
        for i in range(n):
            if remaining & (1 << i):
                ans = min(ans, graph[node][i] + dfs(i, remaining ^ (1 << i)))
        memo[(node, remaining)] = ans
        return ans

    # The start city is already "visited", so clear its bit from the initial mask.
    return dfs(start, ((1 << n) - 1) ^ (1 << start))
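A quick illustrative run on a small, made-up 4-city distance matrix:

graph = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(tsp(graph, 0))  # 80, the cost of the tour 0 -> 1 -> 3 -> 2 -> 0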
0-1 Integer Programming
The 0-1 Integer Programming problem asks for the best assignment of binary decision variables subject to a set of constraints. In full generality it is NP-hard, but the single-constraint case is exactly the 0/1 knapsack above, and the same pseudo-polynomial DP applies. It appears in resource allocation, scheduling, and production planning.
def knapsack(W, wt, val, n):
    # Identical to the 0/1 knapsack recurrence shown earlier.
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if wt[i - 1] <= w:
                dp[i][w] = max(val[i - 1] + dp[i - 1][w - wt[i - 1]], dp[i - 1][w])
            else:
                dp[i][w] = dp[i - 1][w]
    return dp[n][W]
Edit Distance with Allowed Operations
The Edit Distance problem can be generalized so that specific character-to-character operations carry their own costs. In the variant below, allowed_ops maps a character pair to a (delete, insert, substitute) cost triple; pairs not in the map fall back to unit-cost edits.
def edit_distance_with_allowed_ops(s1, s2, allowed_ops):
    # allowed_ops maps a character pair to a (delete, insert, substitute) cost triple.
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            elif (s1[i - 1], s2[j - 1]) in allowed_ops:
                op_cost = allowed_ops[(s1[i - 1], s2[j - 1])]
                dp[i][j] = min(dp[i - 1][j] + op_cost[0],
                               dp[i][j - 1] + op_cost[1],
                               dp[i - 1][j - 1] + op_cost[2])
            else:
                dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[m][n]
Longest Palindromic Substring
The Longest Palindromic Substring problem asks for the longest contiguous palindrome inside a string. The DP below fills a boolean table dp[i][j] indicating whether s[i..j] is a palindrome, expanding from shorter intervals to longer ones in O(n^2) time and space.
def longest_palindromic_substring(s):
    # dp[i][j] is True when s[i..j] (inclusive) is a palindrome.
    n = len(s)
    dp = [[False] * n for _ in range(n)]
    max_len = 1
    start = 0
    for i in range(n):
        dp[i][i] = True
    for l in range(2, n + 1):          # substring length
        for i in range(n - l + 1):
            j = i + l - 1
            if l == 2:
                dp[i][j] = s[i] == s[j]
            else:
                dp[i][j] = s[i] == s[j] and dp[i + 1][j - 1]
            if dp[i][j] and l > max_len:
                max_len = l
                start = i
    return s[start:start + max_len]
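The table costs O(n^2) memory; an equally common alternative expands around each of the 2n - 1 possible centers and uses only O(1) extra space, still in O(n^2) time. A sketch (the function name is illustrative):

def longest_palindrome_expand(s):
    # Expand around every possible center (odd- and even-length palindromes).
    if not s:
        return ""
    best = s[0]
    for center in range(len(s)):
        for left, right in ((center, center), (center, center + 1)):
            while left >= 0 and right < len(s) and s[left] == s[right]:
                left -= 1
                right += 1
            candidate = s[left + 1:right]
            if len(candidate) > len(best):
                best = candidate
    return best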
Maximum Product Subarray
The Maximum Product Subarray problem asks for the contiguous subarray with the largest product. Because a negative number can flip a small product into a large one, the DP tracks both the maximum and the minimum product ending at each position, in O(n) time.
def max_product_subarray(nums):
    # Track both the max and min product ending at each index,
    # since multiplying by a negative number swaps their roles.
    max_prod = min_prod = max_so_far = nums[0]
    for x in nums[1:]:
        prev_max = max_prod
        max_prod = max(x, x * prev_max, x * min_prod)
        min_prod = min(x, x * prev_max, x * min_prod)
        max_so_far = max(max_so_far, max_prod)
    return max_so_far
Largest Rectangle in a Histogram
The Largest Rectangle in a Histogram problem asks for the largest rectangle that fits under a histogram of bar heights. It is usually grouped with DP problems, but the standard O(n) solution precomputes, for every bar, the nearest shorter bar on its left and on its right using a monotonic stack; the best rectangle for each bar is its height times the width between those bounds.
def largest_rectangle_area(heights):
    # For each bar, find the nearest shorter bar to the left and right
    # with a monotonic stack; the bar then spans the width between them.
    n = len(heights)
    left = [0] * n    # index of nearest shorter bar to the left (-1 if none)
    right = [0] * n   # index of nearest shorter bar to the right (n if none)
    stack = []
    for i in range(n):
        while stack and heights[stack[-1]] >= heights[i]:
            stack.pop()
        left[i] = stack[-1] if stack else -1
        stack.append(i)
    stack = []
    for i in range(n - 1, -1, -1):
        while stack and heights[stack[-1]] >= heights[i]:
            stack.pop()
        right[i] = stack[-1] if stack else n
        stack.append(i)
    max_area = 0
    for i in range(n):
        max_area = max(max_area, heights[i] * (right[i] - left[i] - 1))
    return max_area
Egg Dropping Problem
The Egg Dropping Problem asks: with n eggs and a building of k floors, what is the minimum number of drops needed in the worst case to find the highest floor from which an egg can be dropped without breaking? The state dp[i][j] is the worst-case number of drops needed with i eggs and j floors, and the classic formulation below runs in O(n * k^2).
def egg_drop(n, k):
    # dp[i][j] = minimum worst-case number of drops with i eggs and j floors
    dp = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = 0   # no floors, no drops needed
        dp[i][1] = 1   # one floor, one drop
    for j in range(1, k + 1):
        dp[1][j] = j   # one egg: must try every floor from the bottom up
    for i in range(2, n + 1):
        for j in range(2, k + 1):
            dp[i][j] = float('inf')
            for x in range(1, j + 1):            # drop from floor x
                res = 1 + max(dp[i - 1][x - 1],  # egg breaks
                              dp[i][j - x])      # egg survives
                dp[i][j] = min(dp[i][j], res)
    return dp[n][k]
Counting Bits
The Counting Bits problem asks for the number of 1 bits in the binary representation of every integer from 0 to n. The DP observation is that the bit count of i equals the bit count of i >> 1 plus the lowest bit of i, which gives an O(n) solution.
def count_bits(n):
    # dp[i] = set bits in i = set bits in (i >> 1) plus the lowest bit of i
    dp = [0] * (n + 1)
    for i in range(1, n + 1):
        dp[i] = dp[i >> 1] + (i & 1)
    return dp
Perfect Squares
The Perfect Squares problem asks for the minimum number of perfect squares that sum to a given number n. The table dp[i] stores that minimum for every i up to n, and each entry tries every square not exceeding i, for O(n * sqrt(n)) total work.
def num_squares(n):
    # dp[i] = minimum number of perfect squares summing to i
    dp = [float('inf')] * (n + 1)
    dp[0] = 0
    for i in range(1, n + 1):
        j = 1
        while j * j <= i:
            dp[i] = min(dp[i], dp[i - j * j] + 1)
            j += 1
    return dp[n]
Partition Equal Subset Sum
The Partition Equal Subset Sum problem asks whether a set of numbers can be split into two subsets with equal sums. If the total sum is odd the answer is immediately no; otherwise the problem reduces to a subset-sum check for half the total, solvable with a 1D boolean DP in O(n * sum / 2).
def can_partition(nums):
    # Reduce to: is there a subset summing to half of the total?
    s = sum(nums)
    if s % 2 != 0:
        return False
    target = s // 2
    dp = [False] * (target + 1)   # dp[j] = some subset sums to j
    dp[0] = True
    for num in nums:
        for j in range(target, num - 1, -1):   # iterate downwards so each item is used once
            dp[j] |= dp[j - num]
    return dp[target]
Longest Common Substring
The Longest Common Substring problem asks for the longest contiguous block of characters shared by two strings (unlike LCS, the characters must be adjacent). The entry dp[i][j] is the length of the common suffix of s1[:i] and s2[:j]; the answer is the largest value anywhere in the table, computed in O(mn).
def longest_common_substring(s1, s2):
    # dp[i][j] = length of the common suffix of s1[:i] and s2[:j]
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    max_len = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                max_len = max(max_len, dp[i][j])
    return max_len
Unique Paths
The Unique Paths problem asks how many distinct paths lead from the top-left corner to the bottom-right corner of an m x n grid when you may only move down or right. Each cell's count is the sum of the counts of the cell above it and the cell to its left, so a simple O(mn) table solves it.
def unique_paths(m, n):
    # dp[i][j] = number of paths reaching cell (i, j) moving only down or right
    dp = [[0] * n for _ in range(m)]
    dp[0][0] = 1
    for i in range(m):
        for j in range(n):
            if i > 0:
                dp[i][j] += dp[i - 1][j]
            if j > 0:
                dp[i][j] += dp[i][j - 1]
    return dp[m - 1][n - 1]
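As a sanity check, the count also has a closed form: a path is a sequence of m - 1 down moves and n - 1 right moves, so the answer is the binomial coefficient C(m + n - 2, m - 1). A one-liner using Python's math.comb (available since Python 3.8; the function name is illustrative):

from math import comb

def unique_paths_formula(m, n):
    # Choose which of the (m - 1) + (n - 1) moves are "down".
    return comb(m + n - 2, m - 1)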
Subset Sum Problem
The Subset Sum problem asks whether some subset of a given set of integers adds up to a target sum. The table dp[i][j] records whether a subset of the first i numbers can reach sum j, giving an O(n * target) algorithm.
def subset_sum(nums, target):
    # dp[i][j] = True if some subset of the first i numbers sums to j
    n = len(nums)
    dp = [[False] * (target + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = True   # the empty subset sums to 0
    for i in range(1, n + 1):
        for j in range(1, target + 1):
            if nums[i - 1] <= j:
                dp[i][j] = dp[i - 1][j - nums[i - 1]] or dp[i - 1][j]
            else:
                dp[i][j] = dp[i - 1][j]
    return dp[n][target]
Longest Palindromic Subsequence
The Longest Palindromic Subsequence problem asks for the longest subsequence of a string that reads the same forwards and backwards (the characters need not be contiguous). The interval DP dp[i][j] stores the answer for s[i..j]: matching end characters add 2 to the inner interval's answer, otherwise we take the better of dropping either end, in O(n^2).
def longest_palindromic_subsequence(s):
    # dp[i][j] = length of the longest palindromic subsequence of s[i..j]
    n = len(s)
    dp = [[0] * n for _ in range(n)]
    for i in range(n):
        dp[i][i] = 1
    for l in range(2, n + 1):          # interval length
        for i in range(n - l + 1):
            j = i + l - 1
            if s[i] == s[j]:
                dp[i][j] = dp[i + 1][j - 1] + 2
            else:
                dp[i][j] = max(dp[i + 1][j], dp[i][j - 1])
    return dp[0][n - 1]
Unique Paths II
Unique Paths II is a variation of the Unique Paths problem in which some cells of the grid are blocked and cannot be stepped on. A blocked cell contributes zero paths; every other cell's count is again the sum of the counts from above and from the left, computed in O(mn).
def unique_paths_with_obstacles(obstacle_grid):
    # obstacle_grid[i][j] == 1 marks a blocked cell; blocked cells keep a count of 0.
    m, n = len(obstacle_grid), len(obstacle_grid[0])
    dp = [[0] * n for _ in range(m)]
    if obstacle_grid[0][0] == 0:
        dp[0][0] = 1
    for i in range(m):
        for j in range(n):
            if obstacle_grid[i][j] == 0:
                if i > 0:
                    dp[i][j] += dp[i - 1][j]
                if j > 0:
                    dp[i][j] += dp[i][j - 1]
    return dp[m - 1][n - 1]
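A quick example with a single obstacle in the middle of a 3 x 3 grid:

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(unique_paths_with_obstacles(grid))  # 2 paths route around the obstacle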
Conclusion
Dynamic programming is a powerful technique and an essential tool for competitive programming. The problems covered in this blog are only a sample of what the technique can handle, but by mastering these patterns and the principles behind them (defining a state, finding a recurrence, and reusing subproblem results), you will be well equipped to tackle harder problems.