Dynamic Programming Demystified: Python Knapsack & Shortest Path
This article introduces the core concepts of dynamic programming, explains its principle of breaking problems into subproblems with optimal substructure, and provides step‑by‑step Python implementations for the classic knapsack optimization and a shortest‑path graph algorithm, complete with illustrative code.
Dynamic Programming Algorithm
Dynamic Programming is an efficient algorithmic paradigm primarily used to solve optimization problems. Its core idea is to break a complex problem into multiple subproblems and derive the solution of the current problem from the solutions of already solved subproblems.
Dynamic programming is often applied to problems with overlapping subproblems and optimal substructure. Overlapping subproblems mean that the same subproblem is solved repeatedly; optimal substructure means the optimal solution of the problem can be derived from optimal solutions of its subproblems.
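To make "overlapping subproblems" concrete, here is a minimal sketch using the Fibonacci sequence, a standard illustration (not part of the examples below): the naive recursion solves the same subproblems exponentially many times, while caching each answer means every subproblem is solved once.

```python
from functools import lru_cache

# Naive recursion: fib_naive(n-1) and fib_naive(n-2) recompute
# the same smaller subproblems over and over.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Memoized version: each subproblem's answer is cached,
# so every fib_memo(k) is computed exactly once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(30))  # 832040
```

The cached version runs in linear time, which is the essence of dynamic programming: reuse, don't recompute.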
The basic steps of dynamic programming include:
Define subproblems: decompose the original problem into a series of subproblems.
Define states: determine the state of each subproblem so that solving subproblems yields the overall solution.
Define state transition equations: find relationships between subproblems and express them as a recursive or iterative formula.
Compute the solution: use the recursive or iterative formula to compute the optimal solution of the whole problem.
Classic applications include: knapsack problem, longest common subsequence, edit distance, graph coloring, shortest path, maximum subarray sum, etc.
Example: solving the knapsack problem with dynamic programming.
Assume a knapsack with capacity C and n items, where each item i has weight wi and value vi. The goal is to select a subset of items whose total weight does not exceed C while maximizing total value.
We solve this using dynamic programming with the following steps:
Define state f(i, j) as the maximum value achievable using the first i items with a knapsack capacity j.
State transition: when j ≥ wi, f(i, j) = max(f(i‑1, j), f(i‑1, j‑wi) + vi); when j < wi, item i cannot fit, so f(i, j) = f(i‑1, j).
Initialization: f(0, j) = 0 for all j, since selecting from zero items yields zero value.
Compute: iteratively calculate f(1,1) … f(n, C); f(n, C) is the optimal solution.
This process dramatically improves efficiency by reusing optimal solutions of subproblems.
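The recurrence above can be sketched directly as a two‑dimensional table, filling f row by row; the function name knapsack_table and the 2D layout are illustrative choices (the implementation in the next section uses a space‑optimized 1D array instead).

```python
def knapsack_table(weights, values, capacity):
    n = len(weights)
    # f[i][j]: max value using the first i items with capacity j; f[0][j] = 0.
    f = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v = weights[i - 1], values[i - 1]
        for j in range(capacity + 1):
            f[i][j] = f[i - 1][j]                        # skip item i
            if j >= w:
                f[i][j] = max(f[i][j], f[i - 1][j - w] + v)  # take item i
    return f[n][capacity]

print(knapsack_table([2, 3, 4, 5], [3, 4, 5, 6], 8))  # 10
```

Each cell depends only on the previous row, which is exactly what makes the 1D optimization possible.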
Python implementation of the knapsack problem
Solving the knapsack problem described above.
<code># Define knapsack data
weights = [2, 3, 4, 5]  # each item's weight
values = [3, 4, 5, 6]   # each item's value
max_weight = 8          # knapsack capacity

def knapsack(weights, values, max_weight):
    n = len(weights)              # number of items
    dp = [0] * (max_weight + 1)   # dp[j] = max total value for capacity j
    for i in range(n):
        # Iterate capacities downward so each item is used at most once
        for j in range(max_weight, weights[i] - 1, -1):
            dp[j] = max(dp[j], dp[j - weights[i]] + values[i])
    return dp[max_weight]         # return max total value

max_value = knapsack(weights, values, max_weight)
print(max_value)  # output: 10
</code>The function initializes a one‑dimensional array dp of length max_weight + 1, where dp[j] represents the maximum total value for capacity j. It then uses nested loops to update dp for each item i and capacity j, choosing the better of keeping the previous value or adding the current item's value. The final result dp[max_weight] gives the maximum achievable value, which is 10 for the example.
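Note that the inner loop runs downward over capacities precisely so each item is counted at most once: dp[j - weights[i]] still refers to the previous item's row. If the same loop instead runs upward, the code computes the unbounded‑knapsack variant, where every item may be taken any number of times. A sketch of that variant (the name knapsack_unbounded is ours):

```python
def knapsack_unbounded(weights, values, capacity):
    dp = [0] * (capacity + 1)
    for i in range(len(weights)):
        # Iterating j upward lets dp[j - weights[i]] already include
        # item i, so each item can be reused any number of times.
        for j in range(weights[i], capacity + 1):
            dp[j] = max(dp[j], dp[j - weights[i]] + values[i])
    return dp[capacity]

# With repetition allowed, four copies of the weight-2 item fit:
print(knapsack_unbounded([2, 3, 4, 5], [3, 4, 5, 6], 8))  # 12
```

The only difference from the 0/1 version is the direction of the inner loop.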
Python implementation of the shortest‑path problem
Finding the shortest path between two points in a weighted graph.
<code># Define graph data structure
graph = {
    '1': {'2': 7, '3': 9, '6': 14},
    '2': {'3': 10, '4': 15},
    '3': {'4': 11, '6': 2},
    '4': {'5': 6},
    '5': {'6': 9},
    '6': {'5': 9},
}

# Define start and end nodes
start_node = '1'
end_node = '5'

def shortest_path(graph, start_node, end_node):
    # Initialize shortest-distance dict and visited set
    shortest_distance = {}
    visited_nodes = set()
    for node in graph:
        shortest_distance[node] = float('inf')
    shortest_distance[start_node] = 0
    while len(visited_nodes) != len(graph):
        # Choose the unvisited node with the smallest tentative distance
        current_node = None
        for node in graph:
            if node not in visited_nodes:
                if current_node is None or shortest_distance[node] < shortest_distance[current_node]:
                    current_node = node
        # Relax the distances of the current node's neighbors
        for neighbor_node, distance in graph[current_node].items():
            if neighbor_node not in visited_nodes:
                tentative_distance = shortest_distance[current_node] + distance
                if tentative_distance < shortest_distance[neighbor_node]:
                    shortest_distance[neighbor_node] = tentative_distance
        visited_nodes.add(current_node)
    return shortest_distance[end_node]

shortest_distance = shortest_path(graph, start_node, end_node)
print(shortest_distance)  # output: 20
</code>The example defines a graph with nodes and edge weights, then applies Dijkstra's algorithm, which shares dynamic programming's core idea of reusing previously computed optimal subresults, to find the shortest distance from the start node to the end node. It maintains a dictionary shortest_distance where each entry stores the current best known distance from the start node, and a set visited_nodes to track processed nodes. The algorithm repeatedly selects the unvisited node with the smallest tentative distance, relaxes its outgoing edges, and marks it as visited until all nodes are processed.
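For larger graphs, scanning every node to find the minimum on each iteration becomes the bottleneck. The same distances can be computed with a priority queue; the following heapq‑based sketch is one common formulation (the function name dijkstra is ours, not from the code above), and it assumes non‑negative edge weights.

```python
import heapq

graph = {
    '1': {'2': 7, '3': 9, '6': 14},
    '2': {'3': 10, '4': 15},
    '3': {'4': 11, '6': 2},
    '4': {'5': 6},
    '5': {'6': 9},
    '6': {'5': 9},
}

def dijkstra(graph, start, end):
    # Min-heap of (distance, node); the smallest tentative distance
    # is popped in O(log n) instead of scanned in O(n).
    dist = {start: 0}
    heap = [(0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue          # stale entry: a shorter path was already found
        visited.add(node)
        if node == end:
            return d          # first pop of end is its shortest distance
        for neighbor, weight in graph.get(node, {}).items():
            nd = d + weight
            if nd < dist.get(neighbor, float('inf')):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return float('inf')       # end unreachable from start

print(dijkstra(graph, '1', '5'))  # 20
```

Returning as soon as the end node is popped also lets the search stop early rather than processing the whole graph.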
Model Perspective
Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".