Coding interview prep

April 8, 2026

Dynamic Programming Explained Visually: Memoization, Tabulation, and the Patterns That Stick

Dynamic programming is not hard. The way it's taught is hard. Most explanations show you the finished code without showing how anyone arrived at it. This guide works differently — it shows you the thinking, step by step, with the dp table filling up in front of you.

The classic DP struggle: you read a solution, nod along, close the tab — and freeze the next time you see a similar problem. This happens because DP solutions are explained backwards. The final optimized code is presented without showing the reasoning that led there.

To understand dynamic programming, you need to see it think. Watch the subproblems get solved one by one. Watch the dp array fill up cell by cell. Watch memoization collapse a tree of redundant calls into a flat cache. That's what this guide does.

What dynamic programming actually is

Dynamic programming is a technique for solving problems by breaking them into smaller subproblems, solving each subproblem once, and storing the result so it doesn't get recomputed. That's the complete definition.

Two properties must be present for DP to apply:

Optimal substructure

The optimal solution to the problem contains the optimal solutions to its subproblems. If you know the best way to make change for $3 and $4, you can use that to solve $7.

Overlapping subproblems

The same subproblems appear multiple times in the recursive solution. Without DP, computing fib(7) naively evaluates fib(3) five times. With DP, you compute it once and store the result.

The plain English version

If your recursive solution solves the same subproblem more than once, and the subproblem results combine into a correct answer, you have a DP problem. The optimization is just caching those repeated computations.

Memoization vs tabulation — the visual difference

There are two ways to implement DP. They produce the same result but think about the problem in opposite directions. Fibonacci is the simplest example to see them both.

Memoization — top-down

Start with the original recursive solution. Add a cache. Before computing a subproblem, check if the answer is already in the cache. If yes, return it immediately. If no, compute it, store it, and return it.

function fib(n, memo = {}) {
  if (n in memo) return memo[n];   // cache hit — no recomputation
  if (n <= 1) return n;
  memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
  return memo[n];
}

With memoization, fib(5) calls fib(4) and fib(3). fib(4) calls fib(3) and fib(2). But the second call to fib(3) is already in the cache — it returns in O(1). The exponential tree collapses into a linear chain of unique calls.

Tabulation — bottom-up

Forget recursion. Build a dp array starting from the base cases and fill it up to the answer. You're solving subproblems in order, smallest first, until you reach the target.

function fib(n) {
  const dp = new Array(n + 1).fill(0);
  dp[0] = 0;
  dp[1] = 1;
  for (let i = 2; i <= n; i++) {
    dp[i] = dp[i - 1] + dp[i - 2];  // each cell uses the two before it
  }
  return dp[n];
}

The dp array for fib(7) filling up — each cell depends on the two before it:

  dp[0]  dp[1]  dp[2]  dp[3]  dp[4]  dp[5]  dp[6]  dp[7]
    0      1      1      2      3      5      8     13

dp[i] = dp[i-1] + dp[i-2] · Fibonacci bottom-up · n = 7 → answer: dp[7] = 13

Memoization vs Tabulation — when to use each

Property                  Memoization                        Tabulation
Direction                 Top-down (recursive)               Bottom-up (iterative)
Subproblems computed      Only the ones you actually reach   All subproblems up to n
Space                     Call stack + cache                 dp array only (no stack)
Code style                Closer to the naive recursion      Explicit loop, more compact
Interview preference      Easier to derive on the fly        More efficient in practice

In interviews, start with memoization if you're unsure — it's easier to derive from the recursive solution you already wrote. Convert to tabulation as a follow-up if the interviewer asks for an iterative approach or to reduce stack space.
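If the follow-up is specifically about space, there's one more step: the Fibonacci tabulation only ever reads the two previous cells, so the dp array can be dropped entirely. A minimal sketch of that space-optimized version:

```javascript
// Fibonacci in O(1) space: keep only the last two dp values
// instead of the full dp array.
function fib(n) {
  if (n <= 1) return n;
  let prev = 0, curr = 1;  // dp[i-2] and dp[i-1]
  for (let i = 2; i <= n; i++) {
    [prev, curr] = [curr, prev + curr];  // slide the window forward
  }
  return curr;
}
```

The same two-variable trick applies to any linear DP whose recurrence only looks back a fixed number of cells.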

Dynamic programming examples, step by step

The fastest way to build DP intuition is to watch the dp array fill up on concrete examples. Here are the three problems that every interviewer expects you to understand deeply.

Coin Change — minimum coins to reach a target

Given coins [1, 3, 4] and amount 6, find the minimum number of coins to make exactly 6.

The recurrence: dp[i] = min(dp[i - coin] + 1) for each coin. For each amount i, try every coin and take the minimum.

dp array filling up — coins = [1, 3, 4], amount = 6:

  dp[0]  dp[1]  dp[2]  dp[3]  dp[4]  dp[5]  dp[6]
    0      1      2      1      1      2      2

dp[6] = 2 — optimal: coins 3+3. Each cell = fewest coins to make that amount.

function coinChange(coins, amount) {
  const dp = new Array(amount + 1).fill(Infinity);
  dp[0] = 0;  // base case: 0 coins to make amount 0

  for (let i = 1; i <= amount; i++) {
    for (const coin of coins) {
      if (coin <= i) {
        dp[i] = Math.min(dp[i], dp[i - coin] + 1);
      }
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}

House Robber — non-adjacent maximum sum

Given an array of house values, find the maximum amount you can rob without robbing two adjacent houses. This is the canonical linear DP problem — the decision at each house is binary: rob it or skip it.

The recurrence: dp[i] = max(dp[i-1], dp[i-2] + nums[i]). At each house, the best you can do is either skip it (take dp[i-1]) or rob it and add its value to the best result two houses back.

Houses = [2, 7, 9, 3, 1]:

  dp[0]  dp[1]  dp[2]  dp[3]  dp[4]
    2      7     11     11     12

dp[4] = 12 — optimal: rob houses 0, 2, 4 (values 2 + 9 + 1 = 12)
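The recurrence maps one-to-one onto a loop; a minimal sketch (as with Fibonacci, two rolling variables would also work in place of the full dp array):

```javascript
function rob(nums) {
  if (nums.length === 0) return 0;
  if (nums.length === 1) return nums[0];
  const dp = new Array(nums.length).fill(0);
  dp[0] = nums[0];
  dp[1] = Math.max(nums[0], nums[1]);
  for (let i = 2; i < nums.length; i++) {
    // skip house i (keep dp[i-1]) or rob it (dp[i-2] + nums[i])
    dp[i] = Math.max(dp[i - 1], dp[i - 2] + nums[i]);
  }
  return dp[nums.length - 1];
}
```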

Longest Common Subsequence — the 2D dp table

LCS introduces the 2D dp table — a grid where rows represent one string and columns represent the other. Each cell dp[i][j] stores the LCS length for the first i characters of string A and the first j characters of string B.

The recurrence: if characters match, dp[i][j] = dp[i-1][j-1] + 1. If they don't, dp[i][j] = max(dp[i-1][j], dp[i][j-1]). The table fills left-to-right, top-to-bottom, and the answer sits in the bottom-right cell.
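A minimal tabulated sketch of that recurrence, with a padded row and column of zeros for the empty-string base case:

```javascript
function lcs(a, b) {
  const m = a.length, n = b.length;
  // (m+1) x (n+1) table; row 0 and column 0 represent the empty string
  const dp = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (a[i - 1] === b[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1] + 1;  // match: extend the diagonal
      } else {
        dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);  // skip a char
      }
    }
  }
  return dp[m][n];  // bottom-right cell holds the answer
}
```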

Why seeing the 2D table matters

Reading the LCS recurrence is confusing. Watching the 2D table fill up — cell by cell, with each cell depending on its top, left, and diagonal neighbors — makes the logic immediately clear. This is the difference between memorizing a formula and understanding a process.

The 5 DP patterns you need for coding interviews

Most DP problems in interviews are variations of five structural patterns. Recognizing the pattern is the hardest part — the implementation follows naturally once you see it.

1. Linear DP

State depends on one or two previous states in a 1D array. The simplest and most common pattern.

Classic problems

  • Climbing Stairs (LC 70)
  • House Robber (LC 198)
  • Maximum Subarray (LC 53)
  • Jump Game II (LC 45)

Signal in the problem: The problem asks for a max/min/count across a sequence where each step depends on previous steps.
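Climbing Stairs is the smallest instance of this pattern — the number of ways to reach step i is the sum of the ways to reach the two steps below it. A minimal sketch, already space-optimized to two variables:

```javascript
function climbStairs(n) {
  if (n <= 2) return n;
  let twoBack = 1, oneBack = 2;  // ways to reach steps 1 and 2
  for (let i = 3; i <= n; i++) {
    [twoBack, oneBack] = [oneBack, twoBack + oneBack];
  }
  return oneBack;
}
```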

2. Knapsack (0/1 and Unbounded)

Choose items with weights and values to maximize value within a capacity. Each item can be taken once (0/1) or unlimited times (unbounded).

Classic problems

  • Coin Change (LC 322) — unbounded knapsack
  • Partition Equal Subset Sum (LC 416)
  • Target Sum (LC 494)
  • 0/1 Knapsack — classic

Signal in the problem: The problem involves selecting items to hit a target sum or maximize value with a constraint. 'Can we reach exactly X?' is almost always knapsack.
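The "can we reach exactly X?" form of this pattern uses a boolean dp array. A minimal sketch of Partition Equal Subset Sum — 0/1 knapsack, so the inner loop runs backwards to keep each number from being reused:

```javascript
function canPartition(nums) {
  const total = nums.reduce((a, b) => a + b, 0);
  if (total % 2 !== 0) return false;       // odd sum can't split evenly
  const target = total / 2;
  const dp = new Array(target + 1).fill(false);
  dp[0] = true;                            // empty subset reaches sum 0
  for (const num of nums) {
    for (let s = target; s >= num; s--) {  // backwards: each num used once
      dp[s] = dp[s] || dp[s - num];
    }
  }
  return dp[target];
}
```

Iterating the sums forwards instead would let the same number contribute twice — that single loop direction is the entire difference between 0/1 and unbounded knapsack.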

3. String DP

2D dp table where rows and columns represent characters from two strings. Answers questions about similarity, transformation, or subsequences.

Classic problems

  • Longest Common Subsequence (LC 1143)
  • Edit Distance (LC 72)
  • Longest Palindromic Subsequence (LC 516)
  • Regular Expression Matching (LC 10)

Signal in the problem: The problem involves two strings and asks for the minimum edits, longest shared subsequence, or whether one string can be transformed into another.

4. Grid DP

2D dp table where each cell depends on its top and left neighbors. The grid itself is the state space.

Classic problems

  • Unique Paths (LC 62)
  • Minimum Path Sum (LC 64)
  • Maximal Square (LC 221)
  • Dungeon Game (LC 174)

Signal in the problem: The problem involves moving through a 2D grid from a start to an end point, optimizing a cost or counting paths.
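Unique Paths shows the top-and-left dependency in its purest form; a minimal sketch:

```javascript
function uniquePaths(m, n) {
  // dp[r][c] = number of paths to cell (r, c);
  // the first row and column have exactly one path each
  const dp = Array.from({ length: m }, () => new Array(n).fill(1));
  for (let r = 1; r < m; r++) {
    for (let c = 1; c < n; c++) {
      dp[r][c] = dp[r - 1][c] + dp[r][c - 1];  // arrive from above or left
    }
  }
  return dp[m - 1][n - 1];
}
```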

5. Interval DP

State depends on a range [i, j]. Usually requires trying all possible split points k between i and j. The most advanced pattern — rarely appears outside senior-level interviews.

Classic problems

  • Burst Balloons (LC 312)
  • Strange Printer (LC 664)
  • Minimum Cost to Cut a Stick (LC 1547)
  • Palindrome Partitioning II (LC 132)

Signal in the problem: The problem asks for the optimal way to split, partition, or process a sequence where the cost depends on the range being processed.

Is this a DP problem? The decision framework

When you see a new problem in an interview, run through these two questions before deciding on DP. Verbalizing this process shows the interviewer you have a systematic approach — not just pattern-matched to "use DP."

The two-question test

1. Can the problem be broken into smaller subproblems of the same type?

Can you express the answer to the full problem in terms of answers to smaller versions of the same problem? If the answer to "minimum coins for amount 6" depends on the answer to "minimum coins for amount 3" — yes.

2. Do those subproblems overlap?

Would a naive recursive solution compute the same subproblem multiple times? Write the recursion tree for a small example. If the same node appears more than once — yes. DP caches those repeated computations.

Both yes? → DP. Only the first? → Divide and Conquer (merge sort, binary search). Neither? → Greedy or direct computation.

Additional signals to watch for

  • The problem asks for the maximum, minimum, longest, or shortest of something — classic DP output.
  • The problem asks "how many ways" to do something — counting DP.
  • The problem asks whether something is possible (true/false) — boolean DP, often knapsack.
  • A greedy approach almost works but fails on edge cases — DP is usually the correct solution.
  • The constraint on n is ≤ 1000 or ≤ 10000 — O(n²) DP is feasible, O(2ⁿ) backtracking is not.

How to study DP so it actually sticks

DP has the highest rate of "I understood the solution but couldn't reproduce it" of any topic in interview prep. The reason is almost always the same: people study the code, not the process. Here's what works instead:

  1. Write the brute-force recursion first. Don't jump to the DP solution. Write the naive exponential recursive solution without caching. Once it works, the memoized version is just adding a cache. Once the memoized version works, the tabulated version is just reversing the direction. Each step is a small, motivated change.
  2. Draw the dp array or dp table before writing code. For 1D problems, draw the array and fill in values by hand for a small input. For 2D problems, draw the grid. The recurrence becomes obvious from the pattern of how cells depend on each other. This step takes 2 minutes and saves 20 minutes of confusion.
  3. Watch the execution, not just the code. Static code shows you the final state. Watching the dp array fill up step by step — cell by cell, with each transition animated — builds the mental model that makes DP feel intuitive, not mechanical. This is the gap between candidates who memorize DP and candidates who understand it.
  4. Practice pattern recognition, not problem memorization. After solving a problem, write one sentence identifying the pattern: "This is unbounded knapsack because I can use each coin unlimited times." Train the recognition reflex. In an interview, you'll have 5 minutes to identify the approach — that sentence is what you're building toward.
  5. Do problems in clusters, not randomly. Solve 5 linear DP problems in a row before moving to knapsack. Solve 5 knapsack problems before moving to string DP. Interleaving problem types feels more comprehensive but produces shallow pattern recognition. Blocked practice produces transfer.

Frequently asked questions

What is dynamic programming in simple terms?

Dynamic programming is a technique for solving problems that can be broken into overlapping subproblems. Instead of solving the same subproblem multiple times, DP solves it once and stores the result — either in a cache (memoization) or in a dp array built from the bottom up (tabulation). The result is an algorithm that's exponentially faster than the naive recursive solution.

What is the difference between memoization and tabulation?

Memoization is top-down: start with the full problem, recurse to subproblems, and cache results as you go. Only the subproblems you actually need get computed. Tabulation is bottom-up: start from the base cases and iteratively fill a dp array until you reach the answer. Tabulation avoids recursion overhead and is generally more space-efficient. Both produce the same result — the choice is usually about which is easier to derive for a given problem.

How do I know if a problem requires dynamic programming?

Apply the two-question test: (1) Can the problem be broken into subproblems of the same type? (2) Do those subproblems overlap — would a naive recursive solution compute the same thing multiple times? If yes to both, DP is the right approach. Additional signals: the problem asks for a max, min, count, or "is it possible?" result; the constraint on n suggests O(n²) is acceptable; a greedy approach almost works but fails on edge cases.

Is dynamic programming hard to learn?

DP is hard to learn the way it's usually taught — by reading finished solutions. It becomes significantly more approachable when you (1) always start from the brute-force recursion, (2) draw the dp array by hand before coding, and (3) watch the execution step by step rather than reading static code. The underlying idea is not complex. The difficulty is that most resources skip the derivation process and show only the result.

What are the most important dynamic programming problems for coding interviews?

The ten problems that cover the most ground: Climbing Stairs, House Robber, Coin Change, Longest Common Subsequence, Edit Distance, Unique Paths, Longest Increasing Subsequence, Partition Equal Subset Sum, Word Break, and Maximum Product Subarray. These ten cover the linear, knapsack, string, and grid patterns — interval DP appears rarely enough to study separately — and provide the mental models needed to handle novel problems.

Summary

Dynamic programming is not a collection of tricks to memorize — it's a systematic approach to a specific class of problems. The key insight is always the same: if your recursive solution solves the same subproblem more than once, cache the result. If you can identify the recurrence, you can build the dp array.

The difference between candidates who find DP approachable and candidates who fear it is not intelligence — it's the study method. Watching DP execute visually, drawing the dp table before coding, and practicing by pattern clusters produces transfer. Reading finished solutions produces fragile recall.

Start with Fibonacci. Watch it go from exponential recursion to memoization to tabulation. Once that transition clicks, every other DP problem is a variation of the same story.

Explore more

See dynamic programming execute — cell by cell

Expora's visual debugger shows you the dp array filling up in real time — step by step, with your own code, in any language. Watch memoization cache subproblems as they're computed. Watch tabulation build the answer from the base case up. This is the visualization that makes DP click.