A comprehensive guide for senior developers preparing for Google algorithm interviews. This section focuses on WHY things work, not just templates.
Dynamic Programming: From First Principles
What DP Actually Is
Dynamic Programming is not a specific algorithm or a template to memorize. It’s a way of thinking about problems.
The core insight: "Can I express the answer to this problem in terms of answers to smaller versions of the same problem?"
If yes AND the subproblems overlap (same computation repeated), DP applies. That’s it. Everything else is implementation detail.
The Two Conditions
- Optimal Substructure: The optimal solution contains optimal solutions to subproblems. If you have the best answer for n-1, the best answer for n is built from it.
- Overlapping Subproblems: The same subproblem is solved multiple times. Without this, plain recursion is fine (divide and conquer).
Why This Matters
A senior engineer doesn’t memorize DP patterns. Instead, when you see a recursive solution, ask: "Am I computing the same thing twice?" If yes, you just found the opportunity for DP.
Fibonacci without DP:
fib(5)
├─ fib(4)
│  ├─ fib(3)
│  │  ├─ fib(2)
│  │  └─ fib(1)
│  └─ fib(2) ← computed again!
└─ fib(3) ← computed again!
The recursion tree shows that fib(3) is computed 2 times and fib(2) is computed 3 times (in the fully expanded tree). This is O(2^n) — exponential!
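For reference, this is the naive version that produces that tree (the name `fibNaive` is just illustrative):

```javascript
// Naive recursion: recomputes the same subproblems exponentially many times
function fibNaive(n) {
  if (n <= 1) return n;
  return fibNaive(n - 1) + fibNaive(n - 2);
}
```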
With memoization, each unique input is computed exactly once:
fib(5) calls fib(4) and fib(3)
fib(4) calls fib(3) and fib(2)
fib(3) calls fib(2) and fib(1)
...
After computing fib(3) the first time, every subsequent call returns the cached value in O(1). Total: O(n) time, O(n) space.
Top-Down (Memoization) vs Bottom-Up (Tabulation)
Both are "DP" — they solve overlapping subproblems efficiently. They differ in approach and have trade-offs.
| Aspect | Top-Down (Memoization) | Bottom-Up (Tabulation) |
|---|---|---|
| Direction | Start big, recurse down | Start small, build up |
| Control Flow | Recursive | Iterative |
| Space | O(recursion depth) + cache | Cache only (can optimize) |
| When to Use | When you only need some subproblems | When you need many/all subproblems |
| Interview Approach | Start here (natural thinking) | Optimize to this if asked |
Top-Down (Memoization)
You write a recursive function, but before computing, check the cache. If not cached, compute and store.
WHY: It’s how humans naturally think. "To solve fib(5), I need fib(4) and fib(3). Let me solve those first."
WHY optimal: You only compute subproblems you actually need. If you ask for fib(5), you never touch fib(100).
Code pattern:
const memo = {};
function fib(n) {
if (n in memo) return memo[n];
if (n <= 1) return n;
memo[n] = fib(n - 1) + fib(n - 2);
return memo[n];
}
Bottom-Up (Tabulation)
You iterate from the smallest subproblem to the largest, filling a table.
WHY: No recursion overhead (no call stack frames). No risk of stack overflow. Easy to optimize space.
Trade-off: You compute all subproblems up to your target, even if some aren’t needed.
Code pattern:
function fib(n) {
if (n <= 1) return n;
const dp = [0, 1];
for (let i = 2; i <= n; i++) {
dp[i] = dp[i - 1] + dp[i - 2];
}
return dp[n];
}
// Can further optimize space:
function fib(n) {
let [prev, curr] = [0, 1];
for (let i = 2; i <= n; i++) {
[prev, curr] = [curr, prev + curr];
}
return curr;
}
When to Pick Which
In a 45-minute interview:
1. Start with top-down if it’s natural. It’s easier to explain and less error-prone.
2. If the interviewer asks about space optimization or you hit recursion limits, convert to bottom-up.
3. Sometimes you can do both: show memoization first, then say "here’s the iterative version" and explain why the space is better.
The DP Problem-Solving Framework
When you see a new problem, follow this mental checklist:
| Step | Example: Coin Change (minimum coins to make amount) |
|---|---|
| Step 1: Define the state | dp[i] = minimum coins needed to make amount i |
| Step 2: Define the recurrence | dp[i] = min(dp[i - coin] + 1) for all coins ≤ i |
| Step 3: Define the base case | dp[0] = 0 (zero coins needed for amount 0) |
| Step 4: Define the answer | dp[amount] is the answer |
| Step 5: Order of computation | Bottom-up from 1 to amount (dependency order) |
| Step 6: Optimize space | Could use 1D array instead of 2D for some problems |
This framework is mechanical. If you can answer all 6 questions, you can code the solution.
Let’s apply it to House Robber:
| Question | Answer |
|---|---|
| State | dp[i] = maximum money robbing houses 0 to i |
| Recurrence | dp[i] = max(dp[i-1], dp[i-2] + nums[i]) |
| Reasoning | For house i, either skip it (keep dp[i-1]) or rob it (add nums[i] to the best of i-2, since you can’t rob i-1) |
| Base Case | dp[0] = nums[0], dp[1] = max(nums[0], nums[1]) |
| Answer | dp[n-1] |
| Order | Iterate from 2 to n-1 |
The recurrence is the KEY. Write it clearly, then the code almost writes itself.
1D DP Problems (Most Common at Google)
These are the bread and butter of algorithm interviews.
Climbing Stairs
Problem: You’re at step 0. Each move, you can climb 1 or 2 steps. How many distinct ways to reach step n?
Define the State
dp[i] = number of ways to reach step i
Define the Recurrence
To reach step i, you either:
- Came from step i-1 (climbed 1)
- Came from step i-2 (climbed 2)
So: dp[i] = dp[i-1] + dp[i-2]
This is exactly Fibonacci! Because you have two choices at each step, the branching matches fib.
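The direct translation of that recurrence into a full dp array might look like this (the name `climbStairsTable` is just illustrative):

```javascript
// dp[i] = number of ways to reach step i
function climbStairsTable(n) {
  const dp = Array(n + 1).fill(0);
  dp[0] = 1; // one way to stand at the start: do nothing
  dp[1] = 1; // one way to reach step 1: a single 1-step
  for (let i = 2; i <= n; i++) {
    dp[i] = dp[i - 1] + dp[i - 2]; // arrived via a 1-step or a 2-step
  }
  return dp[n];
}
```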
Space Optimization
Notice we only need the previous two values. No need to store the entire array:
function climbStairs(n) {
if (n <= 1) return 1;
let [prev, curr] = [1, 1];
for (let i = 2; i <= n; i++) {
[prev, curr] = [curr, prev + curr];
}
return curr;
}
Interview skill: Start with a dp[i] array. Then say "We only use dp[i-1] and dp[i-2], so we can optimize space to O(1)." Interviewers love this.
House Robber
Problem: Houses in a line. Rob a house = get money, but can’t rob adjacent houses. Maximize money.
Why This Is Tricky
Greedy fails! If houses = [5, 3, 4, 11, 2], greedy picks 5, then 4, then 2 = 11. Optimal is 5 + 11 = 16.
DP is the tool: "For each house, decide to rob or skip."
Define the State
dp[i] = maximum money robbing houses 0 to i
Define the Recurrence
For house i:
- Rob it: get nums[i] + dp[i-2] (the best we could do without i-1)
- Skip it: get dp[i-1]
So: dp[i] = max(dp[i-1], dp[i-2] + nums[i])
Base Case
- dp[0] = nums[0] (only one house, rob it)
- dp[1] = max(nums[0], nums[1]) (two houses, rob the one with more money)
Interview Presentation
"For each house, I have two choices: rob it or skip it. If I rob it, I get the money from this house plus the best I could do up to 2 houses back (since I can’t rob adjacent). If I skip it, I get the best from the previous house. I take the max."
This explanation shows understanding of WHY the recurrence is correct.
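That explanation translates directly into code; a sketch with the usual O(1)-space rolling variables (the name `rob` is just illustrative):

```javascript
function rob(nums) {
  let prev2 = 0; // best total up to house i-2
  let prev1 = 0; // best total up to house i-1
  for (const num of nums) {
    // Rob this house (prev2 + num) or skip it (prev1)
    const curr = Math.max(prev1, prev2 + num);
    prev2 = prev1;
    prev1 = curr;
  }
  return prev1;
}
```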
Coin Change
Problem: Coins of different denominations. Minimum coins to make amount?
Why Greedy Fails
Coins = [1, 3, 4], amount = 6. Greedy picks 4, then 1, then 1 = 3 coins. Optimal is 3 + 3 = 2 coins.
DP: For each amount, try removing each coin and take the min.
Define the State
dp[i] = minimum coins to make amount i
Define the Recurrence
For amount i, try each coin c:
- If c ≤ i: dp[i] = min(dp[i], dp[i - c] + 1) (the +1 is the coin we just used)
Base case: dp[0] = 0
Why This Works
We build up from amount 0. For each amount, we look at all coins we can use, and pick the one that leads to the fewest total coins.
This is correct because:
1. We consider all options (every coin)
2. We pick the minimum
3. We build from smaller subproblems
Space Note
This uses 1D space because the state only depends on smaller amounts, not on the coin dimension.
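A bottom-up sketch of this recurrence (the name `coinChange` is just illustrative; it returns -1 if the amount is unreachable):

```javascript
function coinChange(coins, amount) {
  // dp[i] = minimum coins to make amount i; Infinity marks "unreachable"
  const dp = Array(amount + 1).fill(Infinity);
  dp[0] = 0; // zero coins needed for amount 0
  for (let i = 1; i <= amount; i++) {
    for (const c of coins) {
      if (c <= i && dp[i - c] + 1 < dp[i]) {
        dp[i] = dp[i - c] + 1; // use coin c on top of the best for i - c
      }
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}
```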
Longest Increasing Subsequence (LIS)
Problem: Find the length of the longest strictly increasing subsequence of an array. "Subsequence" means not necessarily contiguous.
Example: [3, 10, 2, 1, 20] → LIS is [3, 10, 20] (length 3).
Naive Approach: O(n²) DP
Define: dp[i] = length of LIS ending at index i
Key insight: To compute dp[i], look at all j < i where arr[j] < arr[i]. Take max(dp[j]) + 1.
function lis(arr) {
const dp = Array(arr.length).fill(1);
for (let i = 1; i < arr.length; i++) {
for (let j = 0; j < i; j++) {
if (arr[j] < arr[i]) {
dp[i] = Math.max(dp[i], dp[j] + 1);
}
}
}
return Math.max(...dp);
}
O(n²) time, O(n) space. Good enough for most interviews.
Optimal Approach: O(n log n) with Binary Search
There’s an O(n log n) solution using binary search and a clever observation, but it’s complex to code under time pressure. Mention it if the interviewer asks to optimize.
For the interview: stick with O(n²) and clearly explain the state and recurrence.
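If you are asked to optimize, here is a sketch of the standard tails-array idea behind the O(n log n) solution: tails[k] holds the smallest possible tail of an increasing subsequence of length k + 1, and each element binary-searches for its position (the name `lengthOfLIS` is just illustrative):

```javascript
function lengthOfLIS(arr) {
  // tails[k] = smallest tail of any increasing subsequence of length k + 1
  const tails = [];
  for (const x of arr) {
    // Binary search: leftmost index with tails[idx] >= x
    let lo = 0, hi = tails.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (tails[mid] < x) lo = mid + 1;
      else hi = mid;
    }
    tails[lo] = x; // replace, or append if lo === tails.length
  }
  return tails.length;
}
```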
Word Break
Problem: Given a string and a dictionary of words, can the string be segmented using dictionary words?
Example: s = "leetcode", dict = ["leet", "code"] → true (leet + code)
Define the State
dp[i] = true if s[0…i-1] can be segmented using dictionary words
Define the Recurrence
For position i, look back at all previous positions j < i:
- If dp[j] is true AND s[j…i-1] is in the dictionary, then dp[i] is true
So: dp[i] = OR of (dp[j] AND isInDict(s[j…i-1])) for all j < i
Base case: dp[0] = true (empty string is always segmentable)
Optimization
Use a Set for O(1) dictionary lookup:
function wordBreak(s, wordDict) {
const wordSet = new Set(wordDict);
const dp = Array(s.length + 1).fill(false);
dp[0] = true;
for (let i = 1; i <= s.length; i++) {
for (let j = 0; j < i; j++) {
if (dp[j] && wordSet.has(s.substring(j, i))) {
dp[i] = true;
break; // found one valid split for dp[i]; no need to try more j
}
}
}
return dp[s.length];
}
2D DP Problems
These expand to two dimensions but follow the same framework.
Unique Paths
Problem: Robot at top-left of m×n grid. Can move right or down. Count paths to bottom-right.
Why This Matters
2D DP shows the framework scales. It’s not harder, just more dimensions.
Define the State
dp[i][j] = number of ways to reach cell (i, j)
Define the Recurrence
Cell (i, j) can be reached from:
- The cell above (i-1, j)
- The cell to the left (i, j-1)
So: dp[i][j] = dp[i-1][j] + dp[i][j-1]
Base Case
- First row: dp[0][j] = 1 (only one way: move right the whole way)
- First column: dp[i][0] = 1 (only one way: move down the whole way)
Space Optimization
We only use the previous row and current row, so we can use 2 rows instead of the full grid:
function uniquePaths(m, n) {
  let prev = Array(n).fill(1);
  for (let i = 1; i < m; i++) {
    const curr = Array(n).fill(1);
    for (let j = 1; j < n; j++) {
      curr[j] = prev[j] + curr[j - 1];
    }
    prev = curr;
  }
  return prev[n - 1];
}
// Or even 1D:
function uniquePaths(m, n) {
const dp = Array(n).fill(1);
for (let i = 1; i < m; i++) {
for (let j = 1; j < n; j++) {
dp[j] += dp[j - 1];
}
}
return dp[n - 1];
}
Longest Common Subsequence (LCS)
Problem: Given two strings, find the longest sequence of characters that appear in both (in order, but not necessarily contiguous).
Example: "abc" and "adc" → LCS is "ac" (length 2)
Define the State
dp[i][j] = length of LCS of s1[0…i-1] and s2[0…j-1]
Define the Recurrence
For s1[i-1] and s2[j-1]:
- If they match: dp[i][j] = dp[i-1][j-1] + 1
- If they don’t match: dp[i][j] = max(dp[i-1][j], dp[i][j-1])
(either skip from s1 or skip from s2)
Base case: dp[0][j] = 0 and dp[i][0] = 0 (empty string has LCS 0 with anything)
Implementation
function lcs(s1, s2) {
const m = s1.length, n = s2.length;
const dp = Array(m + 1).fill(null).map(() => Array(n + 1).fill(0));
for (let i = 1; i <= m; i++) {
for (let j = 1; j <= n; j++) {
if (s1[i - 1] === s2[j - 1]) {
dp[i][j] = dp[i - 1][j - 1] + 1;
} else {
dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
}
}
}
return dp[m][n];
}
Edit Distance (Levenshtein Distance)
Problem: Minimum operations (insert, delete, replace) to convert s1 to s2.
Example: "horse" to "ros" needs 3 operations: replace 'h' with 'r', delete 'r', delete 'e'.
Define the State
dp[i][j] = min operations to convert s1[0…i-1] to s2[0…j-1]
Define the Recurrence
For s1[i-1] and s2[j-1]:
- If they match: dp[i][j] = dp[i-1][j-1] (no operation needed)
- If they don’t match, pay 1 operation plus the best of three options:
  - Replace: dp[i-1][j-1]
  - Delete from s1: dp[i-1][j]
  - Insert into s1: dp[i][j-1]
So: dp[i][j] = 1 + min(dp[i-1][j-1], dp[i-1][j], dp[i][j-1])
Base cases:
- dp[0][j] = j (insert j characters)
- dp[i][0] = i (delete i characters)
Implementation
function editDistance(s1, s2) {
const m = s1.length, n = s2.length;
const dp = Array(m + 1).fill(null).map(() => Array(n + 1).fill(0));
for (let i = 0; i <= m; i++) dp[i][0] = i;
for (let j = 0; j <= n; j++) dp[0][j] = j;
for (let i = 1; i <= m; i++) {
for (let j = 1; j <= n; j++) {
if (s1[i - 1] === s2[j - 1]) {
dp[i][j] = dp[i - 1][j - 1];
} else {
dp[i][j] = 1 + Math.min(
dp[i - 1][j - 1], // replace
dp[i - 1][j], // delete
dp[i][j - 1] // insert
);
}
}
}
return dp[m][n];
}
When to Recognize DP in an Interview
You won’t see "this is a DP problem" on the whiteboard. How do you spot it?
| Signal | What to Think |
|---|---|
| "minimum/maximum/best" | Usually DP (optimize over choices) |
| "how many ways" | Counting problems: usually DP (choices branch) |
| "is it possible" | Feasibility: usually DP (check all options) |
| "can you make/partition/segment" | DP (try all divisions) |
| You write a recursive solution that recomputes the same thing | Add memoization! |
Interviewer: "Given an array of integers, find the maximum sum of non-adjacent elements."
Your thought process:
1. "For each element, I can either pick it or skip it."
2. "If I pick it, I can’t pick the previous one."
3. "This is similar to House Robber!"
4. "I’ll try a recursive solution: maxSum(i) = max(skip element i, include element i + maxSum(i-2))"
5. "Wait, I’m computing maxSum(i) multiple times when I recurse."
6. "Add memoization!"
You just spotted and solved a DP problem in 2 minutes.
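A memoized sketch of that thought process, written here top-down from the front of the array (the name `maxNonAdjacentSum` and the helper `best` are just illustrative):

```javascript
function maxNonAdjacentSum(nums) {
  const memo = new Map();
  // best(i) = max sum of non-adjacent elements from index i onward
  function best(i) {
    if (i >= nums.length) return 0;          // base case: nothing left
    if (memo.has(i)) return memo.get(i);     // cached subproblem
    // Skip nums[i], or take it and jump past its neighbor
    const result = Math.max(best(i + 1), nums[i] + best(i + 2));
    memo.set(i, result);
    return result;
  }
  return best(0);
}
```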
What to Tell the Interviewer
This is how a senior engineer talks about DP:
1. "Let me start with the recursive structure: [write the recursive solution]"
2. "I notice I’m computing the same subproblems. For example, fib(3) is computed twice. Let me add memoization."
3. "With memoization, each unique input is computed once. Time: O(n), space: O(n)."
4. "[Optional] If we want to optimize space, I can do bottom-up iteration instead of recursion: [code bottom-up]"
This shows:
- You understand the recursion first
- You spot the optimization opportunity
- You can articulate complexity
- You can trade off time for space if needed
Recursion and Backtracking
Recursion: The Mental Model
Recursion is not magic. It’s:
1. Trust that the function works for smaller inputs
2. Build the answer from those results
3. Stop at a base case
The "leap of faith": When writing fib(n) = fib(n-1) + fib(n-2), you don’t trace through all the recursive calls mentally. You trust that fib(n-1) returns the right answer, and fib(n-2) returns the right answer.
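A tiny illustration of the leap of faith (the function name `sum` is just illustrative): you don’t trace the recursion, you trust that the call on the smaller array is correct.

```javascript
// Sum of an array, recursively
function sum(arr) {
  if (arr.length === 0) return 0;      // base case: empty array sums to 0
  return arr[0] + sum(arr.slice(1));   // leap of faith: trust sum() on the rest
}
```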
The Call Stack
Each function call pushes a frame on the call stack. The frame stores local variables and where to return. When the function returns, the frame pops off.
Example: fib(4)
Call stack:
[fib(4)]
└─ [fib(4), fib(3)]
   └─ [fib(4), fib(3), fib(2)]
      └─ [fib(4), fib(3), fib(2), fib(1)]
         (fib(1) returns 1)
      └─ [fib(4), fib(3), fib(2), fib(0)]
         (fib(0) returns 0)
      (fib(2) returns 1 and pops off)
   └─ [fib(4), fib(3), fib(1)]
      (fib(1) returns 1)
   (fib(3) returns 2 and pops off)
└─ [fib(4), fib(2)]
   (similar pattern, fib(2) returns 1)
(fib(4) returns 3)
The key insight: Recursion uses O(depth) space because each call is on the stack.
Three Parts of Every Recursive Function
| Part | Purpose | Example (fib) |
|---|---|---|
| Base Case | When to stop recursing | if (n <= 1) return n; |
| Recursive Case | How to make progress toward the base | fib(n - 1) and fib(n - 2) |
| Work at This Level | What this call computes | The addition at this level |
Backtracking: Systematic Exploration
Backtracking = recursion + choice + undo.
The intuition: You’re exploring a maze. You pick a direction, walk forward. If you hit a dead end, you backtrack (go back to the junction) and try another direction. You systematically explore all paths until you find the solution.
Code pattern:
function backtrack(state, choices) {
// Base case: solution found
if (isSolution(state)) {
results.push(copy(state));
return;
}
// Recursive case: make a choice and recurse
for (let choice of choices) {
if (isValid(choice, state)) {
// Make the choice (modify state)
makeChoice(state, choice);
// Recurse with the modified state
backtrack(state, remainingChoices);
// CRITICAL: Undo the choice to try the next one
undoChoice(state, choice);
}
}
}
The "undo" step is what makes it backtracking. Without it, you’re just recursing; with it, you’re exploring systematically.
Why the Undo Matters
// WITHOUT undo: state is left modified
function badBacktrack(state, choices) {
if (isSolution(state)) {
results.push(copy(state));
return;
}
for (let choice of choices) {
if (isValid(choice, state)) {
makeChoice(state, choice); // state is now modified
badBacktrack(state, remainingChoices);
// state is still modified! Next iteration uses wrong state.
}
}
}
// WITH undo: state is restored
function goodBacktrack(state, choices) {
if (isSolution(state)) {
results.push(copy(state));
return;
}
for (let choice of choices) {
if (isValid(choice, state)) {
makeChoice(state, choice);
goodBacktrack(state, remainingChoices);
undoChoice(state, choice); // state is restored
// Now state is back to what it was, ready for next choice
}
}
}
With undo, each iteration of the loop starts with the state in the same position. That’s how you explore all possibilities.
Backtracking Problems
Subsets (Generate All 2^n Subsets)
Problem: Given array, return all subsets.
Example: [1, 2] → [[], [1], [2], [1, 2]]
Decision Tree
For each element, you make a binary choice: include it or exclude it.
             []
           /    \
        [1]      []
       /   \    /  \
   [1,2]  [1] [2]   []
This is a perfect binary tree of depth n. Each path from root to leaf is a subset.
Two Approaches
Approach 1: Include/Exclude
function subsets(nums) {
const result = [];
function backtrack(index, current) {
// Base case: we've made choices for all elements
if (index === nums.length) {
result.push([...current]);
return;
}
// Choice 1: include nums[index]
current.push(nums[index]);
backtrack(index + 1, current);
// Choice 2: exclude nums[index]
current.pop();
backtrack(index + 1, current);
}
backtrack(0, []);
return result;
}
Trace
For [1, 2]:
backtrack(0, [])
├─ include 1: backtrack(1, [1])
│ ├─ include 2: backtrack(2, [1, 2]) → result: [1, 2]
│ └─ exclude 2: backtrack(2, [1]) → result: [1]
└─ exclude 1: backtrack(1, [])
├─ include 2: backtrack(2, [2]) → result: [2]
└─ exclude 2: backtrack(2, []) → result: []
Total: 4 subsets = 2^2. Correct!
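Approach 2: For-loop over the next element to add. A sketch (the name `subsetsForLoop` is just illustrative): every recursive call records the current subset, then tries extending it with each later element. Note the subsets come out in a different order than the include/exclude version, but the same 2^n sets are produced.

```javascript
function subsetsForLoop(nums) {
  const result = [];
  function backtrack(start, current) {
    result.push([...current]); // every prefix of choices is itself a subset
    for (let i = start; i < nums.length; i++) {
      current.push(nums[i]);     // make choice: add nums[i]
      backtrack(i + 1, current); // only consider elements after i (no duplicates)
      current.pop();             // undo choice
    }
  }
  backtrack(0, []);
  return result;
}
```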
Permutations (Generate All n! Permutations)
Problem: Given array, return all permutations.
Example: [1, 2] → [[1, 2], [2, 1]]
Decision Tree
At each position, you choose which unused element to place.
Level 0: choices = [1, 2]
Level 1: if chose 1, choices = [2]; if chose 2, choices = [1]
Level 2: all elements used, permutation complete
Tree has n! leaves (one for each permutation).
Implementation
function permutations(nums) {
const result = [];
const used = new Set();
function backtrack(current) {
// Base case: all elements used
if (current.length === nums.length) {
result.push([...current]);
return;
}
// Recursive case: try each unused element
for (let num of nums) {
if (used.has(num)) continue;
// Make choice
used.add(num);
current.push(num);
// Recurse
backtrack(current);
// Undo choice
current.pop();
used.delete(num);
}
}
backtrack([]);
return result;
}
Complexity
- Time: O(n! * n) — n! permutations, each taking O(n) to copy into the result
- Space: O(n) for the recursion depth, plus the output storage
Combination Sum
Problem: Given array of numbers (no duplicates) and target, find all combinations that sum to target. Each number can be used multiple times.
Example: nums = [2, 3, 6, 7], target = 7 → [[2, 2, 3], [7]]
Key Insight
Unlike permutations, order doesn’t matter ([2, 3] and [3, 2] are the same). So you need to avoid duplicates by enforcing an order: always pick elements at or after the current index.
Implementation
function combinationSum(nums, target) {
const result = [];
function backtrack(start, current, remaining) {
// Base case: found a combination
if (remaining === 0) {
result.push([...current]);
return;
}
// Pruning: can't possibly reach target
if (remaining < 0) return;
// Recursive case: try each number starting from 'start'
for (let i = start; i < nums.length; i++) {
const num = nums[i];
// Make choice
current.push(num);
// Recurse: allow reusing this number (pass i, not i+1)
backtrack(i, current, remaining - num);
// Undo choice
current.pop();
}
}
backtrack(0, [], target);
return result;
}
Why start Instead of index
By passing start = i (not i + 1), we allow reusing the same number. This is why [2, 2, 3] is valid.
Pruning
The check if (remaining < 0) return; prunes the search tree early. If the remaining target is negative, no point recursing further.
Word Search (DFS + Backtracking on Grid)
Problem: Given 2D grid of characters and a word, determine if the word exists in the grid. Can move up, down, left, right.
Example:
[["A","B","C","E"],
["S","F","C","S"],
["A","D","E","E"]]
Word "ABCCED" → true (found in grid)
Word "SEE" → true
Word "ABCB" → false (would need to reuse B)
Implementation
function wordSearch(board, word) {
const rows = board.length, cols = board[0].length;
function backtrack(r, c, index, visited) {
// Base case: matched entire word
if (index === word.length) return true;
// Out of bounds or character mismatch
if (r < 0 || r >= rows || c < 0 || c >= cols) return false;
if (board[r][c] !== word[index]) return false;
if (visited.has(`${r},${c}`)) return false;
// Make choice: mark as visited
visited.add(`${r},${c}`);
// Recurse: try all 4 directions
const directions = [[0, 1], [0, -1], [1, 0], [-1, 0]];
for (const [dr, dc] of directions) {
if (backtrack(r + dr, c + dc, index + 1, visited)) {
return true;
}
}
// Undo choice: unmark as visited (backtrack)
visited.delete(`${r},${c}`);
return false;
}
for (let r = 0; r < rows; r++) {
for (let c = 0; c < cols; c++) {
if (backtrack(r, c, 0, new Set())) {
return true;
}
}
}
return false;
}
Why Backtracking?
The "visited" set ensures we don’t reuse the same cell in a path. When we backtrack (undo), we remove the cell from visited so it can be used in other paths.
This is different from a simple DFS because we’re not just checking if a path exists; we’re exploring all possible paths until one matches the word.
Backtracking Interview Strategy
1. Draw the decision tree if it’s complex (subsets, permutations).
2. Identify what to choose (which element? which direction?).
3. Identify what to undo (is state modified?).
4. Implement with care: make → recurse → undo, in that order.
5. Mention complexity: O(choices^depth * work_per_node).

- For subsets: 2^n subsets
- For permutations: n! permutations
- For combination sum: exponential, but reduced by pruning
- For word search: exponential, but reduced by the visited set