Before diving into DP, let us first understand where DP is used.
The core concept of DP is to avoid repeated work by remembering partial results (the results of subproblems). This is critical for the performance and speed of an algorithm. Many problems in computer science and the real world can be solved using the DP technique.
As a real-life scenario, consider the example where I have to go from home to work every day. The first time, I can calculate the shortest path between home and work by considering all possible routes. But it is not feasible to repeat that calculation every day, so I memorize the shortest path and follow that route from then on. In computer science terms, services like Google Maps compute shortest paths between two points using the same idea of reusing previously computed results.
Longest Common Subsequence (LCS) problem - forms the basis of data-comparison problems and of plagiarism detection.
Longest Increasing Subsequence problem - used in DNA matching between two individuals. Generally, DNAs are represented as strings, and to form a match between the DNAs of two individuals, the algorithm needs to find the longest increasing subsequence between them. In cases of DNA matching, the longest common substring is also found.
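To make this concrete, here is a minimal sketch of the classic O(n^2) DP for Longest Increasing Subsequence (illustrative only, not a DNA-matching pipeline; the input list is a made-up example):

```python
# Illustrative sketch: O(n^2) DP for Longest Increasing Subsequence.
# dp[i] = length of the longest increasing subsequence ending at index i.
def longest_increasing_subsequence(seq):
    if not seq:
        return 0
    dp = [1] * len(seq)
    for i in range(1, len(seq)):
        for j in range(i):
            if seq[j] < seq[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

print(longest_increasing_subsequence([10, 9, 2, 5, 3, 7, 101, 18]))  # -> 4
```

Each dp[i] is built from the optimal answers for smaller prefixes, which is exactly the optimal-substructure idea discussed later in this article.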
Knapsack Problem - you have a bag of limited capacity and you decide to go on a challenging trek. Due to the capacity restriction, you can only carry certain items in optimum quantity. How do you select the items and their quantities efficiently so that you don't miss out on important items? That's where DP comes to your aid.
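The selection just described is the classic 0/1 knapsack problem, which DP solves exactly. A minimal sketch (the weights, values, and capacity below are made-up examples):

```python
# Illustrative sketch: 0/1 knapsack via DP.
# dp[w] = best total value achievable with capacity w.
def knapsack(weights, values, capacity):
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # Traverse capacities in reverse so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # -> 9 (take items of weight 3 and 4)
```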
Apart from the above, DP has found its importance in various fields like Bioinformatics, Operations Research, Decision Making, Image Processing, Financial Optimisation, Genetics, XML indexing and querying, and in tools such as MATLAB, MS Word and MS Excel.
Practice problems (problem name with score):

- Longest Common Subsequence (200)
- Longest Palindromic Subsequence (200)
- Edit Distance (300)
- Repeating SubSequence (300)
- Distinct Subsequences (325)
- Interleaving Strings (500)
- Regular Expression Match (500)
- Regular Expression II (500)
- Scramble String (500)

- Length of Longest Subsequence (200)
- Largest area of rectangle with permutations (200)
- Smallest sequence with given Primes (200)
- Ways to Decode (225)
- Stairs (225)
- Intersecting Chords in a Circle (300)
- Longest Increasing Subsequence (300)

- Tushar's Birthday Bombs (200)
- Jump Game Array (225)
- Min Jumps Array (300)

- Longest Arithmetic Progression (200)
- N digit numbers with digit sum S (200)
- Ways to color a 3xN Board (200)
- Shortest common superstring (200)
- Kth Manhattan Distance Neighbourhood (200)
- Best Time to Buy and Sell Stock atmost B times (200)
- Coins in a Line (300)
- Evaluate Expression To True (350)
- Longest valid Parentheses (700)
- Best Time to Buy and Sell Stocks III (700)

- Kingdom War (200)
- Maximum Size Square Submatrix (200)
- Maximum Path in Triangle (200)
- Min Sum Path in Matrix (300)
- Dungeon Princess (300)
- Min Sum Path in Triangle (300)
- Unique Paths in a Grid (300)
- Max Rectangle in Binary Matrix (350)
- Rod Cutting (350)
- Queen Attack (350)

- Sub Matrices with sum Zero (200)
- Coin Sum Infinite (225)
- Best Time to Buy and Sell Stocks I (300)
- Max Product Subarray (300)
- Arrange II (350)

- Chain of Pairs (200)
- Max Sum Without Adjacent Elements (225)

- Tushar's Birthday Party (200)
- Flip Array (200)
- 01 Knapsack (200)
- Equal Average Partition (350)

- Best Time to Buy and Sell Stocks II (225)

- Word Break II (350)

- Max Sum Path in Binary Tree (400)

- Unique Binary Search Trees II (400)
- Count Permutations of BST (400)

- Word Break (400)
- Palindrome Partitioning II (400)

- Tiling With Dominoes (200)
- Increasing Path in Matrix (200)
- Egg Drop Problem! (450)
- Paint House! (200)
- Minimum Difference Subsets! (200)
- Subset Sum Problem! (200)
- Merge elements (300)
- Max edge queries! (200)
Before moving on to approaches to solve a DP problem, let us have a look at the characteristics of a problem upon which we can apply the DP technique.
We can apply the DP technique to those problems that exhibit the below 2 characteristics:
1. Overlapping Subproblems: the problem can be broken down into subproblems which are solved again and again.
2. Optimal Substructure: the optimal solution of the given problem can be obtained from the optimal solutions of its subproblems.
We know that the n^{th} Fibonacci number (Fib(n)) is nothing but the sum of the previous 2 Fibonacci numbers, i.e.: Fib(n) = Fib(n - 1) + Fib(n - 2).
From the above equation, we can clearly deduce that a problem of size 'n' has been reduced to subproblems of size 'n - 1' and 'n - 2'.
Hence, we can say that Fibonacci numbers have the optimal substructure property. Moreover, computing both Fib(n - 1) and Fib(n - 2) requires computing Fib(n - 3), so the same subproblems are solved repeatedly: the overlapping subproblems property.
Note: It is important for a problem to have BOTH the above specified characteristics in order to be eligible to be solved using DP technique.
We shall continue with the example of finding the n^{th} Fibonacci number in order to understand the DP methods available. We have the following two methods in the DP technique, and we can use either of them to solve a problem in an optimised manner.
Top Down Approach is the method where we solve a bigger problem by recursively finding the solution to smaller subproblems.
Whenever we solve a smaller subproblem, we remember (cache) its result so that we don’t solve it repeatedly if it’s called many times. Instead of solving repeatedly, we can just return the cached result.
This method of remembering the solutions of already solved subproblems is called Memoization.
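As an illustration of memoization in practice, Python's functools.lru_cache performs exactly this caching automatically. A minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # remembers the result of every previous call
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # -> 12586269025, computed in linear time thanks to the cache
```

Without the decorator, the same function would take an impractically long time for n = 50 because of repeated work.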
Without Memoization
Think of a recursive approach to solving the problem. This part is simple. We already know Fib(n) = Fib(n - 1) + Fib(n - 2).
Write a recursive code for the approach you just thought of.
/**
* Pseudo code for finding Fib(n) without memoization
* @Parameters: n : nth Fibonacci term needed
*/
int Fib(int n) {
    if (n <= 1) return n; // Fib(0) = 0; Fib(1) = 1
    return Fib(n - 1) + Fib(n - 2);
}
A careful analysis of the recursion shows that the time complexity of the above approach is essentially exponential in n, because some terms are evaluated again and again.
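One way to see the repeated work concretely is to count the recursive calls; an illustrative sketch:

```python
# Counting calls shows why the naive recursion is exponential:
# the same subproblems are recomputed over and over.
calls = 0

def fib(n):
    global calls
    calls += 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(20)
print(calls)  # -> 21891 calls just to compute Fib(20)
```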
With Memoization
The idea is to cache the result of each subproblem in an array, say memo, so that when Fib(n) is called again, you do not recompute the whole thing; you simply return memo[n].
/**
* Pseudo code for finding Fib(n) with memoization
* @Parameters: n : nth Fibonacci term needed
*/
int memo[100] = {0};
int Fib(int n) {
    if (n <= 1) return n;
    // If we have processed this function before,
    // return the result from the last time.
    if (memo[n] != 0) return memo[n];
    // Otherwise calculate the result and remember it.
    memo[n] = Fib(n - 1) + Fib(n - 2);
    return memo[n];
}
The memo array takes O(n) space, and the recursion stack takes another O(n). So, the overall space complexity is O(n) + O(n) = 2 O(n) = O(n).
Let's look at the time complexity of Fib(n).
When Fib(n - 1) is called, it makes a call to Fib(n - 2). So when the call comes back to the original call from Fib(n), Fib(n - 2) would already be calculated. Hence the call to Fib(n - 2) will be O(1).
Hence,
T(n) = T(n - 1) + c, where c is a constant
     = T(n - 2) + 2c
     = T(n - 3) + 3c
     ...
     = T(n - k) + kc
     ...
     = T(0) + n * c = 1 + n * c = O(n)
Thanks to Dynamic Programming, we have successfully reduced an exponential problem to a linear one.
/* C Program to find Nth Fibonacci Number using Memoization */
#include <stdio.h>
int Fibonacci_Series(int);
int memo[100] = {0};
int main()
{
    int N, FibN;
    printf("\n Enter the Number to find Nth Fibonacci Number : ");
    scanf("%d", &N);
    FibN = Fibonacci_Series(N);
    printf("\n Fibonacci Number = %d", FibN);
    return 0;
}
int Fibonacci_Series(int N)
{
    if (N == 0)
        return 0;
    else if (N == 1)
        return 1;
    /* If we have computed this term before, return the cached result. */
    if (memo[N] != 0)
        return memo[N];
    /* Otherwise compute the result and remember it. */
    memo[N] = Fibonacci_Series(N - 1) + Fibonacci_Series(N - 2);
    return memo[N];
}
/* C++ Program to find Nth Fibonacci Number using Memoization */
#include <bits/stdc++.h>
using namespace std;
#define NIL -1
int memo[100];
/* Function to initialize NIL values in memo */
void initializeMemo()
{
    int i;
    for (i = 0; i < 100; i++)
        memo[i] = NIL;
}
int fib(int n)
{
    if (memo[n] == NIL)
    {
        if (n <= 1)
            memo[n] = n;
        else
            memo[n] = fib(n - 1) + fib(n - 2);
    }
    return memo[n];
}
// Main Driver code
int main ()
{
    int n = 10;
    initializeMemo();
    cout << "Fibonacci number : " << fib(n);
    return 0;
}
/* Java Program to find Nth Fibonacci Number using Memoization */
public class Fibonacci
{
    final int NIL = -1;
    int memo[] = new int[100];
    /* Function to initialize NIL values in memo */
    void initializeMemo()
    {
        for (int i = 0; i < 100; i++)
            memo[i] = NIL;
    }
    int fib(int n)
    {
        if (memo[n] == NIL)
        {
            if (n <= 1)
                memo[n] = n;
            else
                memo[n] = fib(n - 1) + fib(n - 2);
        }
        return memo[n];
    }
    // Main Driver method
    public static void main(String[] args)
    {
        Fibonacci fibonacci = new Fibonacci();
        int n = 10;
        fibonacci.initializeMemo();
        System.out.println("Fibonacci number : " + fibonacci.fib(n));
    }
}
# Python Program to find Nth Fibonacci Number using Memoization
def fib(n, memo):
    # Base case
    if n == 0 or n == 1:
        memo[n] = n
    # If memo[n] has not been evaluated before, calculate it
    if memo[n] is None:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    # Return the value corresponding to n
    return memo[n]

# Driver program
def main():
    n = 10
    # Declaration of memo
    memo = [None] * 101
    # Calculate and display the result
    print("Fibonacci Number is " + str(fib(n, memo)))

if __name__ == "__main__":
    main()
As the name indicates, bottom-up is the opposite of the top-down approach and avoids recursion.
Here, we solve the problem "bottom-up", i.e. by solving all the related subproblems first. This is typically done by filling up an n-dimensional table.
Depending on the results in the table, the solution to the original problem is then computed. This approach is therefore called "Tabulation".
Fib(n) = Fib(n - 1) + Fib(n - 2).
/**
* Pseudo code for finding Fib(n) using tabulation
* @Parameters: n : nth Fibonacci term needed
* local variable dp[] table built to store results of smaller subproblems
*/
int Fib(int n) {
    if (n == 0) return 0;
    int dp[] = new int[n + 1];
    // define base cases
    dp[0] = 0;
    dp[1] = 1;
    // Iteratively compute the results and store them
    for (int i = 2; i <= n; i++)
        dp[i] = dp[i - 1] + dp[i - 2];
    // return the value corresponding to the nth term
    return dp[n];
}
The time complexity of this approach is O(n), and the space complexity is O(n) for the dp table. The space can be reduced from O(n) to O(1) by just using 2 variables; this is left as an assignment to the reader. Let's look at complete implementations of Fib(n) using tabulation.
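One possible shape of that 2-variable optimisation (an illustrative sketch):

```python
# O(1)-space tabulation: keep only the last two Fibonacci values
# instead of the whole dp table.
def fib(n):
    if n <= 1:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print(fib(10))  # -> 55
```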
/* C Program to find Nth Fibonacci Number using Tabulation */
#include <stdio.h>
int fib(int n)
{
    if (n == 0)
        return 0;
    int dp[n + 1];
    int i;
    // base cases
    dp[0] = 0; dp[1] = 1;
    // calculating and storing the values
    for (i = 2; i <= n; i++)
        dp[i] = dp[i - 1] + dp[i - 2];
    return dp[n];
}
int main ()
{
    int n = 10;
    printf("Fibonacci number : %d ", fib(n));
    return 0;
}
/* C++ Program to find Nth Fibonacci Number using Tabulation */
#include <bits/stdc++.h>
using namespace std;
int fib(int n)
{
    if (n == 0)
        return 0;
    int dp[n + 1];
    int i;
    // base cases
    dp[0] = 0; dp[1] = 1;
    // calculating and storing the values
    for (i = 2; i <= n; i++)
        dp[i] = dp[i - 1] + dp[i - 2];
    return dp[n];
}
// Main Driver code
int main ()
{
    int n = 10;
    cout << "Fibonacci number : " << fib(n);
    return 0;
}
/* Java Program to find Nth Fibonacci Number using Tabulation */
public class Fibonacci
{
    int fib(int n) {
        if (n == 0) return 0;
        int dp[] = new int[n + 1];
        // base cases
        dp[0] = 0;
        dp[1] = 1;
        // calculating and storing the values
        for (int i = 2; i <= n; i++)
            dp[i] = dp[i - 1] + dp[i - 2];
        return dp[n];
    }
    public static void main(String[] args)
    {
        Fibonacci fibonacci = new Fibonacci();
        int n = 10;
        System.out.println("Fibonacci number : " + fibonacci.fib(n));
    }
}
# Python Program to find Nth Fibonacci Number using Tabulation
def fib(n):
    # base case
    if n == 0:
        return 0
    # dp array declaration
    dp = [0] * (n + 1)
    dp[1] = 1
    # calculating and storing the values
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

# Driver program
def main():
    n = 10
    print("Fibonacci number : " + str(fib(n)))

if __name__ == "__main__":
    main()
Why is dynamic programming named “dynamic”?
How to recognize a problem that can be solved using Dynamic Programming?
How to solve dynamic programming problems?
The concept of dynamic programming is very simple. If we have solved a problem with the given input, then we save the result for future reference, so as to avoid recomputing it.
We follow the mantra: Remember your Past.
We need to know that the optimal solutions to each subproblem contribute to the optimal solution of the overall given problem.
We can follow the below steps as a guideline for coming up with a DP solution:
Step 1. Think of a recursive approach to solving the problem which essentially expresses a problem, say P(X), in terms of smaller subproblem, say P(Y) or an expression involving multiple smaller subproblems, say P(Yi). Here we expect Yi < X which could mean one of the following:
a. If X is an integer, then it could mean Yi "less than" X arithmetically.
b. If X is a string, it could mean that Yi is a substring of X.
c. If X is an array, it could mean Yi is a subarray of X, and so forth.
Step 2. Once you have an approach, write a recursive code for it. Consider your recursive code function definition to be as below:
solve(K1, K2, K3 ... )
Step 3. Keep track of the results of each function by saving them after every function call so that if the same function solve(K1, K2, K3 ... ) is called again, we need not compute again.
Step 4. Once we are done, analyze the space and time complexities of the solution developed, and try to improve them if possible.
And that’s how a DP problem is solved.
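The steps above can be sketched as a generic memoized solver. Here make_solver, solve, and the state keys (K1, K2, ...) are placeholder names for whatever a concrete problem needs:

```python
# Generic shape of a memoized DP solver (Steps 2-3 above).
# The state tuple and the recurrence are placeholders to be filled in
# for a concrete problem.
def make_solver(recurrence):
    cache = {}
    def solve(*state):
        if state not in cache:          # reuse a saved result if we have one
            cache[state] = recurrence(solve, *state)
        return cache[state]
    return solve

# Example: plugging in the Fibonacci recurrence from earlier.
fib = make_solver(lambda solve, n: n if n <= 1 else solve(n - 1) + solve(n - 2))
print(fib(30))  # -> 832040
```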
How is the top-down approach (memoization) different from the bottom-up approach (tabulation)?
Parameters  Memoization  Tabulation 

State definition  State definition can be thought of easily.  Complicated to identify what a state should represent 
Ease of code  Less complicated and easy to code.  Complications increase when lots of other conditions arise. 
Speed of execution  Slower due to recursive calls and return statements  Faster as state values are accessed directly from table. 
Space  The cache entries are filled on demand during memoization.  All entries starting from the first one need to be filled by default. 
Where is dynamic programming used?
What are the characteristics of dynamic programming?
Every DP problem should have optimal substructure and overlapping subproblems.
Please refer to Characteristics of Dynamic Programming section above.
What are the applications of dynamic programming?
How is dynamic programming different from greedy approach?
Parameters  Dynamic Programming  Greedy Approach 

Optimality  An optimal solution is guaranteed, as DP considers all possible cases and then chooses the best among them.  Provides no guarantee of an optimal solution. 
Memory  DP requires a table or cache for remembering, and this increases its memory complexity.  More memory efficient, as it never looks back or revises its previous choices. 
Time complexity  DP is generally slower due to considering all possible cases and then choosing the best among them.  Generally faster. 
Feasibility  The decision at each step is made by evaluating the current problem and the solutions to previously solved subproblems.  The choice that seems best at the moment is made, in the hope of reaching a global optimal solution. 
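A classic illustration of the optimality row is coin change with denominations {1, 3, 4}: greedy picks the largest coin first and can miss the optimum, while DP considers every option.

```python
# Coin change for amount 6 with coins {1, 3, 4}.
def greedy_coins(coins, amount):
    # Always take as many of the largest coin as possible.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count

def dp_coins(coins, amount):
    # dp[a] = fewest coins summing to a, considering all choices.
    dp = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                dp[a] = min(dp[a], dp[a - c] + 1)
    return dp[amount]

print(greedy_coins([1, 3, 4], 6))  # -> 3   (4 + 1 + 1)
print(dp_coins([1, 3, 4], 6))      # -> 2   (3 + 3)
```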
How is dynamic programming different from divide and conquer approach?