What is dynamic programming?
Dynamic programming is a kind of careful brute force: you try all possibilities, but carefully.
Before going into the definition, let's first take a problem, so that we can define dynamic programming through it.
Why was it named this way?
Check out the history here.
Problem - Nth Fibonacci Number
Dynamic programming is a method for efficiently solving a broad range of search and optimization problems that exhibit the characteristics of overlapping subproblems and optimal substructure. I'll try to illustrate these characteristics through some simple examples and end with an exercise. Happy coding!
Here are the steps in DP:
- Memoization (remember): cache the result of each subproblem the first time it is solved (a short sketch follows this list).
- Re-use the stored solutions to subproblems to help solve the bigger problem.
- Time complexity = (number of subproblems) × (time spent per subproblem). For Fibonacci, we have n subproblems and θ(1) time per subproblem. We don't count the memoized recursive calls, since each subproblem is computed only once; later cache hits are constant-time lookups, a one-time effort we don't want to count in the per-subproblem cost.
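To make the memoization step concrete, here is a minimal sketch of what the top-down (memoized) approach could look like in C++; the names FibMemo and GetFibonacciTopDown and the -1 sentinel are illustrative choices, not from the original post.

#include <vector>

// Top-down Fibonacci with memoization (illustrative sketch).
// memo[i] == -1 means F(i) has not been computed yet.
int FibMemo(int n, std::vector<int>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];  // re-use a stored subproblem result
    memo[n] = FibMemo(n - 1, memo) + FibMemo(n - 2, memo);
    return memo[n];
}

int GetFibonacciTopDown(int n) {
    std::vector<int> memo(n + 1, -1);
    return FibMemo(n, memo);
}

Each of the n subproblems is computed exactly once, and every later call for the same value is a constant-time cache lookup, which is where the O(n) bound from the list above comes from.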
Now let's look at the bottom-up dynamic programming approach. For a given n we would need F(n-1), F(n-2), F(n-3), ..., F(0) already solved, so in this case we start at the bottom and work our way up.
int GetFibonacci(int n) {
    if (n == 0) return 0;
    if (n == 1) return 1;
    int last1 = 0;  // F(0)
    int last2 = 1;  // F(1)
    int ans = 0;
    // each iteration advances the pair (last1, last2) one step up the sequence
    for (int i = 0; i < n - 1; i++) {
        ans = last1 + last2;
        last1 = last2;
        last2 = ans;
    }
    return ans;
}
We could have used a cache like in the top-down approach, but all we really need are the last 2 results, so we can get away with just two variables. This is also an iterative solution, which is better than the recursive one because it avoids the stack overhead of recursive calls.
Time Complexity: O(n)
Extra Space: O(1)
The bottom-up version here also seems more optimized, as it makes no function calls. As a cool aside, the nth Fibonacci number can even be computed in O(log n) time; here's how, and a rough sketch follows.
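For reference, one standard way to reach O(log n) is the fast-doubling method, which is closely related to exponentiating the matrix [[1, 1], [1, 0]]; the linked article may present it differently, and the function names below are only illustrative.

#include <utility>

// Fast doubling: returns the pair {F(n), F(n+1)} in O(log n) time, using
//   F(2k)   = F(k) * (2*F(k+1) - F(k))
//   F(2k+1) = F(k)^2 + F(k+1)^2
std::pair<long long, long long> FibDoubling(long long n) {
    if (n == 0) return {0, 1};
    auto [a, b] = FibDoubling(n / 2);  // a = F(k), b = F(k+1), where k = n / 2
    long long c = a * (2 * b - a);     // F(2k)
    long long d = a * a + b * b;       // F(2k+1)
    if (n % 2 == 0) return {c, d};
    return {d, c + d};                 // {F(2k+1), F(2k+2)}
}

long long GetFibonacciLogN(long long n) {
    return FibDoubling(n).first;
}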
Coming back to the topic: the bottom-up version is simpler here. To understand it more generally:
- It performs exactly the same computation as the memoized version.
- Topologically sort the subproblem dependency DAG.
So we compute the least dependent subproblems first.
Bottom-up vs top-down DP
- In bottom-up solutions we often save space (here, just two variables), which can be helpful in real-life solutions.
- Also, in bottom-up solutions the time complexity is easy to read off the loops, whereas with memoized solutions we have to think a bit more before we can derive it.
Contents
Common Characteristics of Dynamic Programming
Overlapping Subproblems
Optimal Substructure
The Knapsack Problem
Everyday Dynamic Programming
The above examples might make dynamic programming look like a technique which only applies to a narrow range of problems, but many algorithms from a wide range of fields use dynamic programming. Here's a very partial list.
- The Needleman-Wunsch algorithm, used in bioinformatics.
- The CYK algorithm, which is used in the theory of formal languages and natural language processing.
- The Viterbi algorithm used in relation to hidden Markov models.
- Finding the string-edit distance between two strings, useful in writing spellcheckers (a small sketch follows this list).
- The D/L method used in the sport of cricket.
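To make the edit-distance entry above concrete, here is a minimal sketch of the classic Levenshtein dynamic program with unit costs for insert, delete, and substitute; the function name EditDistance and the full O(m*n) table are illustrative choices (the table can be reduced to two rows, much like the two-variable Fibonacci above).

#include <algorithm>
#include <string>
#include <vector>

// dp[i][j] = minimum number of edits to turn the first i characters of a
// into the first j characters of b.
int EditDistance(const std::string& a, const std::string& b) {
    const int m = static_cast<int>(a.size());
    const int n = static_cast<int>(b.size());
    std::vector<std::vector<int>> dp(m + 1, std::vector<int>(n + 1, 0));
    for (int i = 0; i <= m; i++) dp[i][0] = i;  // delete all i characters
    for (int j = 0; j <= n; j++) dp[0][j] = j;  // insert all j characters
    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            if (a[i - 1] == b[j - 1]) {
                dp[i][j] = dp[i - 1][j - 1];                 // characters match, no edit needed
            } else {
                dp[i][j] = 1 + std::min({dp[i - 1][j],       // delete from a
                                         dp[i][j - 1],       // insert into a
                                         dp[i - 1][j - 1]}); // substitute
            }
        }
    }
    return dp[m][n];
}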