# DYNAMIC PROGRAMMING

Dynamic programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems. It is crucial that these subproblems are *overlapping*. What does that mean? It means they are not totally independent of each other: the same subproblem appears several times. So we solve the subproblems and then combine their solutions to reach the final solution. What is the main concept of dynamic programming? That we solve every subproblem only once, which reduces the number of computations. How? The solutions of the subproblems can be stored in memory. This technique is called “memoization”.

Let’s consider Fibonacci numbers. The recursive formula is **F(n) = F(n-1) + F(n-2)**, with the base cases **F(0) = 0** and **F(1) = 1**. So we can calculate the numbers in a recursive manner. Maybe it’s not obvious at first glance, but there are several overlapping subproblems. Why is that a problem? Because we keep calculating the same subproblems over and over again.

If we draw the recursive calls as a tree, we can see that we can calculate the Fibonacci values with recursion. What is the problem? The problem is that we keep calculating the same subproblems several times: when computing **F(5)**, for example, we calculate **F(2)** three times. We could store the value of **F(2)** in a hash table the first time and just look it up afterwards. And this is exactly what dynamic programming does.
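To make the repetition concrete, here is a small sketch (the function name `fib_count` is my own) that runs the naive recursion and counts how often each **F(k)** is computed:

```python
from collections import Counter

def fib_count(n, calls):
    """Naive recursive Fibonacci that records how often each F(k) is computed."""
    calls[n] += 1
    if n < 2:
        return n
    return fib_count(n - 1, calls) + fib_count(n - 2, calls)

calls = Counter()
fib_count(5, calls)
print(dict(calls))  # F(2) is computed 3 times, F(1) five times
```

For **F(5)** alone, **F(2)** is already evaluated three times, and the duplication grows exponentially with *n*.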

**DYNAMIC PROGRAMMING VS DIVIDE AND CONQUER**

The two concepts are very similar to each other, but there is a huge difference. Divide and conquer algorithms, such as mergesort and quicksort, divide the problem into subproblems, solve them independently, and then combine the results: there are no overlapping subproblems. In dynamic programming, on the other hand, the subproblems do overlap. This is the main difference between the two approaches.

**IMPLEMENTATION**

So let’s see the recursive approach and the dynamic programming approach for Fibonacci numbers. The recursive approach is quite intuitive: we just have to code the mathematical formula:
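A minimal sketch of the recursive approach in Python (the function name `fibonacci` is my own choice):

```python
def fibonacci(n):
    """Naive recursive Fibonacci: exponential time, because subproblems are recomputed."""
    # base cases: F(0) = 0 and F(1) = 1
    if n < 2:
        return n
    # recursive formula: F(n) = F(n-1) + F(n-2)
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # → 55
```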

This is what we have been discussing: the algorithm keeps calculating the same subproblems over and over again. So using a hash table to store the results would be great. Let’s see the dynamic programming approach:
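A possible memoized version, using a Python dictionary as the map (again, the names are my own):

```python
def fibonacci_memo(n, memo=None):
    """Memoized Fibonacci: every F(k) is computed exactly once, so linear time."""
    if memo is None:
        # the map starts with the base cases F(0) = 0 and F(1) = 1
        memo = {0: 0, 1: 1}
    if n not in memo:
        # compute the value once, then store it in the map for future lookups
        memo[n] = fibonacci_memo(n - 1, memo) + fibonacci_memo(n - 2, memo)
    return memo[n]

print(fibonacci_memo(50))  # → 12586269025
```

With memoization, even `fibonacci_memo(50)` returns instantly, while the naive recursion would make billions of calls.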

In the source code above, we use a map to “memoize” the values. First we put the base cases, **F(0) = 0** and **F(1) = 1**, into the map. Then, when we calculate a given Fibonacci number, we look the smaller values up in the map and put the newly calculated one into it. Why is this good? Because it makes sure that the algorithm calculates every Fibonacci number exactly once.