Dynamic Programming
Dynamic Programming problems can be solved with a Bottom-up approach or a Top-down one. In the former, we solve the smallest sub-problems first and then build the full solution up from them (a bottom-up sketch of the Fibonacci example follows right below). In the latter, we use an approach very similar to Backtracking: we still build our Search Tree, but we also avoid re-calculating overlapping sub-problems. To do so, we can adopt a technique called Memoization, which is simply a container that caches the solutions of the sub-problems we have already solved. I personally prefer the Top-down one, since it comes more naturally to implement once you have derived the Search Tree.
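To make the contrast concrete, here is a minimal bottom-up sketch for the Fibonacci problem, the same example developed in the rest of this section (fib_bottom_up is a name of mine): the table is filled from the base cases upwards, with no recursion involved.

def fib_bottom_up(n: int) -> int:
    if n <= 1:
        return n
    # table[i] holds fib(i); fill it from the base cases upwards.
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]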
Example
A classic problem to which Dynamic Programming applies is the Fibonacci one. The naive solution would be something like:
def fib(n: int) -> int:
    # Base cases: fib(0) = 0, fib(1) = 1.
    if n <= 1:
        return n
    # Each call spawns two more calls.
    return fib(n - 1) + fib(n - 2)
This has a Time and Space complexity of O(2^n) and O(n), respectively: the recursion tree has exponentially many nodes, while the call stack only grows as deep as n.
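To get a feel for the exponential blow-up, we can count the calls (a quick sketch; fib_counted and call_count are names of mine):

call_count = 0

def fib_counted(n: int) -> int:
    global call_count
    call_count += 1
    if n <= 1:
        return n
    return fib_counted(n - 1) + fib_counted(n - 2)

fib_counted(20)
print(call_count)  # 21891 calls just for n = 20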
To understand how to apply Dynamic Programming, let's derive the Search Tree of the previous snippet for fib(4).
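A text sketch of that tree, where each node is a recursive call:

                fib(4)
               /      \
          fib(3)      fib(2)
         /      \     /     \
     fib(2)  fib(1) fib(1) fib(0)
    /      \
fib(1)  fib(0)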
As you can see, there are two calls to fib(2), but do we actually need both of them? The answer is no: we can use Memoization to store the result of fib(2) the first time we compute it and reuse it afterwards. A Dictionary suits such a task very well.
from typing import Dict

def fib(n: int) -> int:
    def dfs(n: int, memo: Dict[int, int]) -> int:
        assert n >= 0
        if n <= 1:
            return n
        # Memoization: if the sub-problem was already solved, reuse it.
        if n in memo:
            return memo[n]
        res = dfs(n - 1, memo) + dfs(n - 2, memo)
        # Cache the result before returning it.
        memo[n] = res
        return memo[n]

    return dfs(n, {})
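A quick sanity check (an input like 50 would be hopeless for the naive version, but is instant here):

print(fib(10))  # 55
print(fib(50))  # 12586269025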
Thanks to this kind of cache, we are able to avoid redundant function calls, and this prunes the previous Search Tree to:
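A text sketch of the pruned tree, where the starred node is answered directly from the memo and is not expanded again:

                fib(4)
               /      \
          fib(3)      fib(2)*
         /      \
     fib(2)  fib(1)
    /      \
fib(1)  fib(0)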
This means that we also reduce our Time complexity to O(n), since each sub-problem is now solved at most once.
functools.lru_cache
The same result can be achieved by using the lru_cache decorator from functools.
from functools import lru_cache

# lru_cache memoizes results keyed by the arguments (default maxsize is 128).
@lru_cache
def fib(n: int) -> int:
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
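The wrapped function also exposes its cache statistics through cache_info(); the numbers below assume a fresh cache. On Python 3.9+, functools.cache is an unbounded equivalent of lru_cache(maxsize=None).

fib(30)
print(fib.cache_info())
# CacheInfo(hits=28, misses=31, maxsize=128, currsize=31)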