
There are several excellent articles that cover the optimization technique called memoization. What we’re going to do here is give you a brief overview of what memoization is, then go a bit further by performing a step-by-step analysis of what “memoizing” code is actually doing, line by line. By the end of the article, you will fully understand memoization.
Memoization in a Nutshell
Memoization is the programmatic practice of making long recursive/iterative functions run much faster.
How, you ask? By caching the values that the function returns after its initial execution.
When we input the same value into our memoized function, it returns the value stored in the cache instead of running the function again, thus boosting performance. No longer does your program have to recalculate every number to get a result.
Sounds awesome, right? Well, what’s even better is that it’s not hard to understand what a memoize function is doing once you break down the code.
Here’s an example of a basic memo function:
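A minimal sketch of such a memo function (the cache and key names here are just descriptive choices):

function memo(func) {
  var cache = {};                        // the cache lives inside the closure
  return function () {
    var key = JSON.stringify(arguments); // stringify the arguments object to use as a key
    if (key in cache) {
      return cache[key];                 // cache hit: skip the work entirely
    }
    cache[key] = func.apply(this, arguments); // cache miss: run the function and store the result
    return cache[key];
  };
}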
Let’s start from the beginning…
We’re returning what’s called an anonymous closure. Our anonymous closure is able to inherit any variable or, in this case, function passed into memo. We can then play with the arguments object that our returned function receives when it is called.
Side Note: So, if every function has an arguments object, why aren’t we inheriting the arguments object of memo? That’s because, as a rule, closures do not inherit an outer function’s arguments object. We use this rule to our advantage in order to play with the function we want to memoize.
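A quick sketch (with made-up names) shows the rule in action:

function outer() {
  return function () {
    // `arguments` here belongs to the returned closure, not to outer(),
    // so it reflects whatever the closure itself was called with
    return arguments[0];
  };
}

var inner = outer('passed to outer');
inner('passed to inner'); // => 'passed to inner'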
Let’s break down exactly what memoization is doing by looking at a very simple and impractical example. By the way, you should copy/paste this code, or any code for that matter, to really get a feel for how it works.
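For instance, here is a deliberately trivial sketch (the function and cache names are made up) that hard-codes the caching logic:

var cache = {};                // a plain object acts as our cache

function addTen(n) {
  if (n in cache) {            // have we seen this input before?
    return cache[n];           // yes: return the stored answer immediately
  }
  console.log('loading...');   // no: pretend this is the expensive part
  cache[n] = n + 10;           // do the work and store it under the input
  return cache[n];
}

addTen(5); // => loading...  then 15 (calculated and cached)
addTen(5); // => 15 (no loading, read straight from the cache)
addTen(6); // => loading...  then 16 (new input, so it's calculated and cached)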
I think we get the gist of this code example. It practically speaks for itself. Let’s flesh it out a little more by looking at a snapshot of our real memo function:
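Here’s one way that snapshot might look, annotated line by line (a sketch, using the same assumed cache and key names as above):

function memo(func) {            // take in the function we want to memoize
  var cache = {};                // the cache is hidden inside the closure
  return function () {           // return the anonymous closure described above
    var key = JSON.stringify(arguments); // turn the arguments object into a string key
    if (key in cache) {          // have we been called with these arguments before?
      return cache[key];         // yes: skip the work and return the stored value
    }
    cache[key] = func.apply(this, arguments); // no: run the original function and store the result
    return cache[key];           // hand back the freshly cached value
  };
}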
Now that we’ve broken down what a memo function is doing, let’s create a function expression, pass a recursive Fibonacci function into memo, and see what happens.
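A sketch of what that might look like (the base case returning 1 matches the cache output shown below):

var fib = memo(function (n) {
  if (n < 2) {
    return 1;                     // fib(0) and fib(1) are both 1, as the cache below shows
  }
  console.log('loading...');      // only prints when a value actually has to be computed
  return fib(n - 1) + fib(n - 2); // the recursive calls go through the memoized fib,
                                  // so every sub-result gets cached along the way
});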
//Now we'll test our code!
// =======TEST=======
fib(10)
//On the first try we need to load
//=> loading…
//=> loading…
//=> loading…
//=> loading…
//=> loading…
//=> 4 loadings later we get...
//=> 89

// And on the second try…
fib(10)
//=> 89
//No loading! Yaay!

//The cool thing about memoizing the recursive Fibonacci algorithm is that
//once we make a call for the value of the nth number in the series, we are
//able to store all the previous numbers in the series.

//So if we try...
fib(7) //=> 21
//We don’t have to recalculate anything.

//And if we want to try fib(11)...
fib(11)
//loading...
//=> 144
//Our memoized recursive function already cached fib(1-10), so all it needed
//to do was compute fib(11) from those cached values.

// This is what the cache now looks like:
/*
{ '{"0":0}': 1,
  '{"0":1}': 1,
  '{"0":2}': 2,
  '{"0":3}': 3,
  '{"0":4}': 5,
  '{"0":5}': 8,
  '{"0":6}': 13,
  '{"0":7}': 21,
  '{"0":8}': 34,
  '{"0":9}': 55,
  '{"0":10}': 89,
  '{"0":11}': 144 }
*/
Conclusion
Memoization becomes demystified when you boil it down to key-value pairs. All we’re doing is creating an object, checking for existing values that match the user input, and storing new key-value pairs if they don’t exist in our object.
Of course, storing all this data means that we’re going to be using up memory. It’s best to implement memoization on functions that are pure and involve heavy, repetitive calculations.
Bonus:
Just for fun, here’s a functional version of our memoizer:
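One way to write it (a sketch using ES6 arrow functions; note that the keys here stringify as arrays like '[10]' rather than as arguments objects):

const memo = (func, cache = {}) =>
  (...args) => {
    const key = JSON.stringify(args);
    // return the cached value if we have it; otherwise compute, cache, and return it
    return key in cache ? cache[key] : (cache[key] = func(...args));
  };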
