It’s been two years since I wrote the post Functional JavaScript—Tail Call Optimization and Trampolines, where I described how to implement and use a trampoline to overcome JavaScript’s lack of tail call optimization (TCO), and not much has changed in the intervening time. None of the modern JavaScript engines implements tail call optimization. None!

The *SUCC* function is the simplest function to use for testing whether a JavaScript engine implements TCO:

```javascript
function SUCC(n) {
  var tco = function(n, acc) {
    if (n == 0) {
      return acc;
    } else {
      return tco(n-1, acc+1);
    }
  }
  return tco(n, 1);
}
```

Run it with a value of *n=250000* and it’s a sure bet you’ll generate a stack overflow if your JavaScript engine doesn’t implement TCO. In fact kangax has set up an ECMAScript 6 compatibility table showing, amongst other things, that TCO is hardly implemented anywhere; with one notable exception.
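Before reaching for a compatibility table you can probe an engine yourself. A hand-rolled check (the `hasTCO` helper is my own, not a standard API) that catches the stack overflow a non-optimizing engine throws:

```javascript
// SUCC repeated here so the snippet is self-contained.
function SUCC(n) {
  var tco = function (n, acc) {
    if (n == 0) { return acc; } else { return tco(n - 1, acc + 1); }
  };
  return tco(n, 1);
}

// Returns true if SUCC(n) completes, false if the stack blows.
function hasTCO(n) {
  try {
    SUCC(n);
    return true;
  } catch (e) {
    return false; // RangeError: Maximum call stack size exceeded
  }
}
```

On an engine without TCO, `hasTCO(250000)` reports `false`.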

Babel is a transpiler transforming JavaScript 6 code to JavaScript 5 code so that the new features of JavaScript 6 may be utilized in legacy environments not supporting these features.

Using Babel the above `SUCC` function is transformed to the following:

```javascript
function SUCC(n) {
  var tco = (function (_tco) {
    function tco(_x, _x2) {
      return _tco.apply(this, arguments);
    }
    tco.toString = function () {
      return _tco.toString();
    };
    return tco;
  })(function (n, acc) {
    if (n == 0) {
      return acc;
    } else {
      return tco(n - 1, acc + 1);
    }
  });
  return tco(n, 1);
}
```

Sigh. This code is useless—it’s the same code we had before rewritten in a more confusing way. It does *not* implement TCO as no loop has been introduced. Without the loop we suffer the same stack overflow we experienced using the original code without TCO. I thought Babel was supposed to implement TCO?

Turns out we can get Babel to generate the code we want if we tweak our code a little.

```javascript
function SUCC(n, acc=1) {
  if (n == 0) {
    return acc;
  } else {
    return SUCC(n-1, acc+1);
  }
}
```

See the difference? I’m using another new JavaScript 6 feature, default function parameters. A call of the form `SUCC(n)` is equivalent to calling `SUCC(n, 1)`. The initial caller isn’t inconvenienced with having to specify an initial accumulator value, which is a leaky abstraction. What difference does this make? Let’s take a look at what Babel does with this code:

```javascript
function SUCC(_x7) {
  var _arguments = arguments;
  var _again = true;
  _function: while (_again) {
    acc = undefined;
    _again = false;
    var n = _x7;
    var acc = _arguments[1] === undefined ? 1 : _arguments[1];
    if (n == 0) {
      return acc;
    } else {
      _arguments = [_x7 = n - 1, acc + 1];
      _again = true;
      continue _function;
    }
  }
}
```

Notice we have our loop – a `while` loop:

`_function: while (_again) {`

The `_again` loop control variable is superfluous in this context; I’m supposing it’s an artifact of Babel’s code generation which may be useful in other contexts. Other than that the code is very straightforward to read. Better yet, it produces a true TCO implementation that runs much faster than using a trampoline!
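To see the payoff, the transpiled `SUCC` (repeated from above) handles large inputs without overflowing the stack:

```javascript
// Babel's transpiled output, repeated so the snippet is self-contained.
function SUCC(_x7) {
  var _arguments = arguments;
  var _again = true;
  _function: while (_again) {
    acc = undefined;
    _again = false;
    var n = _x7;
    var acc = _arguments[1] === undefined ? 1 : _arguments[1];
    if (n == 0) {
      return acc;
    } else {
      _arguments = [_x7 = n - 1, acc + 1];
      _again = true;
      continue _function;
    }
  }
}

SUCC(250000); // → 250001, no stack overflow
```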

Babel is unable to optimize mutual recursion. This means the classic `even`/`odd` implementation won’t be optimized:

```javascript
function even(n) {
  if (n == 0) {
    return true;
  } else {
    return odd(n - 1);
  }
}

function odd(n) {
  if (n == 0) {
    return false;
  } else {
    return even(n - 1);
  }
}
```

This code is identical before and after Babel transpilation.
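When mutual recursion matters, the trampoline from the earlier post still works. A minimal sketch (the `trampoline`, `_even`, and `_odd` names are my own):

```javascript
// Drive a thunk-returning function to completion in a loop.
function trampoline(fn) {
  return function () {
    var result = fn.apply(this, arguments);
    while (typeof result === "function") {
      result = result(); // unwrap the next thunk
    }
    return result;
  };
}

// The mutually recursive pair returns thunks instead of calling directly.
function _even(n) { return n == 0 ? true : function () { return _odd(n - 1); }; }
function _odd(n) { return n == 0 ? false : function () { return _even(n - 1); }; }

var even = trampoline(_even);
var odd = trampoline(_odd);

even(250000); // → true, no stack overflow
```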

With Babel we can get TCO for direct recursion without having to resort to the complications of using trampolines. We do have to complicate our development workflow a little bit to accommodate Babel, but Babel makes that easy by integrating with the most commonly used web development workflow tools, one of which you’re likely to be using already for your project.

Happy hacking!

Functional JavaScript — Memoization, Part I

Functional JavaScript — Memoization, Part II

Functional JavaScript — Tail Call Optimization and Trampolines

Functional JavaScript — Currying

This post takes a break from *Functional JavaScript* and has a little fun making music. We’re going to be using Clojure, a Lisp language for the JVM, so that we can utilize the JVM’s MIDI implementation. No experience with music or MIDI is required though a familiarity with Clojure or any other Lisp is helpful.

I’m using Clojure for its functional similarities to JavaScript—the syntax of the languages is different but the underlying programming philosophies are similar. For this post I’m assuming you already have Clojure and Leiningen installed on your system. See Clojure Quick Start for everything you need to get them installed and running.

Once you have everything installed you can create a new *midi-sequencer* project by executing

`lein new app midi-sequencer`

Once your project is created all your source will be contained in the file *core.clj*. To run your program execute

`lein run` *args*

where we’ll be passing the instrument number, 0-127, as an argument.

Pro tip: you can execute

`lein repl`

to load your code into a REPL for ad-hoc programming and exploring.

Alright! Now you have everything you need to get started so let’s go!

We need to get briefly acquainted with the following MIDI concepts:

- synthesizer
- think of it as your instrument, this is what we use to make music
- channel
- our synthesizer has 16 channels, 0-15, each acting as its own instrument
- instrument
- each channel may have one of 128 instruments, 0-127
- note
- the pitch to play, there’s 128 available, 0-127, with middle C at 60 and the 440 Hz middle A at 69
- duration
- how long to play a pitch, measured here in milliseconds
- volume
- how loud to play the pitch, from 0 to 127

The basic idea then is to obtain and initialize the synthesizer, select the desired channel to use, select an instrument into that channel and then use that channel to start playing notes for a specified duration and volume.

Let’s look at the code:

```clojure
(defn getSynthPlayer [channelNbr instrumentNbr]
  "Initialize synthesizer and return play function"
  (let [synth (javax.sound.midi.MidiSystem/getSynthesizer)]
    (do
      (.open synth) ; Open synth before using
      (let [channels (.getChannels synth)
            instruments (.. synth getDefaultSoundbank getInstruments)]
        (do
          (let [channel (nth channels channelNbr)
                instrument (nth instruments instrumentNbr)]
            (println "Instrument" instrumentNbr "is" (.getName instrument))
            (.loadInstrument synth instrument)
            (.programChange channel instrumentNbr) ; Lots of blogs never mentioned this!
            (fn [volume note duration] ; play function
              (do
                (.noteOn channel note volume)
                (Thread/sleep duration)
                (.noteOff channel note)))))))))
```

We start off getting the synthesizer using `javax.sound.midi.MidiSystem/getSynthesizer` and then immediately `open` it. Next we get the set of channels provided by the synthesizer, from which we select our desired channel, and we do likewise for the instrument (the synthesizer provides a `getDefaultSoundbank` which we use to get the available instruments). We load the instrument into the synthesizer and then, most importantly, invoke `programChange` on the channel, which activates the instrument for that channel. If you omit this step the channel will use the default instrument rather than the instrument you intended.

Lines 14–18 then comprise the `play` function we’re returning: `getSynthPlayer` returns a function allowing us to play music using the specified instrument on the selected channel of the synthesizer. This `play` function takes 3 parameters:

- volume
- note
- duration

as discussed in the MIDI Concepts section above.

The implementation of the `play` function is simple: invoke the channel’s `noteOn` method specifying the *note* and *volume*, sleep for the desired *duration*, and then invoke the channel’s `noteOff` method for the specified *note*. We need to specify the *note* to `noteOff` because it’s possible for the channel to play several notes simultaneously, though I won’t be demonstrating that capability in this post.

Here’s the code we need to play music:

```clojure
(defn play [synthPlayer accVolume seqNotes]
  "Play note sequence at specified starting volume on player obtained via getSynthPlayer"
  (if (or (nil? seqNotes) (empty? seqNotes))
    nil
    (let [[note duration {:keys [vol_adj] :or {vol_adj 0}}] (first seqNotes)
          volume (+ accVolume vol_adj)]
      (do
        (synthPlayer volume note duration) ; invoke function returned from getSynthPlayer
        (recur synthPlayer volume (rest seqNotes))))))
```

The music is stored in a list of vectors containing the note, duration, and an optional map specifying the volume adjustment. For example the chromatic scale is defined as:

```clojure
(def chromatic-scale
  '([60 250] [62 250 {:vol_adj +2}] [64 250 {:vol_adj +2}] [65 250 {:vol_adj +2}]
    [67 250 {:vol_adj +2}] [69 250 {:vol_adj +2}] [71 250 {:vol_adj +2}] [72 250 {:vol_adj +2}]
    [72 250] [71 250 {:vol_adj -2}] [69 250 {:vol_adj -2}] [67 250 {:vol_adj -2}]
    [65 250 {:vol_adj -2}] [64 250 {:vol_adj -2}] [62 250 {:vol_adj -2}] [60 5000 {:vol_adj -2}]))
```

The volume is continually incremented by 2 when ascending the scale and decremented by 2 when descending the scale. Also note the volume is an *adjustment*, it doesn’t specify the actual volume but adjusts the volume relative to the current volume setting when playing this note (for the musical types out there this is really the *velocity* adjustment, or the *piano*/*forte* adjustment).

The `play` function takes 3 parameters:

- synthPlayer
- accVolume
- seqNotes

The *synthPlayer* is the player returned from `getSynthPlayer`, *accVolume* is the current volume, and *seqNotes* is the sequence of notes to play as discussed above. The `play` function employs the standard idiom of recursing through the list, playing each note in succession, until the list is exhausted.

Take a look at the destructuring form in the `let` binding:

`(let [[note duration {:keys [vol_adj] :or {vol_adj 0}}] (first seqNotes)`

`(first seqNotes)` gets the first vector in the supplied notes sequence. Recall that vector has the following values: `[note duration {:vol_adj vol_adj}]`. This form binds note, duration, and vol_adj—the last being the value mapped to the `:vol_adj` key, or 0 if none is supplied. This is an example of Clojure’s *destructuring*. Much has been written about destructuring so I won’t explore the topic any further; if it’s not obvious how the symbol bindings are being performed then search the literature.

Finally, the last line is the recursive invocation of `play`. Note the `recur` form only requires us to specify the parameters, as it’s known that the current function is what’s being recursed. We recurse with `(rest seqNotes)` to obtain the set of notes excluding the first note, which we just played. In this manner we recurse through the sequence of notes playing each note in turn.

We need a main function:

```clojure
(defn -main
  "Right now we play the chromatic scale using the instrument provided"
  [instrument & args]
  (let [synthPlayer (getSynthPlayer 2 (Integer/parseInt instrument))]
    (play synthPlayer 80 chromatic-scale)))
```

This gives Leiningen an entry point from which to run. Your program can be invoked by:

`lein run` *instrument#:0-127*

Experiment with the different instruments and see which ones you like. On my particular system 0 is a grand piano, 40 is a violin, 50 is the strings section, and 60 is a French horn.

At this point you should have Clojure and Leiningen installed and running on your system and you know some of the basic concepts of MIDI and know how to play simple music. I’ve opened a can of worms—you can probably think of several improvements to this implementation and think of features to add. For example, we have no means for “playing” rests nor do we have a means for tying notes together. It’s also pretty bogus to specify notes by number rather than name and specifying duration by a millisecond value rather than eighth note, quarter note, half note and so forth.

Experiment and make it so! I may improve upon this implementation from time to time in future posts.

I sort of have a tradition (it’s a long story) of playing synthesized Christmas holiday music. Here’s the sequence you need to play *Joy To The World*. Enjoy!

```clojure
(def joy-to-the-world
  '([77 660] [76 495] [74 165] [72 990] [70 330] [69 660] [67 660] [65 990]
    [72 330] [74 990] [74 330] [76 990] [76 330] [77 990]
    [77 330] [77 330] [76 330] [74 330] [72 330] [72 495] [70 165] [69 330]
    [77 330] [77 330] [76 330] [74 330] [72 330] [72 495] [70 165] [69 330]
    [69 330] [69 330] [69 330] [69 330] [69 165] [70 165] [72 990]
    [70 165] [69 165] [67 330] [67 330] [67 330] [67 165] [69 165] [70 990]
    [69 165] [67 165] [65 330] [77 660] [74 330] [72 495] [70 165] [69 330]
    [70 330] [69 660] [67 660] [65 1980]))
```

Simply put, currying is the means for reducing the arity of a function. That’s it. If prior to currying a function accepted *n* parameters then after currying a new function will have been created accepting fewer than *n* parameters. Usually the goal is to reduce the function to a monadic form, i.e. having an arity of 1.

Why would anyone want to do this? The reason usually provided is some languages such as lambda calculus, Haskell, and ML only allow you to define functions accepting one parameter. What if you need a function accepting more than one parameter? Then you’re out of luck—unless you use currying.
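The mechanics are tiny in JavaScript. A sketch using a hypothetical `curry2` helper (not a library API):

```javascript
// Turn a two-parameter function into a chain of one-parameter functions.
function curry2(fn) {
  return function (x) {
    return function (y) {
      return fn(x, y);
    };
  };
}

var add = curry2(function (a, b) { return a + b; });
add(2)(3); // → 5
```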

Consider this implementation of the falling factorial function which is used to efficiently calculate binomial coefficients:

```javascript
function FallingFactorial(x, n) {
  if (n == 0)
    return 1;
  return (x - (n - 1)) * FallingFactorial(x, n-1);
}
```

Observe this implementation has an arity of 2 and is recursively defined. As an aside, you might also observe that `FallingFactorial(n, n) == n!`.
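A couple of sanity checks, with values computed from the definition:

```javascript
// Repeated so the snippet is self-contained.
function FallingFactorial(x, n) {
  if (n == 0) return 1;
  return (x - (n - 1)) * FallingFactorial(x, n - 1);
}

FallingFactorial(5, 2); // → 5 * 4 = 20
FallingFactorial(4, 4); // → 4! = 24
```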

The process of currying is straightforward. To create a monadic implementation of falling factorial we need to create a function accepting *x* as a parameter which returns a new function closed over *x* and accepting *n* as a parameter which computes and returns the result. Look at this example:

```javascript
function CurriedFallingFactorial(x) {
  return function(n) {
    if (n == 0)
      return 1;
    return (x - (n - 1)) * CurriedFallingFactorial(x)(n-1);
  }
}
```

The function `CurriedFallingFactorial` accepts one parameter, *x*, and returns a function accepting one parameter, *n*. Since the returned function refers to *x* in its implementation it’s said to be *closed over* *x*, and as such this kind of function is referred to as a *closure*. The returned function has access to both *x* and *n* and so can complete the implementation and return the desired result, so that

`FallingFactorial(x, n) == CurriedFallingFactorial(x)(n)`

The astute JavaScript developer will recognize that the recursive invocation of `CurriedFallingFactorial` generates a lot of anonymous functions (on the heap, no less!), all closed over the same value of *x*, i.e. all having the exact same implementation. Once you’ve obtained an instance of the returned function closed over *x* you can recursively invoke it rather than continuing to recreate and invoke it, as the following implementation illustrates:

```javascript
function CurriedFallingFactorial(x) {
  return function(n) {
    if (n == 0)
      return 1;
    return (x - (n - 1)) * arguments.callee(n-1);
  }
}
```

`arguments.callee` is the means for obtaining a reference to the currently executing function, which in this case is already closed over *x*.
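Note that `arguments.callee` is disallowed in ES5 strict mode; a named function expression provides the same self-reference without it (a sketch, the name `recur` is my own):

```javascript
function CurriedFallingFactorial(x) {
  // "recur" names the function expression so it can call itself,
  // avoiding both arguments.callee and re-invoking CurriedFallingFactorial.
  return function recur(n) {
    if (n == 0) return 1;
    return (x - (n - 1)) * recur(n - 1);
  };
}

CurriedFallingFactorial(5)(2); // → 20
```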

Why would you ever curry a JavaScript function since it may accept however many parameters as needed? Function adaptation. Functional JavaScript—Memoization, Part I presents a memoization implementation that only works on unary or monadic functions. In that post I said *“There’s an elegant solution for memoizing functions of higher arity that I’ll be demonstrating in a future post.”* This is that post, and currying is the elegant solution.

So how do we begin memoizing `CurriedFallingFactorial`? We first note a rather peculiar property of `CurriedFallingFactorial` as it’s currently implemented:

`CurriedFallingFactorial(x) != CurriedFallingFactorial(x)`

This is not to be confused with the following expression which is true:

`CurriedFallingFactorial(x)(n) == CurriedFallingFactorial(x)(n)`

The reason for this anomaly is what prompted us to implement `CurriedFallingFactorial` more efficiently—every time you call `CurriedFallingFactorial` a newly-created anonymous function closed over *x* is returned, even when provided the same value for *x*. That is why `CurriedFallingFactorial(x) != CurriedFallingFactorial(x)`. We’ll begin our memoization implementation by addressing this anomaly.

```javascript
function MemoizedCurriedFallingFactorial(x) {
  if (MemoizedCurriedFallingFactorial.memoizer == null) {
    MemoizedCurriedFallingFactorial.memoizer = function(x) {
      return function(n) {
        if (n == 0)
          return 1;
        return (x - (n - 1)) * arguments.callee(n-1);
      }
    }.memoize().memoizer; // function(x) memoizer
  }
  // delegate to memoized implementation of function(x)
  return MemoizedCurriedFallingFactorial.memoizer.fn(x);
}
```

I’ve highlighted the heart of the implementation. I create a new anonymous function accepting *x* and then memoize that implementation, which I in turn assign to the *memoizer* property of the `MemoizedCurriedFallingFactorial` function. The implementation of `MemoizedCurriedFallingFactorial` delegates to the memoized implementation of this anonymous function accepting *x*.

As in the previous implementation, the anonymous function accepting *n* is closed over *x*, so we can optimize its recursive invocation by using `arguments.callee` to refer to itself rather than re-invoking `MemoizedCurriedFallingFactorial(x)` to obtain a (memoized) reference to itself.

The upshot of this change is that `MemoizedCurriedFallingFactorial(x) == MemoizedCurriedFallingFactorial(x)`.

We’re not finished though. We need to address the anonymous function accepting *n* that is closed over *x*: we must memoize this implementation as well.

```javascript
function MemoizedCurriedFallingFactorial(x) {
  if (MemoizedCurriedFallingFactorial.memoizer == null) {
    MemoizedCurriedFallingFactorial.memoizer = function(x) {
      var anon = function(n) {
        if (n == 0)
          return 1;
        return (x - (n - 1)) * anon(n-1);
      }.memoize(); // memoize function(n)
      return anon; // return memoized function(n)
    }.memoize().memoizer; // function(x) memoizer
  }
  // delegate to memoized implementation of function(x)
  return MemoizedCurriedFallingFactorial.memoizer.fn(x);
}
```

Once again the interesting code is highlighted. This implementation additionally memoizes the anonymous function accepting *n*. Notice we’ve assigned this anonymous function to the variable *anon* so that we may recursively reference our memoized form in line 7. The anonymous function is closed over both *x* and *anon*.

If you need to memoize a function accepting more than one parameter then consider currying as a function adapter as a means for doing so. Such function adaptation is the primary motivator for currying in JavaScript.

Functional JavaScript — Memoization, Part I

Functional JavaScript — Memoization, Part II

Functional JavaScript — Tail Call Optimization and Trampolines

Functional JavaScript — Tail Call Optimization and Babel

My son wrote a post List Processing in C which he noted can be used down the road to implement a Lisp interpreter. Which got me thinking, how difficult would it be to write a list processor, a la Lisp, in C?

My first challenge was to write in C, since I haven’t done any C development in over 20 years. Things aren’t quite as bad as they sound though—I was an excellent C developer who transitioned from C to C++ and spent about 10 years writing C++ and then transitioned to Java roughly 10 years ago.

The real challenge would be writing without objects—which, as it turned out, would have been very useful. But no objects for me. It was back to manual memory management, which took some getting used to again, especially since you can’t rely on destructors or garbage collectors to take care of that for you. After working in C for a half hour or so it was amazing how all the idioms came back to me. I guess you really never forget a language once you’ve mastered it.

I hadn’t read John McCarthy’s seminal paper on Lisp prior to writing my List processor and I’m by no means a Lisp expert. But I’d already suspected that everything in Lisp starts with CONS.

So what is a CONS?

A CONS is a simple data structure containing two pointers, named CAR and CDR:

See McCarthy’s paper for the explanation of how these pointers got to be so named.

In my first implementation my CAR pointed to a character and my CDR pointed to another CONS or NIL. I could create the string “Hello” like so:

There’s a couple of things that bothered me with this implementation. The first problem is the only kind of complex data structure I could create is a list. In Lisp a list is a form of a CONS, but not all CONSes are lists. A CONS is merely the means for creating lists. In other words the CAR and CDR pointers aren’t restricted in what they can point to. The second problem is this isn’t very symbolic. Lisp is known to be a symbolic programming language, yet there are no symbols here. These problems are too large to ignore even for a toy implementation.

A symbol can be thought of as a label affixed to a data structure containing slots for a value and for a function. A symbol therefore is a means for naming and referring to a value or function in Lisp. The symbol usage context determines which item is being referenced.

So do the CAR and CDR pointers of the CONS point to symbols? No. In Lisp the CAR and CDR pointers can point to CONSes, characters, strings, functions, keywords and a host of other items that are *not* symbols.

A more intriguing question to me, from an implementation standpoint, was whether the value of a symbol could also be a symbol? If so then my implementation could have the CAR and CDR pointers referring to a value structure, which in turn could be a CONS, symbol or simple value type.

So I ran the following in CLISP:

```lisp
(setq a "symbol a")
(setq b "symbol b")
(setf (symbol-function 'a) (lambda () (format t "Symbol A~%")))
(setf (symbol-function 'b) (lambda () (format t "Symbol B~%")))
```

Invoking `(a)` yields “Symbol A” as expected and invoking `(b)` yields “Symbol B” as expected.

Suppose we now do the following:

```lisp
(setf (symbol-value 'a) 'b)
(funcall (symbol-function (symbol-value 'a)))
```

What will get printed? If you guessed “Symbol B” then you’re correct. This means we can assign a symbol value to be another symbol. A symbol is a kind-of value so-to-speak. Meaning my CAR and CDR pointers should be pointing to values, of which a symbol (and CONS) is merely a kind-of value.

Implementation:

```c
typedef struct _VALUE {
  union {
    long integer;
    double real;
    char character;
    STRING string;
    void* cons;
    void* symbol;
  } value;
  int discriminant;
} VALUE, *VALUE_PTR;
```

There are several helper functions for creating new VALUEs:

```c
extern VALUE_PTR MAKE_NIL_VALUE();
extern VALUE_PTR MAKE_INTEGER_VALUE(long value);
extern VALUE_PTR MAKE_REAL_VALUE(double value);
extern VALUE_PTR MAKE_CHARACTER_VALUE(char value);
extern VALUE_PTR MAKE_STRING_VALUE(STRING value);
extern VALUE_PTR MAKE_CONS_VALUE(void* value);
extern VALUE_PTR MAKE_SYMBOL_VALUE(void* value);
```

Freeing a VALUE (remember in C we’re responsible for our own memory management):

```c
extern void FREE_VALUE(VALUE_PTR v);
```

Printing a VALUE:

```c
extern void PRINT_VALUE(VALUE_PTR v);
```

And for obtaining a VALUE and determining the type of a VALUE:

```c
extern long GET_INTEGER_VALUE(VALUE_PTR v);
extern double GET_REAL_VALUE(VALUE_PTR v);
extern char GET_CHARACTER_VALUE(VALUE_PTR v);
extern STRING GET_STRING_VALUE(VALUE_PTR v);
extern void* GET_CONS_VALUE(VALUE_PTR v);
extern void* GET_SYMBOL_VALUE(VALUE_PTR v);

extern BOOL IS_VALUE_NIL(VALUE_PTR v);
extern BOOL IS_VALUE_INTEGER(VALUE_PTR v);
extern BOOL IS_VALUE_REAL(VALUE_PTR v);
extern BOOL IS_VALUE_CHARACTER(VALUE_PTR v);
extern BOOL IS_VALUE_STRING(VALUE_PTR v);
extern BOOL IS_VALUE_CONS(VALUE_PTR v);
extern BOOL IS_VALUE_SYMBOL(VALUE_PTR v);
```

There’s a couple of things to notice in this implementation. VALUES are created using the MAKE_*_VALUE functions. These functions handle the allocation of the VALUE object, setting the desired value, and setting the type discriminant. The type discriminant is how we subsequently determine the *kind* of value this is.

Since symbols have values and the CAR and CDR pointers of CONSes point to values, the definitions of these items need the definition of VALUE. We therefore have a circular reference: I need a VALUE to define a SYMBOL, but I need a SYMBOL to define a VALUE. The same issue is encountered when defining CONS. I circumvent these circular references by using void* for the SYMBOL and CONS in the implementation of VALUE.

Finally, note that a PRINT_VALUE function is provided for printing VALUES. This function has the responsibility for printing the values in such a manner that a REPL could process the output as input and reconstruct the VALUE. There will be more to say about this later.

Once we have values, symbols are straightforward to implement:

```c
typedef struct _SYMBOL {
  STRING symbolName;     // symbol name, a means for referencing the symbol
  VALUE_PTR symbolValue; // symbol value, defaults to being same as name
} SYMBOL, *SYMBOL_PTR;
```

Notice that I haven’t defined a slot for a function. This implementation is only a list processor at this time; should I later want to define and execute functions I’d have to come back and add a function slot to this structure.

I’ve defined several methods for creating and working with symbols:

```c
extern VALUE_PTR MAKE_SYMBOL(STRING symbolName);
extern VALUE_PTR SETQ(VALUE_PTR symbol, VALUE_PTR symbolValue);
extern STRING SYMBOL_NAME(VALUE_PTR symbol);
extern VALUE_PTR SYMBOL_VALUE(VALUE_PTR symbol);
extern void PRINT_SYMBOL(VALUE_PTR symbol);
extern void FREE_SYMBOL(SYMBOL_PTR symbol);
```

Notice that since a SYMBOL is-a VALUE, MAKE_SYMBOL returns a VALUE_PTR, not a SYMBOL_PTR. Likewise all the other helper methods accept a VALUE_PTR and not a SYMBOL_PTR. The downside to this approach is these methods must check that an actual SYMBOL has been provided, however this is easy to accomplish using the IS_VALUE_SYMBOL method we saw earlier. Plus only the symbol implementation methods ever need to work with the innards of a SYMBOL, all usage outside of the implementation is working with a VALUE referencing a SYMBOL. This greatly simplifies the API and I believe leads to fewer usage errors.

Now we’re finally ready to implement the CONS:

```c
typedef struct _CONS {
  VALUE_PTR car;
  VALUE_PTR cdr;
  BOOL _isList;
} CONS_CELL, *CONS_PTR;
```

That’s it!

One of the decisions I made that simplifies the implementation of CONS is to create a NIL value. This means a CAR or CDR is never NULL—it instead may refer to a NIL value. This greatly simplifies things: you know that it’s always an error to encounter a NULL pointer. It also allows you to properly handle lists of NIL—which, yes, Lisp allows.

The other item to note in this implementation is the _isList flag. A list is a CONS whose CDR points to another CONS or NIL, which is the last CONS in the list. For example we saw above how “Hello” was implemented as a list. But this also means “ello”, “llo”, “lo” and “o” are lists too. Rather than walk the list to determine if the criteria are met for being a list, a flag is kept instead. This makes the implementation faster.

This is important information to have because lists and CONSes are printed differently. Since the evaluator takes as input what the print puts as output we need to ensure we’re handling the printing of lists and CONSes appropriately.

CONS also has methods for printing and freeing:

```c
extern void PRINT_CONS(VALUE_PTR cons);
extern void FREE_CONS(CONS_PTR cons);
```

Since a CONS is-a VALUE, when printing a VALUE if it determines the value is a CONS it invokes the PRINT_CONS method. The same is true when freeing a VALUE, if it’s a CONS it will free the CONS. This makes working with CONSes uniform and less error-prone: you’re always working with a VALUE.

In Lisp, the function CONS is used to create a CONS, the function CAR is used to retrieve the CAR of the CONS, and the function CDR is used to retrieve the CDR of the CONS. We’re now ready to implement these methods:

```c
/* Create a new CONS given both the CAR and CDR */
VALUE_PTR CONS(VALUE_PTR car, VALUE_PTR cdr) {
  assert(car != NULL);
  assert(cdr != NULL);

  CONS_PTR cons = (CONS_PTR) malloc(sizeof(CONS_CELL));
  cons->car = car;
  cons->cdr = cdr;

  // determine if this CONS is a LIST
  if (IS_VALUE_NIL(cdr)) {
    // if CDR is NIL, then this CONS is a LIST
    cons->_isList = TRUE;
  } else if (IS_VALUE_CONS(cdr)) {
    // appending a CONS to a LIST makes a LIST
    cons->_isList = ((CONS_PTR)GET_CONS_VALUE(cdr))->_isList;
  } else {
    // CDR is not NIL and is not a CONS, this is not a LIST
    cons->_isList = FALSE;
  }

  return MAKE_CONS_VALUE(cons);
}

/* Return the CAR of the CONS */
VALUE_PTR CAR(VALUE_PTR cons) {
  assert(cons != NULL);
  if (IS_VALUE_CONS(cons)) {
    return ((CONS_PTR)GET_CONS_VALUE(cons))->car;
  } else {
    return NULL;
  }
}

/* Return the CDR of the CONS */
VALUE_PTR CDR(VALUE_PTR cons) {
  assert(cons != NULL);
  if (IS_VALUE_CONS(cons)) {
    return ((CONS_PTR)GET_CONS_VALUE(cons))->cdr;
  } else {
    return NULL;
  }
}
```

We now have the implementation of the basic Lisp functions CONS, SETQ, CAR and CDR along with the means for creating and printing SYMBOLS and VALUES. We’re now set to do a lot of list processing!

The following code samples demonstrate some simple list processing. The graph after each sample depicts what the CONSes look like in memory.

```c
// (1 2 3)
VALUE_PTR list = CONS(MAKE_SYMBOL("1"),
                      CONS(MAKE_SYMBOL("2"),
                           CONS(MAKE_SYMBOL("3"), MAKE_NIL_VALUE())));
PRINT_VALUE(list);
printf("\n");
```

```c
// (2 3)
PRINT_VALUE(CDR(list));
printf("\n");
```

```c
// (3)
PRINT_VALUE(CDR(CDR(list)));
printf("\n");
```

```c
// NIL
PRINT_VALUE(CDR(CDR(CDR(list))));
printf("\n");
```

Simply yields the value NIL.

We can retrieve any of the values in the list as the following code demonstrates:

```c
// 1
PRINT_VALUE(CAR(list));
printf("\n");

// 2
PRINT_VALUE(CAR(CDR(list)));
printf("\n");

// 3
PRINT_VALUE(CAR(CDR(CDR(list))));
printf("\n");
```

This is starting to look a lot like Lisp!

We can get more advanced in our list processing as the following samples demonstrate. Again, the graph after each sample depicts what the CONSes look like in memory.

```c
// (NIL)
PRINT_VALUE(CONS(MAKE_NIL_VALUE(), MAKE_NIL_VALUE()));
printf("\n");
```

Perhaps now you’re beginning to understand why using a NIL value rather than a NULL pointer simplified the implementation and usage?

```c
// ((1))
PRINT_VALUE(CONS(CONS(MAKE_SYMBOL("1"), MAKE_NIL_VALUE()), MAKE_NIL_VALUE()));
printf("\n");
```

```c
// (((1)))
PRINT_VALUE(CONS(CONS(CONS(MAKE_SYMBOL("1"), MAKE_NIL_VALUE()), MAKE_NIL_VALUE()), MAKE_NIL_VALUE()));
printf("\n");
```

```c
// (1 (2 (3)))
VALUE_PTR embeddedList = CONS(MAKE_SYMBOL("1"),
                              CONS(CONS(MAKE_SYMBOL("2"),
                                        CONS(CONS(MAKE_SYMBOL("3"), MAKE_NIL_VALUE()),
                                             MAKE_NIL_VALUE())),
                                   MAKE_NIL_VALUE()));
PRINT_VALUE(embeddedList);
printf("\n");
```

This is a bit of a mind-bender but look at the code and look at the graph and you should start seeing how this is working.

See if you understand why the following code works. Hint: Use the definition of a list and the previous graph to follow the CAR and CDR pointers:

```c
// (2 (3))
PRINT_VALUE(CAR(CDR(embeddedList)));
printf("\n");

// (3)
PRINT_VALUE(CAR(CDR(CAR(CDR(embeddedList)))));
printf("\n");
```

Our list processor is complete and it has the same behavior as the list processor in Lisp!

It’d be nice to have a REPL (read-evaluate-print-loop) for entering expressions. This way you could enter CONSes in their printed form, which is really convenient and much less confusing than entering them programmatically. For example it would be much easier to enter `(1 (2 (3)))` for embeddedList from the above samples. Look at how much simpler that is!

The second problem is a lack of expressiveness. Once a CONS is created there’s no means for modifying it. On the one hand this makes the data structures immutable, but on the other hand there’s a lack of flexibility. We’d really like to have both, as implemented for example by Clojure.

Of course there’s no way to define and evaluate functions, meaning this is no Lisp. I’ve been focusing solely on the list processor aspect of Lisp. Adding the ability to define and evaluate functions would not be a trivial undertaking, but you would have a fully-functional Lisp at that point. It’s something to think about.

All the source for this list processor is available on GitHub.

Happy Hacking!

The JavaScript Memoization series introduced a recursive Fibonacci sequence generator. Memoization, a method of caching results, was used to enhance performance. Performance can also be enhanced by tail call optimization.

Tail call optimization is a compiler feature that replaces recursive function invocations with a loop. Eliminating those invocations eliminates the need to set up their stack frames, saving both the time spent creating frames and the stack space they would occupy. Both time and space are saved.

The developer must write functions in a manner that facilitates tail call optimization. This means the recursive call must be the last operation the function performs. Let’s take a look at the traditional recursive implementation of the factorial function to see what this means.

function factorial(n) {
  if (n == 0) {
    return 1;
  } else {
    return n * factorial(n-1);
  }
}

The *factorial* function invokes the multiplication operator `*` last, not *factorial*: the result of the recursive call must still be multiplied by *n* after it returns. This implementation cannot be optimized.

function factorial(n) {
  function recur(n, acc) {
    if (n == 0) {
      return acc;
    } else {
      return recur(n-1, n*acc);
    }
  }
  return recur(n, 1);
}

This *factorial* implementation features an inner function, *recur*, acting as a delegate for *factorial*. The *recur* function is written for tail call optimization: the last function invoked is *recur* itself. The compiler is thus able to produce code that invokes *recur* in a loop until the base case returns a result.
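Conceptually, the code a TCO-capable compiler would produce for this looks something like the following hand-written sketch (my own illustration, not actual compiler output):

```javascript
// What recur(n, acc) becomes under tail call optimization: each
// recursive "call" simply rebinds the parameters and loops.
function factorial(n) {
  var acc = 1;
  while (n != 0) {
    acc = n * acc; // the n*acc argument of recur(n-1, n*acc)
    n = n - 1;     // the n-1 argument
  }
  return acc;
}
```

No stack frames accumulate beyond the initial call, which is where the time and space savings come from.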

An extra parameter has been added to *recur*: *acc*. This is the accumulator, holding the value of the computation accumulated to this point. Let’s trace through the evaluation of *factorial(5)*:

**factorial(5)** → recur(5,1)

recur(5,1) → recur(4,5)

recur(4,5) → recur(3,20)

recur(3,20) → recur(2,60)

recur(2,60) → recur(1,120)

recur(1,120) → recur(0,120)

recur(0,120) → **120**

All recursive functions must have a **base case** terminating the recursion. The base case for the factorial function is when **n=0** in which case 1 is returned. Likewise the base case of *recur* is also when **n=0**, but instead of returning 1 the accumulated value is returned.

Use this strategy for creating optimized recursive functions:

- Create an inner recursive function having an additional *accumulator* parameter
- The base case of the inner recursive function is to return the *accumulator*
- The recursive invocation provides an updated value for the *accumulator*
- The outer function delegates to the inner recursive function using appropriate initial values
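To see the strategy in action on something other than factorial, here are the four steps applied to summing the integers 1 through n (a supplementary example of my own):

```javascript
function sumTo(n) {
  // Step 1: inner recursive function with an extra accumulator parameter
  function recur(n, acc) {
    if (n == 0) {
      return acc;                   // Step 2: base case returns the accumulator
    } else {
      return recur(n - 1, acc + n); // Step 3: recursive call updates the accumulator
    }
  }
  return recur(n, 0);               // Step 4: delegate with the initial value
}
```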

Follow this strategy and your recursive functions can be optimized, providing significant performance improvements. That is, as long as your runtime provides tail call optimization. Which JavaScript doesn’t.

How can we optimize recursive JavaScript functions without tail call optimization? One option is to re-write your recursive function in loop form.

function factorial(n) {
  var acc = 1;
  for (var i=n; i > 0; i--) {
    acc *= i;
  }
  return acc;
}

Thus implementing the tail call optimization by hand. The downside is the loss of the elegance and expressiveness of the recursive implementation.

Another option is to use a *trampoline*.

Wikipedia defines a *trampoline* as *“a loop that iteratively invokes thunk-returning functions (continuation-passing style). A single trampoline is sufficient to express all control transfers of a program; a program so expressed is trampolined, or in trampolined style; converting a program to trampolined style is trampolining. Trampolined functions can be used to implement tail-recursive function calls in stack-oriented programming languages.”*

That’s a mouthful! Trampolines are actually very easy to understand.

Here’s a simple implementation of a trampoline:

function trampoline(f) {
  while (f && f instanceof Function) {
    f = f.apply(f.context, f.args);
  }
  return f;
}

The trampoline takes a function and repeatedly invokes it, replacing it with its return value, until something other than a function is returned. One is tempted to use the trampoline like so:

function factorial(n) {
  function recur(n, acc) {
    if (n == 0) {
      return acc;
    } else {
      return recur(n-1, n*acc);
    }
  }
  return trampoline(recur(n, 1));
}

But this doesn’t produce the desired optimization. Why? Suppose *factorial* is invoked with **n=5**. What value is passed to the trampoline? It’s not what you expect.

`recur(5, 1)`

is a call *expression* evaluating to the value 120. The trampoline is invoked with the value 120—not at all what we intended! This is essentially the same tail call optimized implementation of factorial, except tail call optimization is unavailable in the JavaScript environment. What went wrong?

We intended to invoke trampoline with a function reference, not a function result. While the following would work

`return trampoline(function() { return recur(n, 1); });`

it’s unwieldy. We need a means for obtaining a reference to a function invocation, complete with all the parameters, so we can invoke the function at a later point in time. Though it wasn’t intended for this purpose, the **Function.bind** function fits the bill nicely and thus improves our factorial implementation:

function factorial(n) {
  function recur(n, acc) {
    if (n == 0) {
      return acc;
    } else {
      return recur(n-1, n*acc);
    }
  }
  return trampoline(recur.bind(null, n, 1));
}

Now we call trampoline with the expression

`recur.bind(null, n, 1)`

The *bind* function returns a new function that, when invoked, calls the original function, in this case *recur*, using the specified calling context (the *this* pointer, which here is *null* since we’re not invoking the function on an object instance) and the given list of parameters. The result of recur(n, 1) is returned when this bound function is invoked. This is exactly what we intended to pass to the trampoline—a function!
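If *bind* is unfamiliar, here it is in isolation (a small illustrative example of my own):

```javascript
function add(a, b) {
  return a + b;
}

// bind does not call add -- it returns a new function (a thunk) that,
// when later invoked, calls add with this = null and arguments 2 and 3.
var thunk = add.bind(null, 2, 3);

thunk instanceof Function; // true: nothing has been computed yet
thunk();                   // 5: the deferred call finally happens
```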

This implementation is still not yet optimized, however. To see this suppose we invoke *factorial* with the value **n=5**. When the trampoline invokes the bound function, evaluating recur(5, 1), what result is returned? It’s not a function—recur(5, 1) still recurses all the way down through recur(4, 5) and beyond before returning 120. We only got one bounce.

This is easily fixed:

function factorial(n) {
  function recur(n, acc) {
    if (n == 0) {
      return acc;
    } else {
      return recur.bind(null, n-1, n*acc);
    }
  }
  return trampoline(recur.bind(null, n, 1));
}

Now the trampoline will provide the desired effect and will continue looping until recur returns the accumulated value.
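Putting the pieces together, each bounce now performs one multiplication and no stack frames pile up. Even an argument large enough to overflow the number type (the result becomes Infinity) no longer overflows the stack:

```javascript
function trampoline(f) {
  while (f && f instanceof Function) {
    f = f.apply(f.context, f.args);
  }
  return f;
}

function factorial(n) {
  function recur(n, acc) {
    if (n == 0) {
      return acc;
    } else {
      return recur.bind(null, n - 1, n * acc); // return a bounce, don't recurse
    }
  }
  return trampoline(recur.bind(null, n, 1));
}

factorial(5);      // 120
factorial(100000); // Infinity -- numerically overflowed, but no stack overflow
```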

Can mutually recursive functions be optimized? Consider the mutually recursive functions even/odd:

function even(n) {
  if (n == 0) {
    return true;
  } else {
    return odd(n-1);
  }
}

function odd(n) {
  if (n == 0) {
    return false;
  } else {
    return even(n-1);
  }
}

Here’s a quick look at the execution of *even*:

**even(5)** → odd(4)

odd(4) → even(3)

even(3) → odd(2)

odd(2) → even(1)

even(1) → odd(0)

odd(0) → **false**

And another:

**even(6)** → odd(5)

odd(5) → even(4)

even(4) → odd(3)

odd(3) → even(2)

even(2) → odd(1)

odd(1) → even(0)

even(0) → **true**

One is tempted to write the following to benefit from the trampoline:

function even(n) {
  if (n == 0) {
    return true;
  } else {
    return trampoline(odd.bind(null, n-1));
  }
}

function odd(n) {
  if (n == 0) {
    return false;
  } else {
    return trampoline(even.bind(null, n-1));
  }
}

Remember the intent of the trampoline was to replace recursive function invocations with a loop. This implementation does not achieve this. Instead the trampoline is recursively invoked. Use the following implementation instead:

function _even(n) {
  if (n == 0) {
    return true;
  } else {
    return _odd.bind(null, n-1);
  }
}

function _odd(n) {
  if (n == 0) {
    return false;
  } else {
    return _even.bind(null, n-1);
  }
}

function even(n) {
  return trampoline(_even.bind(null, n));
}

function odd(n) {
  return trampoline(_odd.bind(null, n));
}

Now the implementation is appropriately optimized.
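A quick sanity check shows the payoff: an argument that overflows the stack with the naive mutual recursion runs fine through the trampoline (repeating the trampoline and even/odd definitions above so the sample is self-contained):

```javascript
function trampoline(f) {
  while (f && f instanceof Function) {
    f = f.apply(f.context, f.args);
  }
  return f;
}

function _even(n) {
  if (n == 0) { return true; }
  else { return _odd.bind(null, n - 1); }
}

function _odd(n) {
  if (n == 0) { return false; }
  else { return _even.bind(null, n - 1); }
}

function even(n) { return trampoline(_even.bind(null, n)); }
function odd(n)  { return trampoline(_odd.bind(null, n)); }

even(100001); // false -- 100,001 bounces, and the stack stays flat
odd(100001);  // true
```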

The simplicity of the implementation of *even*/*odd* allows us to directly measure the performance of the trampoline and compare that performance to the recursive invocation.

This graph clearly shows trampolines are not an optimization—tail call optimization is still badly needed in the JavaScript environment. What this graph doesn’t show is that after 30,000 recursive invocations the browser hung; to the point it had to be forcibly shut down. Meanwhile the trampoline continued bouncing through hundreds of thousands of invocations. There are no practical limits to the number of bounces a trampoline can make.

Why does a trampoline not improve performance? A couple of observations should be made about this trampoline implementation:

- The number of function invocations is not reduced. The trampoline implementation has essentially the same number of function invocations as the traditional recursive implementation. A tail call optimization would eliminate these function invocations.
- We’ve traded the work of creating stack frames for the work of creating function bindings. Both operations consume time and memory. Tail call optimization creates no function references at all, since it rewrites the function in place, transforming the recursion into a loop.

What can we conclude from this? Be skeptical of claims that trampolines improve performance. Demand proof.

Trampolines are still useful even without performance improvement. Your application will run correctly in all operating environments and you’ll encounter fewer hard-to-diagnose errors. Your code is also written to take advantage of tail call optimization when (if?) it’s provided by the JavaScript environment.

Happy hacking!

Functional JavaScript — Memoization, Part I

Functional JavaScript — Memoization, Part II

Functional JavaScript — Currying

Functional JavaScript — Tail Call Optimization and Babel

- Are you a Mac or PC? It’s silly, but fanboys everywhere want to know.
- Do you support the Open Government Initiative?
- Do you believe patents should apply to software? If not, how would you propose to unwind existing patents?
- Should software developers require licensing for creating critical software such as:
- flight control systems/avionics
- medical monitors/scanners/devices
- automotive control systems
- anywhere humans can be killed/injured by malfunctioning software

- Do you believe we’re free to rip content that we’ve purchased, audio CDs and DVDs?
- Do you believe we’re free to sell our digital purchases? Can we bequeath them when we die?
- Should we provide an educational grant to an American student studying in a technical field for every H1B visa issued?
- Should the U.S. invest in broadband infrastructure to ensure all citizens have access to high-speed, low-cost data access?
- Are open source software contributors indemnified from tort lawsuits?
- What are your thoughts on cyber warfare? Is it just to use bombs, guns and missiles in response to a cyber attack?
- Do you believe the Federal Government is free to eavesdrop on any OTA transmission, for example cellular calls and data transfers, and any information provided to a public network, for example Facebook, without the need to obtain a warrant? In other words what are your views on privacy in the digital age especially as it relates to federal investigations?
- Some groups have advocated that Internet access is a fundamental human right. Do you therefore believe that the suspension of Internet services as part of a sentence for a crime committed is in violation of the 8th amendment of the U.S. constitution?
- Do you believe the public’s right to Internet access may be suspended or restricted during times of war or civil unrest?

What questions would *you* have liked to have had answered prior to going into the voting booth next Tuesday? Share your thoughts in the comments.

In Part 1 of this series we looked at a simple function generator for memoizing functions. Recall memoized functions are functions that cache their results; once the value is determined for a particular input the value is placed into a cache and subsequent calls to the function using the same input return the previously cached value — thus *eliding* the function call. Using a fibonacci sequence generator we examined how memoized functions can greatly improve program performance.

Let’s take another look at the Fibonacci function presented in the previous post:

// retrieve nth fibonacci number from sequence (0-relative)
// recursive solution for fibonacci calculation is not efficient
function fibonacci(idx) {
  if (idx == 0) return 0;
  if (idx == 1) return 1;
  return fibonacci(idx-2) + fibonacci(idx-1);
}

What’s the problem here? Let’s suppose we’ve generated a memoized version of this fibonacci function, called *mfibonacci*, and we call *mfibonacci(10)*. What’s happening? At a high level we’re going to return

*fibonacci(8) + fibonacci(9)*

Do you see the problem? We’re not calling memoized versions of our fibonacci function in our recursive generation! In calculating *mfibonacci(10)* we’ve calculated *fibonacci(9), fibonacci(8), fibonacci(7), fibonacci(6), fibonacci(5), fibonacci(4), fibonacci(3), fibonacci(2)* and none of these values have been cached. If after calling *mfibonacci(10)* we were to subsequently call *mfibonacci(9)*, we’d have to start all over again. The fact we’ve already calculated *fibonacci(9)* while computing *mfibonacci(10)* has been lost.

This particular implementation of *fibonacci* compounds the problem:

*fibonacci(n) = fibonacci(n-2) + fibonacci(n-1)*

When calling *mfibonacci(10)* both *fibonacci(8)* and *fibonacci(9)* are called. We’d like the result of *fibonacci(8)* to be cached and available when calling *fibonacci(9)*. This implementation of *fibonacci* was deliberately chosen for this pathological case.

How do we memoize recursive functions?

We need to backtrack a little bit. Previously we used the following function generator for memoizing functions:

// Return a memoizing version of a function f
function memoize(f) {
  if (f instanceof Function) {
    // only memoize functions of arity 1, otherwise return function
    if (f.length == 0 || f.length > 1) return f;
    var fn = function(x) {
      if (fn.memoizer.values[x] == null) {
        fn.memoizer.values[x] = f.call(f,x);
      }
      return fn.memoizer.values[x];
    };
    fn.memoizer = { values : [] };
    return fn;
  } else {
    return f; // garbage in, garbage out I always say!
  }
}

While this function generator works, it’s less than ideal. To see what’s wrong consider the following contrived scenario:

memoize(fibonacci)(10);
memoize(fibonacci)(10);

Here the function generator *memoize* is creating two new and separate memoized versions of *fibonacci* — each having its own cache! That’s not what we want. Instead we’d like the function’s memoization to be a property of the function itself. This way everyone getting the memoized form of a function is getting the same implementation and, more importantly, the same cached set of values.

It’s extremely useful to make a function’s memoized form a property of the function itself. In JavaScript functions are first-class objects: they can be assigned to variables, stored in properties, passed as function arguments and so forth. There are many ways you can get a reference to a function, and it would be convenient to get the memoized form of the function in a uniform way.

How do we do that?

The solution is to place the memoizer function, along with the property referencing the memoized form of the function, in the function prototype. Recall that in JavaScript whatever functions and properties exist in an object’s prototype also exist in all instances of that object. You may also recall that in JavaScript all functions are objects—all being instances of the base *Function* object. Whatever methods and properties we add to *Function.prototype* will therefore exist for all functions.
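A tiny demonstration of the principle (the *describe* method is a made-up example, not part of the memoization code):

```javascript
// Adding a method to Function.prototype makes it available on every
// function -- named or anonymous -- since all functions are instances
// of Function.
Function.prototype.describe = function() {
  return "a function taking " + this.length + " argument(s)";
};

function add(a, b) { return a + b; }

add.describe();                         // "a function taking 2 argument(s)"
(function(x) { return x; }).describe(); // "a function taking 1 argument(s)"
```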

We don’t typically think of functions as being objects having methods and properties, but in JavaScript that’s exactly what they are. Now let’s look at how we can add a memoizer and memoized form property to a *Function.prototype*:

// Return a memoizing version of function
Function.prototype.memoize = function() {
  // only memoize functions of arity 1, otherwise return function
  if (this.length == 0 || this.length > 1) return this;
  var _this = this,
      _caller = this.caller;
  if (this.memoizer == null) {
    this.memoizer = {
      fn : function(x) {
        if (_this.memoizer.values[x] == null) {
          _this.memoizer.values[x] = _this.call(_caller, x);
        }
        return _this.memoizer.values[x];
      },
      values: []
    }
  }
  return this.memoizer.fn.wrap(this);
}

As before we only memoize functions whose arity is 1. *this* refers to the function object being memoized. We’re adding a new *memoizer* property to the function. The *memoizer* property is an object having 2 properties:

- *fn*: the memoized form of this function
- *values*: the value cache for the memoized form of this function

The key difference now is *memoize* is a property of the function instance. Meaning regardless of how many times we call *memoize* we’re retrieving the same memoized form for the function.
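This identity is easy to verify with a stripped-down version of the prototype-based *memoize* (omitting the *wrap* delegation for brevity):

```javascript
Function.prototype.memoize = function() {
  // only memoize functions of arity 1
  if (this.length != 1) return this;
  var _this = this;
  if (this.memoizer == null) {
    this.memoizer = {
      fn: function(x) {
        if (_this.memoizer.values[x] == null) {
          _this.memoizer.values[x] = _this.call(null, x);
        }
        return _this.memoizer.values[x];
      },
      values: []
    };
  }
  return this.memoizer.fn;
};

function square(n) { return n * n; }

// Every call to memoize yields the same function and the same cache.
square.memoize() === square.memoize(); // true
square.memoize()(4);                   // 16, now cached in square.memoizer.values
```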

Once we’ve gotten over the fact that functions are objects and may therefore have their own properties and methods, we need to be careful to delegate those properties in the memoized form of the function. That’s what *wrap* does:

// Wrap an object, pulling wrapped object properties up to wrapper
// so as to make them accessible
Object.prototype.wrap = function(o) {
  for (var p in o) {
    if (o.hasOwnProperty(p)) {
      this[p] = o[p];
    }
  }
  return this;
}

We won’t discuss the *wrap* function any further at this time. We’ll see the importance of *wrap* in future posts.

Let’s put this all together and generate a fully memoized form of *fibonacci*:

// retrieve nth fibonacci number from sequence (0-relative)
// memoized solution for fibonacci calculation is very efficient
function fibonacci(idx) {
  if (fibonacci.memoizer == null || !fibonacci.memoizer._fibonacci) {
    var innerFibonacci = function(idx) {
      if (idx == 0) return 0;
      if (idx == 1) return 1;
      return innerFibonacci(idx-2) + innerFibonacci(idx-1);
    }.memoize();
    fibonacci.memoizer = innerFibonacci.memoizer;
    fibonacci.memoizer._fibonacci = true;
  }
  return fibonacci.memoizer.fn(idx);
}

The *innerFibonacci* variable is key to the implementation. It is assigned the memoized form of an anonymous function implementing the fibonacci calculation. The *innerFibonacci* references within the anonymous function definition refer to this outer *innerFibonacci* variable being defined. Thus the recursive calls go through the memoized form and the function is fully memoized.

The second key to this implementation is aliasing the *innerFibonacci* memoizer with the *fibonacci* memoizer. It’s our *Function.prototype* version of *memoize* that allows us to do this: it created the *memoizer* property on the function that we can now refer to. When we retrieve the *fibonacci* memoizer we’ll actually be retrieving the *innerFibonacci* memoizer—which is exactly what we want.

Finally, notice how we nonchalantly call the *memoize* method on the anonymous fibonacci function we’ve created at runtime. Now you should begin to understand that in JavaScript functions are objects, and as such may have properties and methods, and that these properties and methods apply not only to named functions but to anonymous functions as well.

We’ve seen how to create fully memoized forms of recursive functions. In so doing we’ve created a single value cache for a function’s memoized form. Both are vast improvements to where we were at the end of the previous post when we first considered memoization.

We’re still not done with memoization however. We’ve not looked at any aspect of cache management. Currently our memoization cache will grow without bounds. We may need to limit both the size constraints and time constraints of the cache. We may only wish to cache so many values based off frequency of use. We may be dealing with a function that returns different values over time even when provided with the same inputs, meaning we may need to empty the cache when some time limit has been exceeded.
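As a taste of what cache management could look like, here’s a sketch of a generator whose cache holds at most *maxSize* entries, evicting the oldest entry when full (my own illustration, not the implementation developed in this series):

```javascript
function memoizeBounded(f, maxSize) {
  var values = {};
  var keys = []; // insertion order, oldest first
  return function(x) {
    if (!(x in values)) {
      if (keys.length >= maxSize) {
        delete values[keys.shift()]; // evict the oldest cached result
      }
      values[x] = f(x);
      keys.push(x);
    }
    return values[x];
  };
}

var msquare = memoizeBounded(function(n) { return n * n; }, 2);
msquare(2); // 4, cached
msquare(3); // 9, cached
msquare(4); // 16 -- the cache is full, so the entry for 2 is evicted
```

A real implementation might also evict by frequency of use or by age, but the shape is the same: the cache becomes a managed resource rather than an unbounded map.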

In future posts we’ll look at ways of enhancing our *memoize* implementation to provide these capabilities. See you then!

Functional JavaScript — Memoization, Part I

Functional JavaScript — Tail Call Optimization and Trampolines

Functional JavaScript — Currying

Functional JavaScript — Tail Call Optimization and Babel

Does this really surprise anyone?

Mark Twain famously quipped he never let schooling interfere with his education. Likewise I don’t allow development methodologies to interfere with my programming.

I tend to build software both top-down and bottom-up simultaneously. I code to the middle. I first start top-down so I can establish the landscape, determine the main actors, where the core functionality lies and establish a rough sense of the required underlying data model, software services and interfaces required to support this functionality. Then I switch to the bottom-up mode to work on the data model and those software services and interfaces to ensure it’s reasonable and will perform as expected. Sometimes changes are needed and so I need to rinse and repeat until I’m fairly satisfied I have a workable solution.

Now I begin the activity most non-developers think of as development. I begin fleshing everything out and connecting it all together. I work on the hard, read *risky*, items first. That accomplishes two things:

- Assurance my solution is technically correct and I don’t need to go back to the drawing board.
- Validation of project schedule. If I need to go back to the drawing board then the earlier I do so the better.

So now that I’m coding how do I do it? Mainly by using print statements. Now that I’m mainly doing Java development I use log4j debug statements, but it’s the same thing. Log4j simply gives me the convenience of turning off my print statements in production without having to modify my code.

Of course I test my software, and since regression testing is essential for ongoing maintenance I capture these tests as unit tests. I have a two-fold strategy for unit testing:

- Coarse-grained functionality testing
- Fine-grained testing of risky implementation items

I’ve come to find this strategy both minimizes the amount of test cases I need to write and also minimizes the risk of bugs slipping into production.

So how is this maintenance driven development?

When I find bugs using my coarse-grained functionality tests I capture each one using a newly-created fine-grained test. This way I can repeat the bug, isolate it, and work to resolve it. I proceed in this manner until all coarse-grained tests have passed.

Likewise when a bug is found in production the first step to resolving it is to create a new fine-grained test that repeats and isolates the bug. Then work to resolve it.

The bottom line is I don’t create a lot of fine-grained tests until I need them. When I need them it’s going to be for code that has been problematic at some point in the past, and now that I have a test for it I can ensure it won’t be problematic in the future.

I recently attended Neal Ford’s functional thinking presentation at COJUG. Neal aims to introduce functional programming concepts to traditional developers. Neal covered a lot of ground but for now I want to focus on one aspect of his presentation: *memoization*.

A key concept of functional programming is a function’s result is solely determined by its inputs. A function called with the same inputs produces the same result every time. The result is not determined by some transient state not captured by the inputs.

Since the same inputs are always mapped to the same result then we can cache the result in a key/value map and return the result from the cache, if present, instead of calling the function. In the case of a cache miss we would call the function with the inputs and insert the result into the cache. Such function result caching is called *memoization*. Memoization is a useful technique for functions that are called repeatedly with the same set of inputs but whose result is relatively expensive to produce.
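The idea in miniature (a throwaway illustration, not the generator developed below):

```javascript
var cache = {};

function double(n) {
  return n * 2; // stand-in for an expensive computation
}

function cachedDouble(n) {
  if (!(n in cache)) {
    cache[n] = double(n); // cache miss: compute and store
  }
  return cache[n];        // cache hit: return the stored result
}

cachedDouble(21); // 42, computed
cachedDouble(21); // 42, straight from the cache this time
```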

Functions are first-class objects in functional languages. This means functions can be assigned to variables, passed as inputs to other functions and so on. The ability to provide a function as an input to another function allows us to create *higher-order* functions. When we say ‘provide a function as an input’ we’re talking about the function itself, not the result of the function applied to some set of inputs. You simply can’t do such things in most non-functional languages.

We can do some pretty amazing things by combining the ability to provide a function implementation as the input to another function with the ability to generate functions at runtime as needed, via function generators. One of those things is creating a higher-order memoization function generator. We can create a memoized version of a function as needed at runtime and then use the memoized version wherever we would have used the non-memoized version. They’re interchangeable.

Creating a memoization function generator in JavaScript is simple:

// Return a memoizing version of a function f
function memoize(f) {
  if (f instanceof Function) {
    // only memoize functions of arity 1, otherwise return function
    if (f.length == 0 || f.length > 1) return f;

    var fn = function(x) {
      if (fn.memoizer.values[x] == null) {
        fn.memoizer.values[x] = f.call(f,x);
      }
      return fn.memoizer.values[x];
    };
    fn.memoizer = { values : [] };

    return fn;
  } else {
    return f; // garbage in, garbage out I always say!
  }
}

Lines 7-12 are the heart of the implementation. This is the function being generated and returned. This generated function carries a *memoizer* property containing the cached function results. When this generated function is called it first checks its cache to retrieve the value. In the case of a cache miss it simply calls through to the function *f* supplied to the function generator and caches the result. In both cases the cached result is returned.

The rest of the implementation is housekeeping. In line 3 we check to ensure *f* is a function and if not we return *f*, whatever it was, in line 17. Line 5 checks the number of inputs, or arity, of *f*. To simplify things we only memoize functions whose arity is 1 (unary functions). There’s an elegant solution for memoizing functions of higher arity that I’ll be demonstrating in a future post.

*f* is returned in the case where *f* isn’t a unary function. This is fine since *memoize(f)* and *f* are interchangeable functions producing indistinguishable results. The only downside is the caller may believe it’s using a memoized version of the function when it isn’t. The caller can determine whether the function it’s calling is a memoized function by checking for the existence of the function’s *memoizer* property.

To demonstrate the powers of memoization we need a function to memoize. This example utilizes a fully-recursive fibonacci sequence generator. The parameter is the 0-based index of the fibonacci sequence whose corresponding value is to be returned. This is an extremely inefficient implementation, thus illustrating the potential computational time savings of memoization.

// retrieve nth fibonacci number from sequence (0-relative)
// recursive solution for fibonacci calculation is not efficient
function fibonacci(idx) {
  if (idx == 0) return 0;
  if (idx == 1) return 1;
  return fibonacci(idx-2) + fibonacci(idx-1);
}

Creating a memoized version of this fibonacci function is simple:

var fn = memoize(fibonacci);

The function *fn* is the memoized version of the *fibonacci* function. *fn* may be invoked in lieu of the *fibonacci* function like so:

fn(35);

yielding the result 9227465 in 273ms of execution time. Once we’ve retrieved the value for 35 once, subsequent invocations for 35 yield the result in 1ms, according to the Google Chrome profiler, which has a minimum resolution of 1ms—so the actual time may be less!

Memoization illustrates the classic execution time *vs.* execution space trade-off. More memory is consumed by caching the results but there’s the potential to save a lot of execution time. If you need to clear the function result cache to free up some memory then simply do the following:

fn.memoizer.values = [];

and the cached values will be released.

The recursive definition of *fibonacci* may have caused you to wonder whether the intermediate results are cached. They are not. So when computing

fn(35);

we’ve also computed:

fibonacci(34), fibonacci(33), …, fibonacci(2)

though none of these interim results are cached. In future posts we’ll consider ways to cache these interim results.

In future posts I’m going to further expand upon the ideas presented here addressing both the memoization of the interim results of recursive functions and memoizing non-unary functions. We’ll even be looking at ways to make the memoization generator function a property of all functions. See you then!

Functional JavaScript — Memoization, Part II

Functional JavaScript — Tail Call Optimization and Trampolines

Functional JavaScript — Currying

Functional JavaScript — Tail Call Optimization and Babel

We’re accustomed to applying the DRY principle to our designs and implementations, but what about our careers?

To paraphrase an infamous interview question what do you see yourself doing in 5, 10, 15, or 20 years? I know you still want to be programming at some level. That’s why you’re reading this blog: you’re passionate about programming and creating software. You love it. You seek to continually improve your craft. You’re going to continue programming until you retire. Even then you won’t stop and will continue programming until you expire!

Many of us are in this passionate programmer category. But we’ve also dodged the question. Given our nature and passion for programming it’s easy to say we’re going to be programming 20 years from now. It’s much more difficult to say *what* we’re going to be programming.

I’m not talking languages and tools – the things you’ll be programming *with*. I’m not talking about computers, clouds, processors and memory – the things you’ll be programming *on* or *for*. We can be sure these things will change, in some ways we can’t imagine, over the course of the next 20 years.

I’m talking about what *kinds* of systems will you be creating? The same kinds you’re creating today? Really? Twenty years of creating the same kind of software? That sounds pretty boring. It’s also potentially career limiting. Not to mention how do you really improve your craft when you’re recreating the same thing over and over again?

Working in industry verticals or niche markets is OK. But you don’t want to find yourself creating the umpteenth accounting system 20 years from now and trying to console yourself that it’s somehow different because the implementation language is different. Or the IDE you’re using is different. Or the type of machine it runs on is different. It’s not different. It’s the umpteenth accounting system you’ve created.

If you really want to spend an entire career creating software then I have one simple piece of advice: *Don’t Repeat Yourself*. Don’t create the same system twice.