In Chapter 3 of _Lisp In Small Pieces_, you are encouraged to write a Lisp interpreter using continuations. Here's what I learned as I implemented the chapter 3 examples in Javascript:

A continuation is a representation of the incomplete state of the program. For any evaluation being done, the continuation at that moment holds everything else that must be accomplished in order for the program to finish.

It sounds odd, but it's actually an easily understandable feature of any development environment with first-class functions and working closures-- both of which Javascript has. Imagine the instruction "print 4 * 2". At the very top, that breaks into "Evaluate four times two, then print the result." The part after 'then' is the final continuation.
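
Here's a minimal sketch of that idea, assuming nothing from the book's code: if we model the continuation as a plain Javascript callback, "evaluate 4 * 2, then print the result" splits into the evaluation and a callback that says what to do afterward.

```javascript
// A value is never "returned"; it is handed to the continuation.
function multiply(a, b, kont) {
  kont(a * b);
}

// The "then print the result" part is the final continuation.
multiply(4, 2, result => console.log(result)); // prints 8
```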

So it breaks down to:

  * Wrap the "print something" as a FinalContinuation. Pass that continuation to the _evaluate_ function with a pointer to the rest of the AST (Abstract Syntax Tree) to evaluate.
  * Evaluate what the '*' means. This retrieves a native function that expects two numbers and returns a product. Wrap the FinalContinuation with a new EvFunc continuation.
  * Evaluate what '4' means. Continue on to...
  * Evaluate what '2' means. Continue on to...
  * Apply the function. This continuation has been built by the previous three. Inside the source code, you'll see EvFunc, which calls EvArgs and passes in Apply, a continuation to run after the arguments have been evaluated. Each instance of "EvArgs" creates a "GatherArgs", which appends the discovered arg to a list and then passes the 'Apply' continuation forward. When EvArgs discovers no more arguments, it finally calls the 'Apply' continuation, which performs the primitive operation '4 * 2'; then Apply calls _its_ continuation, which is the FinalContinuation, which prints the value passed by Apply. (A code sketch of this chain follows the list.)
  * As that was the FinalContinuation, the program terminates.
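
Here's a compressed Javascript sketch of that chain. The class names mirror the ones above, but this is my own approximation of the machinery, not a verbatim excerpt from the book or my repository:

```javascript
// One object per "what happens next"; each wraps the continuation it will
// eventually hand its result to.
class Continuation {
  constructor(k) { this.k = k; }
}

class FinalContinuation extends Continuation {
  resume(value) { console.log(value); }                     // "print something": the program's last act
}

class ApplyCont extends Continuation {
  constructor(k, fn) { super(k); this.fn = fn; }
  resume(args) { this.k.resume(this.fn(...args)); }         // run the primitive, pass the product on
}

class GatherCont extends Continuation {
  constructor(k, value) { super(k); this.value = value; }
  resume(rest) { this.k.resume([this.value, ...rest]); }    // put this arg back on the front of the list
}

// Evaluate one node of the AST, then hand the value to k.
function evaluate(expr, env, k) {
  if (typeof expr === 'number') return k.resume(expr);      // '4' and '2' resolve immediately
  if (typeof expr === 'string') return k.resume(env[expr]); // '*' resolves to a native function
  const [fnExpr, ...argExprs] = expr;
  // EvFunc's role: once the function is known, evaluate the args with Apply waiting at the end.
  return evaluate(fnExpr, env, {
    resume: fn => evalArgs(argExprs, env, new ApplyCont(k, fn)),
  });
}

// EvArgs's role: evaluate each argument, wrapping the chain in a GatherCont per value.
function evalArgs(exprs, env, k) {
  if (exprs.length === 0) return k.resume([]);              // no more arguments: fire Apply
  return evaluate(exprs[0], env, {
    resume: value => evalArgs(exprs.slice(1), env, new GatherCont(k, value)),
  });
}

const env = { '*': (a, b) => a * b };
evaluate(['*', 4, 2], env, new FinalContinuation(null));    // prints 8
```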

Rather than stepping through the program line by line, we step through continuation by continuation. Each line of a program becomes its own continuation, encapsulating the current state of the system as well as all possible future states-- at least abstractly. For a while, this was really hard for me to understand, but now that I've done the exercise, I see that the notion of "the rest of the program" doesn't mean the rest of the syntax tree all laid out and ready to trigger. It instead means "what we've done before" (the current continuation), "what we want to achieve and what we haven't processed yet that needs doing to achieve it" (the continuation we're about to build), and "how we bridge the two after processing this node of the AST" (the current evaluation).

Each continuation becomes a handler of its own momentary responsibility in the interpretation process and a wrapper around all the future responsibilities that have thus far been discovered during interpretation.

That seemed like a lot of work, and I wasn't entirely sure of the benefits of it. On the other hand, continuations do allow for some features of programming that are much harder without them: blocks of code with early return, catch/throw, and finally.

Blocks with early return were easy. When you enter a block, the interpreter creates a new continuation: what to do after the block. Then it creates another: what to do inside the block. It then resumes the inside one. If the block ends, or if return is used, the result is delivered as the value to be processed by the after-the-block continuation. Blocks with labels look up the stack where the prior continuations were stored, and when a match is found, the program resumes using that continuation.
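
A sketch of that mechanism, again in hedged form-- these are my own simplified names, not the book's exact code:

```javascript
// Entering a block stores "what to do after the block" under its label;
// return-from finds that continuation and resumes it, skipping whatever
// was left inside the block.
function evalBlock(label, bodyThunk, env, afterBlock) {
  const blockEnv = { ...env, [label]: afterBlock };  // label -> the after-the-block continuation
  return bodyThunk(blockEnv, afterBlock);            // a normal exit also lands on afterBlock
}

function evalReturnFrom(label, value, env) {
  const k = env[label];                              // look up where the continuation was stored
  if (!k) throw new Error('no block named ' + label);
  return k.resume(value);                            // resume the program after that block
}

// demo: (block found (return-from found 42) ...) -- the "..." never runs
const afterBlock = { resume: v => console.log('block returned', v) };
evalBlock('found', env => evalReturnFrom('found', 42, env), {}, afterBlock); // prints "block returned 42"
```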

This is important: the stack's purpose, which up to this point was strictly to maintain the identity and value of variables in a given scope, has a new responsibility: maintaining the integrity of the interpretation process itself. There is a new thing on the stack that is not a Symbol/Value pair! Or rather, it is a Symbol/Value pair, but not one you can dereference and manipulate on your own. Eventually, we learn that the classic call/cc (call with current continuation) does allow you to access it and run it whenever you want.
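
The shape of call/cc falls out of this almost for free. Here's a hedged sketch with continuations as plain callbacks rather than objects, and names of my own choosing:

```javascript
// When the interpreter is already passing the current continuation around,
// call/cc just hands it to your function as a value you can keep and invoke.
function callCC(fn, k) {
  const reified = value => k(value); // invoking it jumps straight back to k
  return fn(reified, k);
}

// demo: escape with one value now, then re-run the same continuation later
let saved;
callCC((escape, k) => { saved = escape; return escape(1); },
       value => console.log('got', value)); // prints "got 1"
saved(2);                                   // prints "got 2" -- the saved continuation runs again
```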

Catch/throw is more complicated. Throw statements can appear anywhere in the source code and abstract syntax tree (AST), and aren't necessarily contained within a catch block. When one triggers, a corresponding catch continuation has to be looked up and resumed. A catchLookup walks up the collection of continuations (not the environment, note) until it finds a match or the program dies with an uncaught throw.
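
Roughly, in the same hedged Javascript style as before (my own names, not the book's):

```javascript
// A throw walks the chain of wrapped continuations -- not the environment --
// looking for a catch with a matching tag.
class CatchCont {
  constructor(k, tag) { this.k = k; this.tag = tag; }
  resume(value) { this.k.resume(value); }
}

function catchLookup(kont, tag, value) {
  for (let k = kont; k; k = k.k) {                      // climb continuation by continuation
    if (k instanceof CatchCont && k.tag === tag) {
      return k.resume(value);                           // found the catch: resume there
    }
  }
  throw new Error('uncaught throw: ' + String(tag));    // ran off the end of the chain
}

// demo: a throw two levels deep finds the catch above it
const bottom = { k: null, resume: v => console.log('caught', v) };
const catcher = new CatchCont(bottom, 'oops');
const somewhereDeeper = { k: catcher, resume: v => v };
catchLookup(somewhereDeeper, 'oops', 42);               // prints "caught 42"
```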

A protect is even more important. In Javascript and Python, protect is usually called 'finally', and it means the collection of statements that must be executed within the current continuation-managing context, no matter how that context ends. So when the body of the protected statement ends in any way, we walk up the collection of current continuations (remember, they're wrapping each other up to the BottomCont, creating a sort of stack-that's-not-a-stack) until we find the continuation in which the protect was created, then continue with evaluating the final body, then continue on with execution. Because any continuation can be wrapped this way, the base continuation has to be modified, and both the 'block' and 'catch' handlers also have to be modified to scan for protected evaluations.
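
A sketch of the unwinding part, again with my own simplified names rather than the book's:

```javascript
// A ProtectCont holds the cleanup ("finally") body. Whether we leave normally
// or jump past it via a block return or a throw, the cleanup runs before
// control moves on.
class ProtectCont {
  constructor(k, cleanup) { this.k = k; this.cleanup = cleanup; }
  resume(value) { this.cleanup(); this.k.resume(value); } // normal exit: cleanup, then continue
}

// Unwind toward a target continuation, running every cleanup found on the way.
function unwind(kont, value, target) {
  for (let k = kont; k && k !== target; k = k.k) {
    if (k instanceof ProtectCont) k.cleanup();
  }
  return target.resume(value);
}

// demo: jumping from inside the protected body out to "after the block"
const afterCont = { k: null, resume: v => console.log('after the block:', v) };
const protect = new ProtectCont(afterCont, () => console.log('finally runs'));
const insideBody = { k: protect, resume: v => v };
unwind(insideBody, 7, afterCont); // prints "finally runs", then "after the block: 7"
```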

This is fun. And kinda deep. It goes into way more detail than the usual "let's write an interpreter!" tutorial you find on the Internet, even if one of my side-excursions was to write the λisperator interpreter.

Even this code is twonky. Symbols are not handled well. There's an arbitrary distinction between initial symbols and created symbols. It's not as bad as the λisperator interpreter where the continuation designations were kinda arbitrary.

The oddest thing about this code, and it's endemic to all of these "write an interpreter in Scheme" textbooks, is that it rides hard on top of the Scheme garbage collector. When the environment stack pops down, or when a continuation is executed and unwraps, the wrapper is simply discarded, and the assumption is that garbage collection just works at the interpreter's interpreter level. You start to notice it a lot as you go through the exercise.