Hang onto your hats! This final part is where the ride gets wild.

#### Topics covered in the series

Alas, despite having trailed generative programming, I find I don't have any notes from the seminar series covering that topic explicitly (it was not one I was presenting). If I find something, I'll add it in as an extra post.

#### Continuations

This style of programming originally arose from the lazy-evaluation treatment of data streams, and uses closures to represent each step of the program. I/O is made available as a pair of lazy lists, a request list and a response list, and the program is a function taking a response list to a request list:

type Request = Getq | Putq of char
type Response = Getp of char | Putp
type Dialog = Response list -> Request list

For example:

let readChar f p =
    Getq ::
    (match p with
     | (Getp c) :: p1 -> f c p1
     | _ -> []);;

The hidden causality here is that putting the `Getq` at the head of the request queue causes the `Getp c` to exist at the head of the response queue (perhaps by reading the keyboard).
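As a sketch of how this works in practice (this example is mine, not from the original notes), we can build a small dialogue that reads one character and writes it back, then drive it with a simulated response list:

```fsharp
type Request = Getq | Putq of char
type Response = Getp of char | Putp
type Dialog = Response list -> Request list

let readChar f p =
    Getq ::
    (match p with
     | (Getp c) :: p1 -> f c p1
     | _ -> [])

// A Dialog that reads one character and immediately writes it back
let echoOnce : Dialog =
    readChar (fun c p ->
        Putq c ::
        (match p with
         | Putp :: _ -> []    // our Putq was answered; stop
         | _ -> []))
```

Applying `echoOnce` to the simulated response list `[Getp 'a'; Putp]` yields the request list `[Getq; Putq 'a']`, showing the two lists marching in step.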

In general, the continuation style uses chained closures to construct a complex compound operation from a collection of small basic operations. We introduce the types

type continuation = K of (unit -> unit)
let execute (K f) = f()
let stop = K (fun () -> ());;

Our building blocks will mostly be producers and consumers, and others that are neither:

Producer: (X -> continuation) -> continuation
Consumer: X -> continuation -> continuation // strictly a continuation generator, as it produces a continuation from the output of the producer.
Operation: continuation -> continuation

- Where possible, the continuations are built during the construction phase.
- Each step is provided with the rest of the execution as one or more continuation building blocks.
- A producer is provided with a consumer which constructs a continuation from the produced data at execution time.
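To see these roles without any real I/O, here is a minimal sketch (the names `produce42` and `record` are invented for illustration) in which a producer hands a value to a consumer, which records it and then runs the rest of the program:

```fsharp
type continuation = K of (unit -> unit)
let execute (K f) = f ()
let stop = K (fun () -> ())

// Producer: supplies the value 42 to its consumer at execution time
let produce42 (g: int -> continuation) = K (fun () -> execute (g 42))

// Consumer: records the value it is given, then runs the rest of the program
let results = System.Collections.Generic.List<int>()
let record (x: int) (k: continuation) = K (fun () -> results.Add x; execute k)

// Construction phase: the whole program is assembled as one continuation
let program = produce42 (fun x -> record x stop)
// Execution phase: nothing happens until we execute it
execute program
```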

#### Example -- Console I/O

First we define some primitives:

let putc (c: char) (k: continuation) =
    let write () =
        System.Console.Write(c)
        execute k
    K write
let getc (g: char -> continuation) =
    let read () = execute (g <| System.Console.ReadKey(true).KeyChar)
    K read;;

where we introduce the standard convenience function

let (<|) f x = (f x);;

to avoid extra parentheses.

We can then define a compound operation, for example:

let rec echo k =
    let echoChar c =
        if c = '\r'
        then putc '\n' k
        else putc c (echo k)
    getc echoChar;;

#### Example -- The Lexer revisited

The type constructor `lexerResult` describes the values that the lexer is going to produce. Note that the lexer is going to be a producer continuation.

// Define a result type of a lexeme list or error message:
type lexerResult<'lexeme> =
    | Success of 'lexeme list
    | Failure of string;;

The type constructor `lexerSystem` provides us with an I/O abstraction.

// Define a type with a character source, a 'lexeme sink and an error string sink:
type lexerSystem<'lexeme> = {
    input: unit -> char option;
    output: 'lexeme -> unit;
    error: string -> unit;
    result: unit -> lexerResult<'lexeme> };;

Given that the lexer is going to be a producer, we will need a consumer to complete its construction.

// Read the output result and continue with (g result)
let lexComplete s g =
    let complete () =
        g (s.result ())
    K complete;;

To allow for lexical analysis failures, we also have a building block for that.

// Set the output to the failure state and continue with k
let lexFail s e k =
    let storeError () =
        s.error e
        execute k
    K storeError;;

Note that although it looks like a consumer, in practice we always know the message to use at construction time.

Function `lexGetChar` builds our most primitive producer continuation. The consumer (`g`) constructs a continuation from the character option read from the input.

// s is our lexerSystem
let lexGetChar s g =
    K (fun () -> execute (g (s.input())));;

Similarly, `lexPutLexeme` builds our most primitive consumer, which writes a lexeme to the output.

let lexPutLexeme s lexeme (k: continuation) =
    let storeLexeme () =
        s.output lexeme
        execute k
    K storeLexeme;;
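To exercise these building blocks without a console, we can hypothesise an in-memory `lexerSystem` backed by a string (the `makeSystem` helper below is my own invention, not part of the original notes):

```fsharp
type continuation = K of (unit -> unit)
let execute (K f) = f ()

type lexerResult<'lexeme> =
    | Success of 'lexeme list
    | Failure of string

type lexerSystem<'lexeme> = {
    input: unit -> char option;
    output: 'lexeme -> unit;
    error: string -> unit;
    result: unit -> lexerResult<'lexeme> }

// Hypothetical in-memory system: reads characters from a string,
// collects lexemes in a list
let makeSystem<'lexeme> (text: string) : lexerSystem<'lexeme> =
    let pos = ref 0
    let lexemes = System.Collections.Generic.List<'lexeme>()
    let err = ref None
    { input = (fun () ->
        if pos.Value < text.Length
        then (let c = text.[pos.Value] in pos.Value <- pos.Value + 1; Some c)
        else None);
      output = (fun l -> lexemes.Add l);
      error = (fun e -> err.Value <- Some e);
      result = (fun () ->
        match err.Value with
        | Some e -> Failure e
        | None -> Success (List.ofSeq lexemes)) }

let lexGetChar s g = K (fun () -> execute (g (s.input ())))
let lexPutLexeme s lexeme (k: continuation) =
    K (fun () -> s.output lexeme; execute k)

// Read one character and store it as a lexeme
let s = makeSystem<char> "ab"
execute (lexGetChar s (function
    | Some c -> lexPutLexeme s c (K (fun () -> ()))
    | None -> K (fun () -> ())))
// s.result () is now Success ['a']
```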

In order to make decisions based on our inputs we need to express a conditional branch.

// (^^) associates on the right, which means that we can chain instances without requiring nested brackets
let ( ^^ ) (test, g1) (g2: 'x -> continuation) x =
    (if test x then g1 else g2) x;;

and similarly we want to be able to express repetition.

let rec ( ^* ) (generator: ('x option -> continuation) -> continuation, test)
               (consumer: 'x list -> 'x option -> continuation) =
    generator (function
        | Some x ->
            if test x
            then (generator, test) ^* (fun l -> consumer (x :: l))
            else consumer [] (Some x)
        | None -> consumer [] None);;
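We can check `(^*)` with a small harness (my own test rig, not from the notes): a generator draws characters from a mutable list, and the consumer simply records the list it is finally handed:

```fsharp
type continuation = K of (unit -> unit)
let execute (K f) = f ()

let rec ( ^* ) (generator: ('x option -> continuation) -> continuation, test)
               (consumer: 'x list -> 'x option -> continuation) =
    generator (function
        | Some x ->
            if test x
            then (generator, test) ^* (fun l -> consumer (x :: l))
            else consumer [] (Some x)
        | None -> consumer [] None)

// Generator drawing from a mutable character list
let input = ref ['a'; 'b'; '1']
let gen (g: char option -> continuation) =
    K (fun () ->
        match input.Value with
        | c :: rest -> input.Value <- rest; execute (g (Some c))
        | [] -> execute (g None))

// Consumer that records what it is finally handed
let seen : (char list * char option) ref = ref ([], None)
let consume l opt = K (fun () -> seen.Value <- (l, opt))

let letter c = c >= 'a' && c <= 'z'
execute ((gen, letter) ^* consume)
// seen.Value is (['a'; 'b'], Some '1'): the run of characters passing the
// test, plus the first character that failed it
```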

Thus equipped we can start to write the lexer.

type LispLex = Bra | Ket | Dot | Str of string | Num of int | Symbol of string
let alpha c = ((c >= 'a') && (c <= 'z')) || ((c >= 'A') && (c <= 'Z'))
let digit c = (c >= '0') && (c <= '9')
let eqChar c c' = c = c'
let l2s l = System.String(List.toArray l)
let ( ^| ) p q = fun x -> (p x) || (q x)
// Read a symbol from the input
let readSymbol s char g =
    (lexGetChar s, alpha ^| digit ^| eqChar '_') ^*
    (fun l opt ->
        lexPutLexeme s (Symbol (l2s (char :: l))) (g opt));;

**Exercise**: Complete the lexer.

#### Summary

We have used closures as continuations to construct functions which compose, using function application, to create more complex functions. The end result is a complex closure which is itself just a function: applied to the unit value “()”, it is the program we want, complete with any side effects of the constituent closures. The main difficulty that we, as programmers, are likely to feel with this approach is that the process of construction is counter-intuitive: the program is built back to front, because each step of the closure takes an argument representing the rest of the program, thus

type continuation = K of (unit -> unit)
step1: continuation -> continuation
end: continuation
program = step1 end : continuation

As Eric Lippert put it (in the context of writing asynchronous code in this style in C#):

Remember what I was saying about the pros and cons of CPS?

- PRO: Arbitrarily complex and interesting control flows can be built out of simple parts – check.
- CON: The reification of control flow via continuations is hard to read and hard to reason about – check.
- CON: The code that represents the mechanisms of control flow completely overwhelms the meaning of the code – check.
- CON: The transformation of ordinary code control flow into CPS is the kind of thing that compilers are good at, and almost no one else is – check.

It is important to note, however, that this is not the only use of continuations. We regularly use continuations as event handlers, and they can also be used to search a set of alternatives, by effectively constructing a set of programs to be tried one after another until one does not fail and provides a result, or all of them fail.

Used appropriately, this is a powerful technique.

#### Monads

Despite the scary name, monads are constructs that arise quite naturally in functional programming; typically as a result of trying to express the OO notion of a separation of concerns, here between individual functions doing localized transformations, and the larger control flow of the program.

A monad is a type whose values encapsulate a piece of a program in much the same way as a continuation does. There are, however, differences in the way the values work and the way they compose, which fit the functional programming style more naturally. We will, for the moment, write a monad as a type `M t`, where `t` is the type of value that a monadic value produces when “run”, except when the run fails (in which case no value is produced). As with continuation programming, a program is constructed by the composition of monads and functions which produce monads. The basic constructions used are:

Return: t -> M t
Fail: M t
Bind: (t1 -> M t2) -> M t1 -> M t2

#### Understanding “bind”

The “bind” combinator is often written as the infix operator `>>=`. This begins to make sense if we use a program layout thus:

m >>= fun value1 ->
E1[value1] >>= fun value2 ->
E2[value2]

We read this as binding value1 to the output(s) of m, using this to produce the monad E1[value1], and then binding value2 to the outputs of that, constructing the monad E2[value2], and so on. A language with monads built in can even provide syntactic sugar for this, such as:

value1 := m;
value2 := E1[value1];
E2[value2]

Unfortunately the F# type system is not expressive enough for us to write a general `>>=` along the lines of

let __binder__<'T, 'U, 'M when 'M : (static member bind : ('T -> 'U 'M) -> 'T 'M -> 'U 'M)> (x: 'T 'M) (f: 'T -> 'U 'M) =
    x |> 'M.bind f;;
let (>>=) = __binder__;;

so for the moment we will write `>>=` but mean the comparable operator specialised to the monad of interest.

#### Derived combinator cases

A monad type `M unit` is special (think of it as pure side-effect). With such a type there is little point in binding the result values – we know that we will only get `()`:

let (>>) m1 m2 = m1 >>= fun () -> m2;;

A monad which produces a function or functions is also interesting:

let Apply m1 m2 =
    m1 >>= fun f ->
    m2 >>= fun x ->
    Return (f x);;

Given a function we can construct a function over monads:

let Lift f m =
    m >>= fun x ->
    Return (f x);;

**Exercise**: Prove that `Lift f = Apply (Return f)`.

Given two functions that return monads, we can combine them:

let (>=>) f g =
    fun x -> (f x) >>= g;;

A monad which produces a monad or monads can be flattened (lose a level of monad):

let Join mm =
    mm >>= fun x -> x;;

The name `Join` makes sense when the monad involved produces multiple values – the result monad produces all of the values produced by all of the monads produced by `mm`.

Given a list of monads we can construct a monad of lists:

let rec Sequence l =
    match l with
    | [] -> Return []
    | h :: t -> h >>= fun h1 -> (Sequence t) >>= fun t1 -> Return (h1 :: t1);;
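Since `Sequence` is written against an unspecified monad's `Return` and `>>=`, here it is specialised (as a sketch) to the option monad, where it succeeds only if every element succeeds:

```fsharp
let Return x = Some x
let (>>=) m f = Option.bind f m

let rec Sequence l =
    match l with
    | [] -> Return []
    | h :: t -> h >>= fun h1 -> (Sequence t) >>= fun t1 -> Return (h1 :: t1)

let all = Sequence [Some 1; Some 2; Some 3]    // Some [1; 2; 3]
let missing = Sequence [Some 1; None; Some 3]  // None
```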

#### An example of a monad

We introduced the notion of a monad with the description that it handles things like separation of concerns. One very familiar language construct that we have been using from the very first session can be written in monadic form; an `'a list` is just another `monad 'a`:

let ReturnList x = [x]
let FailList = []
let (>>=*) m f = List.concat (List.map f m);;
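For example, binding over a list feeds every element through the function and concatenates the resulting lists:

```fsharp
let ReturnList x = [x]
let (>>=*) m f = List.concat (List.map f m)

// Each input value produces two outputs; bind flattens them into one list
let r = [1; 2] >>=* fun x -> [x; x * 10]
// r = [1; 10; 2; 20]
```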

Another pure monad construction is Option.

let ReturnOption x = Some x
let FailOption = None
let (>>=?) m f = Option.bind f m;;

which allows us to chain together a series of operations that might fail, and bypass the calls made after any failure.
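As a sketch of such a chain (the `tryParse` and `half` helpers are invented for illustration), each step can fail, and a failure at any point bypasses the rest:

```fsharp
let (>>=?) m f = Option.bind f m

// Both of these hypothetical steps can fail independently
let tryParse (s: string) =
    match System.Int32.TryParse s with
    | true, n -> Some n
    | _ -> None
let half n = if n % 2 = 0 then Some (n / 2) else None

let ok = tryParse "42" >>=? half          // Some 21
let bad = tryParse "forty-two" >>=? half  // None: half is never called
```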

**Exercise**: Construct a pure monad based on trees.

#### An I/O Monad

These pure monads do not really have a notion of execution. We can use closures again to construct a monad which itself carries out some side-effecting computation and returns a value. As we did for continuations, we will wrap the closures up using a data type (as before this is to highlight the construction and can be elided):

type 'a io = IO of (unit -> 'a)
exception IOFailure
let RunIO (IO f) = f();;

The principal functions for this are:

let ReturnIO x = IO (fun () -> x)
let FailIO = IO (fun () -> raise IOFailure)
let (>>=) m f = IO (fun () -> RunIO(f(RunIO m)));;

The construction for bind (`>>=`) deliberately constructs a wrapped closure. At first sight it looks the same as `f(RunIO m)`, which has the same type, but we need to make sure that the sub-term `RunIO m` is evaluated when the result of the bind is run rather than when it is constructed. As all of the other combinators are defined in terms of these three, we won’t need to worry about evaluation order for them. The issue will arise, however, for most special primitives, as we will want side effects to occur at the right time.
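We can make the evaluation-order point concrete with an observable side effect (writing to a `StringBuilder` rather than the console; the `put` primitive is invented for this sketch): constructing the compound monad writes nothing, and the effects appear only when it is run:

```fsharp
type 'a io = IO of (unit -> 'a)
let RunIO (IO f) = f ()
let ReturnIO x = IO (fun () -> x)
let (>>=) m f = IO (fun () -> RunIO (f (RunIO m)))

// Hypothetical primitive writing to a buffer instead of the console
let buffer = System.Text.StringBuilder()
let put (s: string) = IO (fun () -> buffer.Append s |> ignore)

let program = put "a" >>= fun () -> put "b"
let before = buffer.ToString ()  // "": construction has no side effects
RunIO program
let after = buffer.ToString ()   // "ab": effects occur only when run
```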

To read and write characters we can use:

let GetC = IO (fun () -> System.Console.ReadKey(true).KeyChar)
let PutC (c:char) = IO (fun () -> System.Console.Write c) ;;

We can, of course, string a lot of these together using the derived combinators `>>=` and `>>`. The result does nothing until run. For example:

let EchoC = GetC >>= PutC;;

We can also define read and write for strings:

let GetS length =
    let rec get chars count =
        if count > 0
        then GetC >>= fun c -> get (chars @ [c]) (count - 1)
        else ReturnIO (System.String(List.toArray chars))
    get [] length;;

**Exercise**: write `PutS`.

#### Computation expressions

F# does provide a form of syntactic sugar that allows us to write monadic code, and continuation passing, in a form that is actually comprehensible to the programmer.

Consider the Option monad. If we introduce a singleton type

type OptionBuilder() =
    member self.Bind(input, operation) = Option.bind operation input
    member self.Delay (operation: (unit -> 'a option)) = operation ()
    member self.Return input = Some input
    member self.ReturnFrom (input: 'a option) = input
let option = OptionBuilder();;

where `Delay` has the effect (as in the `>>=` operation for our I/O monad) of delaying the execution of the operation until the appropriate point in the sequence, we can write the monadic code using chains of `>>=?` combinators as a sequence of assignment-like `let!` expressions.

let result = option {
    let! y = f x
    let! z = g y
    return! h z };;

where `f`, `g`, `h` are functions taking plain values to `option` values.
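Putting this together with concrete (invented) `f`, `g` and `h`:

```fsharp
type OptionBuilder() =
    member self.Bind(input, operation) = Option.bind operation input
    member self.Delay (operation: (unit -> 'a option)) = operation ()
    member self.Return input = Some input
    member self.ReturnFrom (input: 'a option) = input
let option = OptionBuilder()

// Hypothetical functions from plain values to option values
let f x = if x > 0 then Some (x + 1) else None
let g y = if y < 10 then Some (y * 2) else None
let h z = Some (string z)

let result = option {
    let! y = f 3
    let! z = g y
    return! h z }
// result = Some "8"; had any step returned None, result would be None
```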

The actual compiled form of the `let!` binding

builder { let! pattern = expr
          compound_expr }

is in the continuation form

builder.Bind(expr, (fun pattern -> compound_expr))

This not only allows us to write monadic code in something close to the syntactic-sugar form alluded to above, but also to express continuation-based code (e.g. asynchronous operations) naturally, in a sequential style (indeed the `async` computation builder is a standard part of the library for exactly this purpose).

We can also express the equivalence of the two notations by building a monadic form from a computation expression, such as the `async` computation builder:

let ReturnAsync x = async { return x }
let (>>=!) m f = async { return Async.RunSynchronously (f (Async.RunSynchronously m)) }
let (>>!) m1 m2 = m1 >>=! fun () -> m2
let (>=>!) f g = fun x -> ((f x) >>=! g)
let ApplyAsync m1 m2 =
    m1 >>=! fun f ->
    m2 >>=! fun x ->
    ReturnAsync (f x)
let LiftAsync f = ApplyAsync (ReturnAsync f)
let SequenceAsync (l: _ list) =
    Async.Parallel l >>=! fun a ->
    ReturnAsync (Array.toList a);;
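A quick (synchronous) check of the `>>=!` operator defined above:

```fsharp
let ReturnAsync x = async { return x }
let (>>=!) m f = async { return Async.RunSynchronously (f (Async.RunSynchronously m)) }

// Chain two asynchronous computations; run the result synchronously
let doubled = ReturnAsync 21 >>=! fun x -> ReturnAsync (x * 2)
let answer = Async.RunSynchronously doubled
// answer = 42
```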

#### Concluding Remarks

Now, like Wittgenstein's ladder, once you have grokked the content of this course you should be able to discard the often inefficient implementations used for simplicity (such as avoiding cluttering the example code with reversing cons'd lists), and be confident in using a functional style in which complex programs are built from simple pure elements composed together.

As to the F# language itself, we have barely skimmed the surface -- material like active patterns, quotations (the missing last lecture), the standard collections modules, reactive events and more still remains untouched. But in my experience, once the initial hurdle of the programming style has been crossed, picking these up when you realise you need them is the easy part; before the "Aha!" moment, the motivation and need for such features will remain mostly obscure, hidden behind the dilemma of "how do you program when your variables don't?"

**Exercise**: Test your understanding by reworking the course material for another language of your choice. Note that for some functional languages, parts will not be applicable as they stand (e.g. neither assignment nor the shared-state queue will be expressible in Erlang); and that for many languages pattern based dispatch and infix notation will not be available.