I can't remember where I found it, but there was a brilliant explanation of how functional code maps values. Remember, in a functional program, the basic notation is `x → y`, that is, every function maps a value `x` to another value `y`. Things like `map()` map an array to another array, while `reduce()` maps a single thing (an array) to another single thing (a value). How does functional programming encode other things?
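To make the `map()`/`reduce()` distinction concrete, here's a minimal sketch in TypeScript (the variable names are mine):

```typescript
// map(): array → array, each element transformed, same length
const lengths: number[] = ["foo", "quux", "ba"].map((s) => s.length);
// lengths is [3, 4, 2]

// reduce(): array → single value, here a sum
const total: number = [3, 4, 2].reduce((acc, n) => acc + n, 0);
// total is 9
```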

Well, there's:

- `x → y` — x is mapped to y
- `x → y ∪ E` — x is mapped to y or an Error (Maybe)
- `x → P(y)` — x is mapped to all possible values of y (Random Number Generators)
- `x → (S → y ⨯ S)` — x is mapped to a function that takes a state and returns a value and a new state (State)
- `x → Σy` — x is mapped to the set of all real-world consequences (IO)

The other day I realized that there's one missing from this list:

- `x → ♢y` — x is mapped to y *eventually* (Promises)
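A sketch of how a few of these mappings look as TypeScript signatures (the type and function names are my own, not standard library):

```typescript
// x → y ∪ E: the result is either a value or an Error (Maybe/Either)
function parseIntMaybe(s: string): number | Error {
  const n = Number.parseInt(s, 10);
  return Number.isNaN(n) ? new Error(`not a number: ${s}`) : n;
}

// x → (S → y ⨯ S): the result is a function from a state to a value and a new state
type State<S, Y> = (state: S) => [Y, S];
const counter: State<number, string> = (count) => [`tick ${count}`, count + 1];

// x → ♢y: the result is a value *eventually* (Promise)
function eventually(x: number): Promise<number> {
  return new Promise((resolve) => setTimeout(() => resolve(x * 2), 10));
}
```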
I'm not sure what to do with this knowledge, but it's fun to realize I actually knew one more thing than my teacher. Note that the first case, `x → y`, really *does* cover all sum (union) and product (struct) types, which tells me that the ML-style languages' internal type discrimination features are orthogonal to their encapsulation of non-linear mappings.
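To sketch that point in TypeScript (type names are mine): a plain, total `x → y` function can map into a union (sum) or a struct (product) without any of the extra machinery the other cases need.

```typescript
// Product (struct) type: a pair of values
type Point = { x: number; y: number };

// Sum (union) type: one of several tagged alternatives
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

// Both directions are still plain x → y mappings:
// a struct output...
function midpoint(a: Point, b: Point): Point {
  return { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 };
}

// ...and a union input, discriminated by the `kind` tag
function area(s: Shape): number {
  switch (s.kind) {
    case "circle": return Math.PI * s.radius ** 2;
    case "square": return s.side ** 2;
  }
}
```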

The really weird thing is to realize that the last four are all _order-dependent._ They're all about making sure things happen in the correct order (and at the correct time, if that matters). That leads me to think more about compiler design...
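The order-dependence is easiest to see with Promises, where the encoding itself fixes the sequence of effects; a tiny sketch (variable names are mine):

```typescript
// Each .then() step runs only after the previous one resolves,
// so the observable side effects happen in a guaranteed order.
const log: string[] = [];
const done: Promise<void> = Promise.resolve("first")
  .then((v) => { log.push(v); return "second"; })
  .then((v) => { log.push(v); return "third"; })
  .then((v) => { log.push(v); });
// once `done` settles, log is ["first", "second", "third"]
```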