I have a problem with the shiny. It’s the whole ADHD/Interictal thing interacting. There are so many things I want to learn and I haven’t got the time to learn all of them. Right now, I’m going back to a well I’ve gone to a number of times and dived deep into interpreters and… other things.

Current Learning Spree:

SICP (Again!)

Last week I made my way up to the end of Chapter 2 of Structure and Interpretation of Computer Programs. My impression, after finishing Chapter 2, is that I now get why Haskell and Lisp are lumped together as "functional languages," but, as a writer, I can say that the theme and premise of each language is very different. I can start to see just how easy it would be to implement an object-oriented language in Lisp, how easy it would be to implement Hindley-Milner in classical McCarthy Lisp or its derivatives like Racket and CL, and also why it would be a mistake to do so.

The generic interfaces of classical Scheme seem like a lot of typing. The amount of typing that one has to do, as well as the mystery types of classic Lisp’s unlabeled tuples, are both ergonomic hitches that a postmodern Lisp has to overcome, and I’m not sure how.

Build Systems à la Carte

I also read the first ten pages of Mokhov, Mitchell, and Peyton Jones’ paper Build Systems à la Carte, a lovely little paper about 30 pages long in which the authors set out to find the abstraction underlying build systems, and do a pretty good job of it, all things considered. They create a common vocabulary for build systems that not only encompasses classical systems like xmkmf and make and even ninja, but somehow manages to encompass Microsoft Excel as well!

It occurred to me as I was reading it that if "the store" is the unification of the (local) repository and the filesystem, then version control systems are also build systems with narrow task capability: the task function’s job is, for a given hash, to drive the filesystem to match that hash. There’s an abstraction layer here that’s backward looking, rather than MMP-J’s forward looking, but I can feel there’s a commonality here.
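The paper’s central idea is easier to hold onto with a toy in hand. Here’s my own minimal Go rendering of the abstraction (the paper’s actual formulation is in Haskell; the names and the "busy" recompute-everything strategy below are mine, not theirs): a task computes its key’s value by asking a fetch callback for the values of its dependencies, and the build system decides how fetch is implemented.

```go
package main

import "fmt"

// A Task computes the value for one key by asking `fetch` for the
// values of whatever other keys it depends on. This mirrors the
// paper's Task abstraction, reduced to string keys and int values.
type Task func(fetch func(key string) int) int

// busyFetch is a minimal "busy" build system: it recomputes a task
// on every fetch, with no caching and no cleverness.
func busyFetch(tasks map[string]Task, store map[string]int, key string) int {
	if task, ok := tasks[key]; ok {
		return task(func(k string) int { return busyFetch(tasks, store, k) })
	}
	return store[key] // an "input" key: just read the store
}

func main() {
	// A toy "spreadsheet": B1 = A1 + A2, like the paper's Excel example.
	store := map[string]int{"A1": 10, "A2": 20}
	tasks := map[string]Task{
		"B1": func(fetch func(string) int) int { return fetch("A1") + fetch("A2") },
	}
	fmt.Println(busyFetch(tasks, store, "B1")) // prints 30
}
```

Swapping in a different fetch strategy (memoizing, hash-checking, and so on) is exactly where the paper’s taxonomy of build systems lives.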

Parsing With First Class Derivatives

I’ve been ooh-shinied a lot this week. My Rust skills are getting remarkably rusty as I neglect them, but I want to get back into them. The paper Parsing With First Class Derivatives might be just the hook I need. The examples look OCaml-ish, but I think I can parse them enough to get a Rusty version working, if I’m crazy enough. What attracted me most to the paper was section 3.3, which seems to imply a principled way to tackle Landin’s "Offside Rule," which is important for whitespace-delimited languages like Python, YAML, and CoffeeScript.

Don’t ask why I care. You won’t like the answer.

The first two parts of my swagger tutorial [Part 1, Part 2] were dedicated to the straightforward art of getting swagger up and running. While I hope they’re helpful, the whole point of those was to get you to the point where you had the Timezone project, so I could show you how to add Command Line Arguments to a Swagger microservice.

One thing that I emphasized in Go Swagger Part 2 was that configure_timeofday.go is the only file you should be touching: it’s the interface between the server and your business logic. Every example of adding new flags to the command line, even the one provided by the GoSwagger authors, starts by modifying the file cmd/<project>-server/main.go, one of those files clearly marked // DO NOT EDIT.

We’re not going to edit files marked // DO NOT EDIT.

To understand the issue, though, we have to understand the tool swagger uses for handling command line arguments, go-flags.


go-flags is the tool Swagger uses by default for handling command line arguments. It’s a clever tool that uses Go’s tags and reflection features to encode the details of the CLI directly into a structure that will hold the options passed in on the command line.

The implementation

We’re going to add a single feature: the default timezone. In Part 2, we hard-coded the default timezone into the handler, but what if we want to change the default timezone more readily than recompiling the binary every time? The Go, Docker, and Kubernetes crowd argue that that’s acceptable, but I still want more flexibility.

To start, we’re going to create a new file, timezone.go, in the folder with our handlers. In it, we’ll put a single tagged structure with a single field to hold our timezone CLI argument, using the go-flags protocol to describe our new command line flag.

Here’s the whole file:

package timeofday

type Timezone struct {
    Timezone string `long:"timezone" short:"t" description:"The default time zone" env:"GOTIME_DEFAULT_TIMEZONE" default:"UTC"`
}

If you want to know what you can do with go-flags, open the file ./restapi/server.go and examine the Server struct there, and compare its contents to what you see when you type timeofday-server --help. You can learn a lot by reading even the generated source code. As always, // DO NOT EDIT THIS FILE.

Next, go into configure_timeofday.go, and find the function configureFlags. This, unsurprisingly, is where this feature is supposed to go.

We’ve already imported the timeofday package, so we have access to our new Timezone type. Right above configureFlags, let’s create an instance of this struct and populate it with defaults:

<<add timezone to configure_timeofday.go>>=
var Timezone = timeofday.Timezone{
    Timezone: "UTC",
}

See the comment in configureFlags? Note that swag package? You’ll have to add it to the imports. It should already be present, as it came with the rest of the swagger installation. Just add:

<<new import for swag>>=
    swag "github.com/go-openapi/swag"

And now modify configureFlags():

<<rewrite configureFlags>>=
func configureFlags(api *operations.TimeofdayAPI) {
    api.CommandLineOptionsGroups = []swag.CommandLineOptionsGroup{
        {
            ShortDescription: "Time Of Day Service Options",
            Options:          &Timezone,
        },
    }
}

See that ShortDescription there? When you run the --help option for the server, you’ll see a section labeled "Application Options", and another labeled "Help Options". We’re adding a new section, "Time Of Day Service Options", which will include our customizations. This conceptually allows us to distinguish between the routine options of a microservice and the specific options of this microservice.

Always distinguish between your framework and your business logic. (I’ve often seen this written as "always distinguish between execution exceptions and business exceptions," and that advice applies equally well here.)

You can now build your server (go build ./cmd/timeofday-server/), and run it (./timeofday-server --help), and you’ll see your new options. Of course, they don’t do anything yet; we haven’t modified the business logic!

The Context Problem

This is where most people have a problem. How do the values that now populate the Timezone struct make their way down to the handlers? There are a number of ways to do this. The "edit main.go" people just make it a global variable available to the whole server, but I’m here to tell you doing so is sad and you should feel sad if you do it. What we have here, in our structure that holds our CLI options, is a context. How do we set the context?

The correct way is to modify the handlers so they have the context when they’re called upon. The way we do that is via the oldest object-oriented technique of all time, one that dates back to the earliest days of Lisp: closures. A closure wraps one or more functions in an environment (a collection of variables outside those functions), and preserves handles to those variables even when those functions are passed out of the environment as references. A garbage-collected language like Go makes this an especially powerful technique, because anything in the environment for which you don’t keep handles will get collected, leaving only what matters.
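If closures in Go are new to you, here’s a minimal, self-contained illustration. The names are mine and have nothing to do with the tutorial code; the point is only that the returned function keeps a live handle to the variable it closed over:

```go
package main

import "fmt"

// makeGreeter returns a function that closes over `greeting`.
// The environment survives after makeGreeter returns, which is
// exactly the trick we use below to hand CLI options to handlers.
func makeGreeter(greeting string) func(name string) string {
	return func(name string) string {
		return greeting + ", " + name + "!"
	}
}

func main() {
	hello := makeGreeter("Hello")
	// `hello` still sees "Hello" even though makeGreeter has returned.
	fmt.Println(hello("world")) // Hello, world!
}
```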

So, let’s do it. Remember these lines in configure_timeofday.go, from way back?

    api.TimeGetHandler = operations.TimeGetHandlerFunc(func(params operations.TimeGetParams) middleware.Responder {
        return middleware.NotImplemented("operation .TimeGet has not yet been implemented")
    })

See that function that actually gets passed to TimeGetHandlerFunc()? It’s anonymous. We broke it out, gave it a name, filled it out with business logic, and made it work. We’re going to go back and replace those lines, again, so they look like this:

    api.TimeGetHandler = operations.TimeGetHandlerFunc(timeofday.GetTime(&Timezone))
    api.TimePostHandler = operations.TimePostHandlerFunc(timeofday.PostTime(&Timezone))

Those are no longer references to functions. They’re function calls! What do those functions return? Well, we know TimeGetHandlerFunc() is expecting a reference to a function, so that function call had better return a reference to a function.

And indeed it does:

func GetTime(timezone *Timezone) func(operations.TimeGetParams) middleware.Responder {
    defaultTZ := timezone.Timezone

    // Here's the function we return:
    return func(params operations.TimeGetParams) middleware.Responder {
        // Everything else is the same... except we need *two* levels of
        // closing } at the end!
    }
}

Now, instead of returning a function defined at compile time, we returned a function reference that is finalized when GetTime() is called, and it now holds a permanent reference to our Timezone object. Do the same thing for PostTime.

There’s one more thing we have to do. We’ve moved our default timezone to the configure_timeofday.go file, so we don’t need it here anymore:

func getTimeOfDay(tz *string) (*string, error) {
    t := time.Now()
    utc, err := time.LoadLocation(*tz)
    if err != nil {
        return nil, fmt.Errorf("Time zone not found: %s", *tz)
    }

    thetime := t.In(utc).String()
    return &thetime, nil
}

And that’s it. That’s everything. You can add all the command line arguments you want, and only preserve the fields that are relevant to the particular handler you’re going to invoke.

You can now build and run the server, but with a command line:

$ go build ./cmd/timeofday-server/
$ ./timeofday-server --port=8080 --timezone="America/Los_Angeles"

And test it with curl:

$ curl 'http://localhost:8080/timeofday/v1/time?timezone=America/New_York'
{"timeofday":"2018-03-30 23:44:47.701895604 -0400 EDT"}
$ curl 'http://localhost:8080/timeofday/v1/time'
{"timeofday":"2018-03-30 20:44:54.525313806 -0700 PDT"}

Note that the default timezone is now PDT, or Pacific Daylight Time, which corresponds to the America/Los_Angeles entry in the database in late March.

And that’s how you add command line arguments to Swagger servers correctly without exposing your CLI settings to every other function in your server. If you want to see the entirety of the source code, the advanced version on the repository has it all.

Review of Part One

In Part One of Go-Swagger, we generated an OpenAPI 2.0 server with REST endpoints. The server builds and responds to queries, but every valid query ends with “This feature has not yet been implemented.”

It’s time to implement the feature.

I want to emphasize that with Go Swagger there is only one generated file you need to touch. Since our project is named timeofday, the file will be named restapi/configure_timeofday.go. Our first step will be to break those “not implemented” functions out into their own Go package. That package will be our business logic. The configure file and the business logic package will be the only things we change.

A reminder: the final source code for this project is available on Github; however, Parts One & Two deal with the most common implementation, a server with hard-coded default values. For these chapters, please consult that specific version of the code.

Break out the business logic

Create a new folder in your project root and call it timeofday.

Open up your editor and find the file restapi/configure_timeofday.go. In your swagger.yml file you created two endpoints and gave them each an operationId: TimePost and TimeGet. Inside configure_timeofday.go, you should find two corresponding assignments in the function configureAPI(): TimeGetHandlerFunc and TimePostHandlerFunc. Inside those function calls, you’ll find anonymous functions.

I want you to take those anonymous functions, cut them out, and paste them into a new file inside the timeofday/ folder. You will also have to create a package name and import any packages being used. Now your file, which I’ve called timeofday/handlers.go, looks like this (note that you’ll have to change your import paths as you’re probably not elfsternberg. Heck, I’m probably not elfsternberg):

<<handlers.go before implementation>>=
package timeofday

import (
    "github.com/go-openapi/runtime/middleware"

    "github.com/elfsternberg/timeofday/restapi/operations"
)

func GetTime(params operations.TimeGetParams) middleware.Responder {
    return middleware.NotImplemented("operation .TimeGet has not yet been implemented")
}

func PostTime(params operations.TimePostParams) middleware.Responder {
    return middleware.NotImplemented("operation .TimePost has not yet been implemented")
}

And now go back to restapi/configure_timeofday.go, add github.com/elfsternberg/timeofday/timeofday to the imports, and change the handler lines to look like this:

<<configuration lines before implementation>>=
    api.TimeGetHandler = operations.TimeGetHandlerFunc(timeofday.GetTime)
    api.TimePostHandler = operations.TimePostHandlerFunc(timeofday.PostTime)


Believe it or not, you’ve now done everything you need to do except the business logic. We’re going to honor the point of OpenAPI and the // DO NOT EDIT comments, and not modify anything except the contents of our handlers.

To understand our code, though, we’re going to have to read some of those files. Let’s go look at /models. In here, you’ll find the schemas you outlined in the swagger.yml file turned into source code. Open one and, like many files generated by Swagger, it reads // DO NOT EDIT. But then there’s that function there, Validate(). What if you want to do advanced validation for custom patterns or inter-field relations not covered by Swagger’s validators?

Well, you’ll have to edit this file. And figure out how to live with it. We’re not going to do that here. This exercise is about not editing those files. But we can see, for example, that the Timezone object has a field, Timezone.Timezone, which is a string, and which has to be at least three bytes long.

The other thing you’ll have to look at is the restapi/operations folder. In here you’ll find GET and POST operations, the parameters they accept, the responses they deliver, and lots of functions only Swagger cares about. But there are a few we care about.

Here’s how we craft the GET response. Inside handlers.go, you’re going to need to extract the requested timezone, get the time of day, and then return either a success message or an error message. Looking in the operations files, there are methods for good and bad returns, as we described in the swagger file.

<<gettime implementation>>=
func GetTime(params operations.TimeGetParams) middleware.Responder {
    var tz *string = nil

    if params.Timezone != nil {
        tz = params.Timezone
    }

    thetime, err := getTimeOfDay(tz)


The first thing to notice here is the params field: we’re getting a customized, tightly bound object from the server. There’s no hope of abstraction here. The next is that we made the Timezone input optional, so here we have to check whether it’s nil; if it isn’t, we need to copy it into our own pointer to a string, because Go is weird about types.

We then call a (thus far undefined) function called getTimeOfDay.

Let’s deal with the error case:

<<gettime implementation>>=
    if err != nil {
        return operations.NewTimeGetNotFound().WithPayload(
            &models.ErrorResponse{
                // Field names here follow the generated models package.
                Code:    swag.Int32(404),
                Message: swag.String(fmt.Sprintf("%s", err)),
            })
    }

That’s a lot of references. We have a model, an operation, and what’s that “swag” thing? In order to satisfy Swagger’s strictness, we use only what Swagger offers: for our 404 case, we didn’t find the timezone requested, so we’re returning the ErrorResponse model populated with a numeric code and a string, extracted via fmt, from the err returned from our time function. The 404 case for get is called, yes, NewTimeGetNotFound, and then WithPayload() decorates the body of the response with content.

The good path is similar:

<<gettime implementation>>=
    return operations.NewTimeGetOK().WithPayload(
        &models.TimeOfDay{
            Timeofday: *thetime,
        })
}

Now might be a good time to go look in models/ and restapi/operations/, to see what’s available to you. You’ll need to do so anyway, because unless you go to the git repository and cheat, I’m going to leave it up to you to implement PostTime().

There’s still one thing missing, though: the actual time of day. We’ll need a default, and we’ll need to test to see if the default is needed. The implementation is straightforward:

<<timeofday function>>=
func getTimeOfDay(tz *string) (*string, error) {
    defaultTZ := "UTC"

    t := time.Now()
    if tz == nil {
        tz = &defaultTZ
    }

    utc, err := time.LoadLocation(*tz)
    if err != nil {
        return nil, fmt.Errorf("Time zone not found: %s", *tz)
    }

    thetime := t.In(utc).String()
    return &thetime, nil
}

Now, if you’ve written everything correctly, and the compiler admits that you have (or you can cheat and download the 0.2.0-tagged version from the repo), you’ll be able to build, compile, and run the server, and see it working:

$ go build ./cmd/timeofday-server/
$ ./timeofday-server --port=8080

And then test it with curl:

$ curl 'http://localhost:8080/timeofday/v1/time'
{"timeofday":"2018-03-31 02:57:48.814683375 +0000 UTC"}
$ curl 'http://localhost:8080/timeofday/v1/time?timezone=UTC'
{"timeofday":"2018-03-31 02:57:50.443200906 +0000 UTC"}
$ curl 'http://localhost:8080/timeofday/v1/time?timezone=America/Los_Angeles'
{"timeofday":"2018-03-30 19:57:59.886650128 -0700 PDT"}

And that’s the end of Part 2. If you’ve gotten this far, congratulations! Just a reminder, a working version of this server is available under the “0.2.0” tag at the repo.

On to Part 3


${WORK} has me writing microservices in Go, using OpenAPI 2.0 / Swagger. While I’m not a fan of Go (that’s a bit of an understatement), I get why Go is popular with enterprise managers: it does exactly what it says it does. It’s syntactically hideous. I’m perfectly happy taking a paycheck to write in it, and I’m pretty good at it already. I just wouldn’t choose it for a personal project.

But if you’re writing microservices for enterprise customers, yes, you should use Go, and yes, you should use OpenAPI and Swagger. So here’s how it’s done.

All of the files for this tutorial are available from the elfsternberg/go-swagger-tutorial repo at github. There are two phases to this tutorial, and the first phase is the base Go Swagger implementation. I strongly recommend that if you’re going to check out the source code in its entirety, that you start with the Basic Version, and only check out the Advanced version when you get to Part 3.

Just be aware that if you see stuff that looks like <<this>>, or a single @ alone on a line, that’s just part of my code layout; do not include those in your source code, they’re not part of Go or Swagger. Sorry about that.

Go Swagger!

Swagger is a specification that describes the endpoints for a webserver’s API, usually a REST-based API. HTTP uses verbs (GET, PUT, POST, DELETE) and endpoints (/like/this) to describe the things your service handles and the operations that can be performed against them.

Swagger starts with a file, written in JSON or YAML, that names each and every endpoint, the verbs that endpoint responds to, the parameters that endpoint requires and takes optionally, and the possible responses, with type information for every field in the inputs and outputs.

Swagger tooling then takes that file and generates a server ready to handle all those transactions. The operations specified in the file are turned into function stubs, and "Not implemented" is the only thing they return.

Your job

In short, for a basic microservice, it’s your job to replace those functions with your business logic.

There are three things that are your responsibility:

  1. Write the specification that describes exactly what the server accepts as requests and returns as responses, and generate a server from this specification.
  2. Write the business logic.
  3. Glue the business logic into the generated server.

In Go-Swagger, there is exactly one file in the generated code that you need to change. Every other file is labeled "DO NOT EDIT." This one file, called configure_project.go, has a top line that says "This file is safe to edit, and will not be replaced if you re-run swagger." That exactly one file should be the only one you ever need to change.

The setup

You’ll need Go. I’m not going to go into setting up Go on your system; there are perfectly adequate guides elsewhere. You will need to install swagger and dep.

Once you’ve set up your Go environment (set up $GOPATH and $PATH), you can just:

$ go get -u github.com/golang/dep/cmd/dep
$ go get -u github.com/go-swagger/go-swagger/cmd/swagger


Now you’re going to create a new project. Do it in your src directory somewhere, under your $GOPATH.

$ mkdir timeofday
$ cd timeofday
$ git init && git commit --allow-empty -m "Init commit: Swagger Time of Day."
$ swagger init spec --format=yaml --title="Timeofday" --description="A silly time-of-day microservice"

You will now find a new swagger file in your project directory. If you open it up, you’ll see a short header describing the basic features Swagger needs to understand your project.


Swagger works with operations, which is a combination of a verb and an endpoint. We’re going to have two operations which do the same thing: return the time of day. The two operations use the same endpoint, but different verbs: GET and POST. The GET argument takes an optional timezone as a search option; the POST argument takes an optional timezone as a JSON argument in the body of the POST.

First, let’s version our API. You do that with the basePath setting:

<<version the API>>=
basePath: /timeofday/v1

Now that we have a base path that versions our API, we want to define our endpoint. The URL will ultimately be /timeofday/v1/time, and we want to handle both GET and POST requests, and our responses are going to be Success: Time of day or Timezone Not Found.

<<define the paths>>=
paths:
  /time:
    get:
      operationId: "GetTime"
      parameters:
        - in: query
          name: Timezone
          required: false
          type: string
          minLength: 3
      responses:
        200:
          description: "The time of day"
          schema:
            $ref: "#/definitions/TimeOfDay"
        404:
          description: "Time zone not found"
          schema:
            $ref: "#/definitions/NotFound"
    post:
      operationId: "PostTime"
      parameters:
        - in: body
          name: Timezone
          required: false
          schema:
            $ref: "#/definitions/Timezone"
      responses:
        200:
          description: "The time of day"
          schema:
            $ref: "#/definitions/TimeOfDay"
        404:
          description: "Time zone not found"
          schema:
            $ref: "#/definitions/NotFound"

The $ref entries are a YAML thing for referring to something else. The octothorpe symbol (#) indicates "look in the current file." So now we have to create those definitions:

<<definitions>>=
definitions:
  NotFound:
    type: object
    properties:
      code:
        type: integer
      message:
        type: string
  Timezone:
    type: object
    properties:
      Timezone:
        type: string
        minLength: 3
  TimeOfDay:
    type: object
    properties:
      TimeOfDay:
        type: string

This is really verbose, but on the other hand it is undeniably complete: these are the things we take in, and the things we respond with.

So now your file looks like this:

swagger: "2.0"
info:
  version: 0.1.0
  title: timeofday
  description: "A silly time-of-day microservice"
consumes:
  - application/json
produces:
  - application/json
schemes:
  - http

# Everything above this line was generated by the swagger command.
# Everything below this line you have to add:

<<version the API>>

<<definitions>>

<<define the paths>>

Now that you have that, it’s time to generate the server!

$ swagger generate server -f swagger.yml

It will spill out the actions it takes as it generates your new REST server. Do not follow the advice at the end of the output. There’s a better way. Use dep, which will automagically find all your dependencies for you, download them to a project-specific vendor/ folder, and lock the specific commits in the record so version creep won’t break your project in the future. dep has even become Google’s recommended dependency control mechanism. Just run:

$ dep init

Dependency management in Go is a bit of a mess, but the accepted solution now is to use dep rather than go get. This creates a pair of files, Gopkg.toml and Gopkg.lock: one describing the Go packages your project uses, and one describing the exact versions of those packages that you last downloaded into the ./vendor/ directory under your project root.

Now you can build the server:

$ go build ./cmd/timeofday-server/

And then you can run it. Feel free to change the port number:

$ ./timeofday-server --port=8082

You can now tickle the server:

$ curl http://localhost:8082/
{"code":404,"message":"path / was not found"}
$ curl http://localhost:8082/timeofday/v1/time
" function .GetTime is not implemented"

Congratulations! You have a working REST server that does, well, nothing.

For Part 2, we’ll make our server actually do things.


Engineering Notebook: More Rust Basics

Posted by Elf Sternberg as Uncategorized

Continuing on with my engineering notebook, here’s what I’ve been learning about Rust recently. Mostly still following Beingessner’s book, which is about memory management primitives in Rust.

The first thing, as I covered last time, is that the book teaches Rust’s peculiar (but excellent) flavor of memory management. Along the way, it teaches Rust’s sum types (using enum as its keyword), exhaustive pattern matching, and Rust’s maybe types (using Option<>).

One thing it doesn’t teach is tuple structs. I had to look them up; I keep encountering them in other people’s code, but they weren’t obvious to me and I couldn’t remember seeing them in The Rust Programming Language, so I kept calling them "tuple types," but no, they’re called tuple structs, and you access their internals via a dotted index:

struct Color(i32, i32, i32); // Red, Green, Blue
let lightskyblue = Color(135, 206, 250);
let amount_of_blue = lightskyblue.2;

Box<T>: The thing with Rust is that you still have to care, a lot, about whether something is on the stack or the heap. Using the Box<T> type automatically puts something on the heap, and is tracked in such a way that Rust deletes it automatically when the handle to it goes out of scope.

Rc<T> is a reference-counted pointer, like Box, that allows multiple handles to point to T; it counts the number of handles referring to T, and automatically deletes T when the count goes to zero.

Arc<T> is just like Rc<T> except that the reference count is updated atomically, which allows multiple threads to reference the object safely.

RefCell<T> allows one to borrow a referenced object from inside an Rc or Arc or other immutable container. This allows the user to mutate the contents of what would otherwise be immutable. When using RefCell<T>, you must use .borrow() or .borrow_mut() to "dynamically borrow" an object that normally would not be borrowable.

Rust continues to be… weird. This one in particular gets me:

pub struct List<T> { head: Link<T>, }
type Link<T> = Option<Rc<Node<T>>>;
struct Node<T> { elem: T, next: Link<T>, }
pub struct Iter<'a, T:'a> { next: Option<&'a Node<T>>, }

// In the body of impl<T> List<T>:
pub fn iter(&self) -> Iter<T> {
    Iter { next: self.head.as_ref().map(|node| &**node) }
}

I mean, I get that Rc is a pointer, and as_ref turns the head object into a reference to the head object, and map only works if node is not None, so we have to dereference it twice, and then to make Rustc happy that we’re saving a reference mark it as a reference to this twice-dereferenced thing, but… oy. That just looks ugly.

And yes, I get that I have to internalize all this before I can even begin to write Rust meaningfully.


Programmer Competency Matrix

Posted by Elf Sternberg as Uncategorized

So I’ve been looking at Sijin Joseph’s “Programmer competency matrix,” and I’m going to go through it because I kinda want to know where I stand. But I have to tell Sijin one thing: information architecture is something he needs to improve on. By using the first person in his “about me” page, he somehow managed to avoid telling people his name throughout most of the blog! As a recovering web designer, I’m shocked whenever I see someone’s blog or website work really, really hard to avoid putting its identity in the header.

So let me go through the matrix.

Computer Science

Data Structures: Level 2, almost 3

I’m a little weak on some data structures, especially those that are hybrids, such as an efficient Least-Recently-Used hashtable, as well as those that lead to persistent storage.

Algorithms: Level 2, almost 3

More or less the same thing.

Systems Programming: Level 2, almost 3

Again, I feel like I’m weak with some of the features of systems programming, such as dynamic linking, JIT compilation, and garbage collection. I know what those things are, and I’ve implemented primitive versions of them, but things like generational garbage collection and peephole optimization aren’t yet in my vocabulary.

Software Engineering

Version Control: Level 3

I use Git for everything. I started with CVS, then SVN, now Git. I use Git for tracking my documentation. Hell, I use Git Flow to track my fiction writing!

Build Automation: Level 3

I’ve written build automation systems. I understand how build systems can be partial or complete; can be topological, reordering, or recursive; can have static or dynamic dependencies; can perform a cutoff if a dependency is identical to the previous iteration and therefore its dependents don’t need rebuilding.

Automated Testing: Level 2

I love getting into TDD when a problem becomes complex enough that functional products need to be defined, described, and generated. I don’t have much strength with UI or performance tests.

Problem Decomposition: Level 3

Also: knows damn well when not to implement the super-clever hyper-abstract generic solution when the next guy who has to look at and maintain my code isn’t a level 3!

Systems Decomposition: Level 3

But… I have a tendency to work only at level 2. I don’t enjoy doing heavy integration at the process or container level; it’s abstract and distant, and feels a bit like abdication to entropy.

Communication: Level 2

It depends on the kind of communication. I have no trouble describing what I want to build, or am going to build; I have more than once saved a project by being the only one on the team who knows how to drive a UML rendering tool. But sprint planning, negotiating for who builds what when, that’s always rough.

Code organization: Level 3

I work really hard to make my code readable and beautiful, and I hope the next person to read it will appreciate that. I use a thesaurus constantly to make sure I’ve used exactly the right name for a variable or other symbol.

Source tree organization: Level 3

I tend to work simply, with the best practices of my industry in mind. Kinda the reason I don’t like Go: there are two “best practices,” and they’re completely at odds with each other.

Code readability: Level 3

Cyclomatic complexity checks for the win! This one is actually a bit hard to hit in my favorite language, Lisp, because it encourages that kind of complexity with anonymous functions and efficient local dispatch, but I manage.

Defensive coding: Level 2

I’m okay at it, and I know what to look for. I use linters and other semantic checkers a lot. But I’m definitely falling back on common tools, rather than custom tools, for my efforts.

Error handling: Level 2

I’d love to be level 3, but level 3 is only possible in languages the industry doesn’t use.

IDE: Level 3

I work in Emacs. What do you think?

API: Level 2

I’m not sure what he means here by “API.” Does he mean the ones we use in common practice, like HTTP and REST and GRPC? Sure, I know those. But what about the ones built on top, like OpenAPI? Those are as varied as the use cases. Does he mean LibC? I don’t work much in LibC these days.

Frameworks: Level 2

I consider that a badge of honor. There are too many frameworks out there. The only reason to write a framework these days is to fill in a gap in a language that doesn’t already have one. Frameworks are like Patterns: holes in the language.

Requirements: Level 3

This actually goes back to “communication.” Because I know Rational Rose, I know how to extract requirements and scope them out better than most people.

Scripting: Level 3

Just go look at my Github!

Database: Level 2, almost 3

My SQL is pretty good; I’ve even published solid and testable extensions to Postgres. I do know some optimizations and can talk about storage engines, but the industry is getting pretty fuzzy these days.


Languages with professional expertise: Level 2, almost 3

And yes, I hit the bonus round. I can talk about monads.

Platforms with professional experience: Level 2

I’ve been pretty monogamous about my platforms. That gives me a lot of depth when talking about Unix-based services on AWS, but yeah, I can’t talk too much about Azure or GKE at the moment.

Years of professional experience: Level 3

Yep. More than ten years of experience.


Tools: Level 3

Yep, written tools other people use. Knows better than to use an ORM.

Languages: Level 3

Try me.

Codebase: Level 2

Here’s the thing about Sijin’s “level 3”: I don’t believe this is a valuable skill. If your codebase has this kind of headaround problem, you have a codebase with poor decomposition.

Upcoming technologies: Level 3

I tend to be a neophile and love playing with new things. Unfortunately, I often love playing with new, obscure things like Typed Assembly Language and Python Lisp, which means my neophilia often has me working in spaces where there’s not much legacy life. Ah, well.

Platform internals: Level 2

Sijin’s “level 3” seems to be all over the place. Sometimes it’s industrial, and sometimes, as in this case, it’s hyper-academic stuff only ever seen by, like, 20 people worldwide.

Book: Level 3

Yes, I’ve actually read “Structure and Interpretation of Computer Programs” and “Lisp in Small Pieces” all the way through. I like how “Mythical Man Month” is required reading for Level 2.

Blogs: Level 3


The second thing I’ve been reading about this week is OpenAPI, a way of specifying REST APIs. Well, a way of specifying the API for a server that defines its actions in terms of well-known endpoints and HTTP verbs like GET, PUT, POST, and DELETE. (As well as HEAD, OPTIONS, PATCH, but presumably not the 32 others that Microsoft tried to burden us with for WebDAV.)

Since we’ve been using Go, we’ve been using go-swagger, a tool for building web server frameworks from OpenAPI definitions. OpenAPI defines a document, usually written in JSON or YAML, that describes a client, a server, or both, and the API that will be used to communicate between them.

The basic OpenAPI file has a header with some versioning and metadata information, as well as the scheme used, which is almost always HTTP (although I understand WebSockets can be used here instead). After that, there are two sections:

paths: describes the URL that the user will follow to get to a resource, followed by a verb (get, put, post, delete), followed by parameters (either in the body or in the CGI arguments), as well as the name of the function that will handle the transaction.

definitions: contains a collection of named objects, each with a JSON schema, that describes the payloads. You refer to these named objects in the paths: section, thus creating a relationship between the target and the schema.

A really simple one might just return the time of day as a string, taking the timezone as a single argument:

swagger: "2.0"
info:
  title: clock
  version: "1.0"    # info.version is required by the spec
basePath: /clock/v1
produces: ["application/json"]

paths:
  /time/{timezone}:
    get:
      operationId: "time"
      parameters:
        - in: "path"
          name: "timezone"
          required: true    # Swagger 2.0 insists path parameters be required
          type: "string"
      responses:
        200:
          description: "the current time in the requested timezone"
          schema:
            type: "string"

Go’s most common implementation of Swagger, go-swagger, automatically generates a web server for you based on this definition. Your only responsibility is to find the file configure_clock.go, find the function time(), and fill in the details. You get the parameters as a map with their types completely filled out, and go-swagger will enforce requirements like the timezone passed in being a string. It could be a string of anything, though; you’ll still have to validate that it parses to a valid timezone.

OpenAPI is really nifty: It locks down “what you mean” about verbs and objects at the HTTP/JSON layer. It is not REST. REST requires more discipline than this. OpenAPI makes that discipline fairly easy, but the developer still has to know and use it. OpenAPI does not in any way enforce the “coarse-grained document handling” that was introduced by Leonard Richardson, and it’s completely possible that many developers will be introducing SOAP-like commands using HTTP verbs.

I do like how expressive Go is, at least at the simple level of “I want to do something.” I really dislike how verbose Go is, especially when you’re trying to do anything that might have a lot of side effects. It’s an ugly language, and I’m never going to enjoy working in it fully. But it’s easy to get good at it, so I guess I’m going to be good at it.

I’ve been doing poorly on my engineering notes recently, mostly because I haven’t been having many successes worth writing about. On the other hand, I did manage to get something working in Go.

At work, we had a problem: we wanted to deploy CookieCutter, but we wanted it deployed as a service, and in a Golang-only environment. So I ended up re-writing the core algorithm in Go and hooking it into the server, and it more or less works.

I… really don’t like Go. At all. Its weird ad-hoc polymorphism with three-feature signatures just feels all wrong and broken to me. People used to call Python a "bondage-and-discipline language" because you were not allowed to be sloppy, at all, with your whitespace. Go makes Python look positively libertine. You are not allowed to experiment in Go. Warnings are errors: if you block out some code, you have to block out any imports that are referenced only in that code. It makes for an annoying development experience. Go is not only for mediocre developers; it genuinely makes potentially great developers into mediocre ones by constantly discouraging them from trying any harder.

I mean, I get it. For what it does, Go does it really, really well. And it probably is the best thing in the space it occupies right now. But that space just feels dumb to me.

But here’s what I have learned:

The cheesiest way to test if a file is a binary is to open it, read in 512 bytes, and ask the UTF-8 library, "Does this look like valid text to you?"

The Filepath library looks exactly like Python’s. Walk() even works more or less the same way.

I really wish Go had generics that programmers could use. It obviously has them internally: slices and maps can take interface{} as an arbitrary value type!
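What "generics for the built-ins only" looks like in practice: the container compiles with any value type, but every read needs a runtime type assertion to get the static type back (a sketch; getInt is my name for the idiom):

```go
package main

import "fmt"

// getInt pulls an int back out of an interface{}-valued map, the
// pre-generics Go idiom for "container of anything".
func getInt(m map[string]interface{}, key string) (int, bool) {
	v, ok := m[key].(int)
	return v, ok
}

func main() {
	// The map happily holds mixed value types...
	config := map[string]interface{}{
		"retries": 3,
		"host":    "localhost",
	}
	// ...but the static type is gone, so reads are checked at
	// runtime rather than compile time.
	n, ok := getInt(config, "retries")
	fmt.Println(n, ok) // 3 true
	n, ok = getInt(config, "host")
	fmt.Println(n, ok) // 0 false
}
```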

The OS library looks exactly like Python’s.

So does the Path library.

I really wish Go had higher-kinded types, if only so error-handling monads and railway-oriented programming could be a thing.

Although it’s easy to miss in the documentation, a *bytes.Buffer implements io.Writer (and io.Reader), so we can all stop implementing our own Read() and Write() methods on top of strings.

The Templating Library is awkward, using the Clojure-ism (!) of a leading dot to distinguish between a symbol lookup and a symbol invocation. But it’s nicely powerful and feels a bit like Jinja2.
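The leading-dot convention in a few lines of text/template (the template text and the render helper are mine): the dot is the "current object," and {{.Name}} looks up the Name field on whatever value Execute is handed.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render fills in a tiny template; note the leading dots marking
// lookups on the data object, and the built-in len function.
func render() string {
	t := template.Must(template.New("greeting").Parse(
		"Hello, {{.Name}}! You have {{len .Items}} item(s)."))
	data := struct {
		Name  string
		Items []string
	}{"world", []string{"a", "b"}}
	var buf bytes.Buffer
	t.Execute(&buf, data)
	return buf.String()
}

func main() {
	fmt.Println(render()) // Hello, world! You have 2 item(s).
}
```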

The Go version of shutil.copy() does not reproduce the file permissions of the source correctly. Write your own.

For such a damnably restricted language, the documentation tool is unforgivably louche.

I really wish Go had macros.

Go has closures. It has functions as first-class objects. Its lack of the features that usually come with these is inexcusable.

The ioutil.TempDir() works exactly like the Python version.

I really wish Go had an implemented version of du(1). Guess I’ll have to write that myself.

The syscall.Statfs() works exactly like Python’s os.statvfs.

Rob Pike, the principal architect of Go, wrote of it:

Our programmers are not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

I’ve been thinking a lot about where the death’s head symbol, ☠, appears in the Python semantic analysis. The Python language, underneath all the churn and symbols, is only about 40 semantic rules in size (see Python: The Full Monty), and most of those are fairly well-defined.

The problem lies in this simple example:

def f(y):
    if y > 9:
        x = True
    return x

This is where the ☠ symbol appears in the semantic specification. For the given environment Γ, the value of x is bool | ☠, meaning that it’s possible this function terminates the program. If you pass a value less than 10, this function throws a fatal exception.

This particular case is easy to identify. But Python’s class slots are stringly defined, and in some cases indistinguishable from hash tables. It’s entirely possible to pass an object to a function, have the function manipulate the object’s fields, and then return to the caller an object with unexpectedly missing fields, which triggers the ☠ semantic. This becomes even more problematic when we consider that those manipulations may come from tainted external data.

The problem is that literally all values in Python come with an implicit v|☠ as their bound value when a scope is entered. It is impossible for a Python program to be provably correct. The external source of names is a significant problem: unlike Perl, Python lacks a language-based mechanism for tracking tainted data. This has led to some fantastic magic, such as Django’s ORM, but it also leads to fragile coding practices.

Python manages this with a robust try/except mechanism that developers learn to throw around potentially broken functions. That seems to be “good enough,” and certainly Python’s popularity is a sign that Python got something right. I just can’t help but feel that there has to be a way to get one step further: to secure Python against its ☠ deathwish and retain most of Python’s flexibility. I wonder how many of Python’s 100 most popular libraries are dependent upon the v|☠ behavior, and how much of that can be identified and automatically fixed by static analysis.

I’m still deep into learning Rust, but I’m going to come out of it sometime soon. It’s been really slow, and I’ve discovered that I can’t let it go for more than a day or two or what I’ve learned really starts to fade. And it’s not something I get paid to work on, so it’s really hard to find the time, even a little, to refresh the memory cells and keep it from fading.

My objectives at this point are pretty far out. Once I get a couple of Rust projects, minor ones, under my belt, I’m going to, as the expression goes, Go For It: I’m going to write my goddamn Lisp. I’m going to follow Thorsten Ball’s Writing an Interpreter In Go, only I’m going to write it in Rust. I’m going to use Matt Might’s Parsing with Derivatives for the front end, Hy’s keywords and symbols as my starting point for syntax, The Full Monty as my core semantics, and maybe a real-time garbage collector just for fun. Then I’m going to start taking Python semantics away and see just how much of the Python standard library works without requiring the death’s head, and how much performance I can tweak out of Python when I don’t have to live with it.

And then, just to make sure that at least one tickmark on the cruel [Language Design Checklist] isn’t ticked, I’ve got a few things I intend to write in it. Just to see how far I can get. Yeah, it’s gonna be a PLOT (Programming Language for Old Timers), but so what? It has to be more fun than hacking Go.

I wonder if cloud computing is turning software development into a worthless, even dangerous, parasite on our economy and our world.

It’s common knowledge among economists that most of the financial sector just moves money around, taking a cut, without adding value. Various brokers and traders and "financial planners" are just salesmen who are there to move product and take a cut.

The point of the financial sector is to produce a single service: to encourage investment (not speculation) in known-growth endeavors. That’s it. They’re to take "extra" money you or I might have, pool it and, applying knowledge they have the time and energy to dedicate to doing the research, allocate it efficiently to those sectors of the economy that need to and can grow. Presuming the sector grows, your percentage of the pool plus the growth is returned to you, minus the brokerage fee.

Everything else the industry does is… absurd, really. It’s just moving money around from one account to another in a speculative game of musical chairs where the last institution out loses all its money, which is distributed to the winners. Little knowledge is applied, and no growth or efficiency are generated in the sectors of the economy that actually create goods and services.

And yet, that "everything else" accounts for two-thirds of the financial sector right now!

I was reading a "cloud computing" document and one of the bullet points read • Integrate, don’t Invent. The basic idea was that your company, whatever it is, is some start-up with a crazy idea. Uber for Vibrators, or Amazon for Cattle, or IMDB for Bottle Caps. You need a lot of software: web stuff, back-end, security, uptime, database, monitoring. And sure, you don’t want to have to write your own database engine or your own webserver.

But these days you don’t even have to care about which database or webserver. You don’t have to care about efficiency or performance. You just throw money at it. Instead of understanding your system, you outsource it; you pay someone outside your company to do it.

I used to think that software development was a discipline of engineering: that is, a mechanistic, outcome-driven discipline that led to a usable result. I’ve become convinced that it’s also a discipline of anthropology: we write software for human beings, and it’s our responsibility as developers to help other developers understand what we’re doing, and to help our end users achieve the results they seek.

But cloud computing has become something else entirely: Applied daemonology. We no longer care how good a database is: if it takes SQL and spits out the rows, we wrap it in a Docker container, shove both the container and some dollars at Kubernetes, and hey… it seems to work. It passes the unit tests. It’s therefore good enough. You send it input, you get the output. If not, you tinker with a few config files and maybe it works this time. Or maybe you’ve just got too many customers, so instead of tuning your queries and your environment, you throw more dollars at it. Repeat the incantation until the spell is working as expected.

In the cloud universe, a developer’s mind ceases to be a deeply informed connection machine trying hard to efficiently use the resources at hand. Instead, it becomes a broadly and thinly informed reference manual gluing together business logic and sidecars and facades without really understanding what goes on inside them. "Malloc? LOL! Fuck it, just give Jeff more money!" seems to be the order of the day.

I suspect we’re going to regret it.
