I’m 53 years old and still consider myself a software engineer whose professional job isn’t writing code, it’s producing value using coding skills. I recently took a year off to go back to school, and trying to find a job at 53 turned out to be a challenge. I do think I encountered a lot of ageism in the interview process, but I also think that taking a year off left me vulnerable to skill rot.

Fortunately, I did find a job, and my current employer looked past the rustiness to see that I could do the job he needed me to do. And this week I had an experience that, well, frankly, has me going, "To hell with the ones who didn’t hire me."

In my last job at Splunk, I wrote a lot of Go, a lot of OpenAPI, a little bit of Docker, a little bit of Kubernetes, a little bit of AWS, and a modest amount of React. In previous jobs, I had written tens of thousands of lines of Python, Javascript, CSS, and HTML, as well as shell scripts, Perl, C/C++, even some Java, along with all the maintenance headaches of SSH and so forth.

In my year off I spent my time taking classes in low-level systems programming (Programming Languages and Compilers), and I have a half-finished implementation of a PEG parser based on an obscure algorithm for parsing and lexing that nobody has ever fully implemented in a systems language. I chose Rust because its performance is close to C++’s and its compiler choices are sensible and helpful.

In my new job, part of my task involves helping the team "level up." They’re mostly SDET-1s (Software Development Engineer in Test, level 1) producing tooling for the Quality Assurance teams, who spend a lot of their time playing games (it is a gaming company) and then reporting on any problems they find. The QA folks aren’t the sort to work in JIRA or Bugzilla; they needed a straightforward tool that understands their needs.

The tool they’ve built is in Django, which is great for prototyping, but it’s now hitting its limits. Meanwhile, we’re all "moving to the cloud," as companies seem to like to say these days, and my current big side project is to write a demo on what a cloud-based solution looks like. I decided to take two pages from the existing app (the login page, and the list of existing reports) and make them work.

I spent two to four hours a day over eight days, and here’s what I learned (and hopefully will document soon):

  • React Hooks
  • React Context
  • Material UI
  • Material themes
  • Golang’s LDAP interface
  • Golang’s Postgres interface
  • CORS
  • JWT & PASETO
  • Multi-stage Docker builds
  • Docker swarm
  • Docker stack

I knew a little bit about each of these. I did the demo yesterday, using live data, and jaws dropped. "That was fast" (It’s not Django) and "Have you seen our tooling? That looks better than anything else! It looks like us." (I found the company style guide and applied the colors and fonts to the Material Theme engine). Then the reveal: The Django app container is 2.85GB, but my project fit into 37MB; an automated security scanner screams in horror when examining the existing project, but mine had only one error: it’s a demo, so I didn’t have a "good" SSL certificate.

Some of these things (like multi-stage builds) I’d only ever heard about. But the simple fact is that none of this is particularly new. Hooks just wrap ordinary React; React.Context is just React.State writ large. Material is just a facade, themes are just well-organized CSS styles, Golang’s interfaces are a lot like Python’s or Rust’s, LDAP is just a query engine with an annoying call sequence, JWT is just Web Cookies with some rules. Swarm and Stack are variations on Kubernetes and Compose. Multi-stage builds were… Makefiles, "but, like, for the cloud, dude."
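If you’ve never seen one, here’s a minimal sketch of the kind of multi-stage build I mean, for a Go service like my demo’s backend (the stage names, paths, and base images here are illustrative, not the actual project’s): the build stage carries the whole toolchain, and only the finished binary gets copied into the image you actually ship, which is how 2.85GB becomes tens of megabytes.

    # Build stage: the full Go toolchain lives here and never ships.
    FROM golang:1.13 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /reports ./cmd/reports

    # Runtime stage: only the static binary comes along.
    FROM alpine:3.10
    COPY --from=build /reports /usr/local/bin/reports
    EXPOSE 8080
    ENTRYPOINT ["/usr/local/bin/reports"]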

In almost every case I already had a working mental model for each feature; I just needed to test whether my model was accurate and adjust accordingly. Some things are still fuzzy: Docker networking is fun and seemingly sensible, but I haven’t memorized the rules yet.

I can contrast this with my academic pursuits, like Rigged Regexes and Seam Carving, which are both taking forever. Part of the reason they’re taking forever is that they involve genuinely new ground: no one has produced a systems-language implementation of Brzozowski’s Algorithm, and I have yet to find a multi-threaded, caching implementation of the Seam Carving algorithm for a low-level Unix environment like ImageMagick or NetPBM. (Another part now is that, hey, I’m working full time; I get like 30 minutes every day to work on my Github projects, usually during the commute.)

In short, the people who didn’t hire me didn’t hire me because, if I were a construction worker, they were an all-Makita worksite and I owned a DeWalt drill.

Which is absolutely nuts. But that’s the way this industry works.

At the ${NEW_JOB}, we’re going through the process of leveling up, and one of the steps of “leveling up” as a software company is watching Robert Martin’s (aka Uncle Bob) Clean Code video series, where he presents his ideas about how code should be thought about, organized, and written. A lot of the material is okay, and there’s definitely something to be said for his ideas about separation of concerns.

In the latest one, for example, he said that when reviewing the design of a system, it should shout what it’s about. A payroll system should shout “Payroll!” Frequently, though, it doesn’t; the organization of a project in a modern framework is often about the framework. Django apps are organized around the needs of the deployment scheme, and so are Rails apps and so forth. (WebObjects, the original web framework from 1996, is said to have been modeled directly on Trygve Reenskaug’s 1979 paper on model-view-controller, so I wonder if it’s different.)

Bob’s organizational scheme is that there are Entities, which contain knowledge; Interactions, which handle the events that cause entities to change; and Boundaries, which present the permissible set of events to end users. “The web stuff” should only talk to boundaries and should be separate from the rest of the system; it should be possible to swap out the web-based presentation mechanism entirely, and to scale and price the two parts separately.
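A minimal sketch of how I read that scheme, in Rust terms (the names here are mine, invented for illustration, not Bob’s):

    // Entity: contains knowledge and knows nothing about delivery mechanisms.
    pub struct Report {
        pub id: u64,
        pub title: String,
        pub resolved: bool,
    }

    // Interaction: handles an event that causes an entity to change.
    pub struct ResolveReport {
        pub report_id: u64,
    }

    // Boundary: the permissible set of events presented to the outside.
    // "The web stuff" talks only to this; it never touches entities directly.
    pub trait ReportBoundary {
        fn resolve(&mut self, request: ResolveReport) -> Result<(), String>;
        fn open_reports(&self) -> Vec<Report>;
    }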

This is a great idea, and one I subscribe to whole-heartedly. (I also love that it’s inherent in the way Rust does CLI programs as “libraries first”; that’s not in the compiler, but every example did it that way and the community has learned to do it that way.)

But there’s one thing about Bob’s scheme that’s been bugging me. And I think I know what it is. Take a look at this:

See that blue arrow pointing at the connector between the two objects? What is that?

“Well, it’s a function call. Duh.”

Great. What is it? What is a function call?

You see, in Bob’s world, a function call is something he never has to worry about. Compilers just do all that stuff for you. A compiler sets up space on the stack for the arguments to the composed object, as well as empty space for the returned value. All the complicated underlying stuff about allocating memory, protecting it from overwrites, reclaiming it when you’re done with it, all that stuff has been elided from Bob’s reasoning, and from yours and mine.

And that’s fantastic. With the exception of some extreme edge cases, usually suffered by the people who write database cores and operating systems for a living, we no longer need to worry about a lot of machine details.

Until we move to The Cloud.

Look at that blue arrow again. Now imagine that, instead of a function call where all the details about the ABI (application binary interface) are hidden under a warm comforting blanket called the CLR or the JVM or dot-DLL or dot-SO or whatever, they’re REST or gRPC or whatever network interface you want to imagine. Those calls need to be protected from outside prying eyes by TLS-hardened pipes, walled off by private networks, secured with JWT or PASETO, operated inside Docker containers, and managed by Kubernetes. Each service needs to be “stateless,” meaning it has to be able to die and restart and recover immediately where its previous incarnation left off.

You can still write the core of your system using Uncle Bob’s UML-inflected notations about the relationships between objects. (That said, I find object orientation to sometimes be a fetish. Sometimes the elegant implementation is just a function.) But the lines between Entities, Interactions, and Boundaries are no longer handled for you by friendly compilers. You have responsibility for them. Each object in the system, or some cluster of objects, needs to be wrapped in layers and layers of security (since “the cloud” is just other people’s computers you happen to be renting), performance monitoring, and recovery management. And you have to specify the interfaces between those objects manually, usually using something like Swagger or OpenAPI or some hand-turned REST thing for which no documentation exists.

Sometimes I think this is where cloud-oriented programming has gone terribly wrong. We dove into this highly performant and redundant system without thinking harder about how we could achieve the ease of use toward which that blue arrow once pointed.

David J Prokopetz asks, “What’s the most asinine technical requirement you’ve ever had to deal with?” I don’t know if it qualifies as “asinine,” but one of the most idiotic requirements I ever ran into during an assignment was simply one about contracts, money, and paranoia about the GPL.

It came down to this: in 1997, the Seattle-based company I was working at had been “acquired” by CompuServe (CIS) to be CIS’s “Internet division,” and as part of the move we were required to move to RADIUS, the Remote Access Dial-In User Service, an authentication protocol for people who dialed into an ISP using their landlines, so that our customers could dial in through CompuServe’s banks of modem closets. That was fine.

What wasn’t fine was that, at the time, CompuServe’s central Network Operations Center (NOC) in Columbus, Ohio, was 100% Microsoft NT, and we were a Sun house. The acquisition required a waiver from Microsoft, because CIS was getting huge discounts for being a pure MS play. We were told that if we had to run on Solaris, then we also had to run a pair of RADIUS servers written for NT and ported to Solaris, plus a pair of Oracle servers (CIS had a lot of contractual obligations about who they purchased software from as a result of their NT centricity), and, to make them line up, we also had to buy ODBC-on-Solaris shims that would let our ODBC-based RADIUS servers talk to Oracle, despite all of this running on Solaris.

So we had four machines in the rack, two running this RADIUS hack and the ODBC drivers, and two running Oracle. Four machines and the software alone was $300,000.

And it crashed every night.

“Yeah, it’s a memory leak,” the RADIUS server vendor told us. “We’re aware of it. It happens to NT too. We’ll get around to fixing it, but in the meantime, just reboot it every night. That’s what the NT people do.”

Now, at the time, there was a point of pride among Unix programmers: we didn’t reboot machines unless lightning struck them. We could refresh and upgrade our computers without trauma and without having to engage in therapeutic reboots. We had uptimes measured in years.

The counterpoint is that there was a GPL-licensed RADIUS server. We were allowed to use GPL-licensed code, but only under extremely strict circumstances, and in no case could we link the GPL-licensed RADIUS server to Oracle. That was a definitive ‘no.’ We had to use the ones CompuServe ordered for us.

So Brad, my programming buddy, and I came in one weekend and wrote a shim for the RADIUS server that used a pair of shared memory queues as a full-duplex communications channel: it would drop authentication requests into one, and pick up authentication responses in the other. We then wrote another server that found the same queues, and forwarded the details to Oracle over a Solaris-friendly channel using Oracle Pro*C, which was more performant and could be monitored more closely.

We published the full-duplex-queue for the RADIUS server, which was completely legit, and legal let it go without wondering why we had written it.

A couple of months later my boss calls us in. In his fine Scottish brogue he says, “I haven’t seen any case reports coming out of the RADIUS server in a while. I used to get one a week. What did you do?”

Brad and I hemmed and hawed, but finally we explained that we’d taken the GPL RADIUS servers and put them on yet another pair of Solaris boxes, in front of the corporate ones. We showed him the pass from legal, and how we’d kept our own protocol handler in-house and the CIS IP separate (he was quite technically savvy), and how it had been ticking over without a problem for all this time.

“But we’re using the corporate servers, right?” he asked.

“Oh, sure,” I said. “If ours ever stops serving up messages, the failover will trigger and the corporate RADIUS boxes will pick up the slack.”

He nodded and said, “Well done. Just don’t tell Columbus.”

Ultimately, we had to tell Columbus. A few months later CIS experienced a massive systemic failure in their RADIUS “wall,” a collection of sixty NT machines that served ten times as many customers as our foursome. Brad and I were flown to Columbus to give our presentation on why our system was so error-free.

After we gave our presentation, the response was, “Thanks. We’d love to be able to implement that here, but our contract with Microsoft means we can’t.”

There are many reasons CompuServe isn’t around anymore. This was just one of them.

One of my biggest roles at my new job is mentor, and I’ve quickly learned a five-word phrase that my peer-students have come to understand and respect: "I don’t know that yet."

The team I’m mentoring consists mostly of QA people who are moving into SDET roles, with a healthy side of SDET tooling development. I did a lot of mentoring at Isilon, and I did a lot of SDET tooling at Splunk, but I don’t think I did both at the same time so this has been interesting. There are two groups inside this team: one is developing reporting tools using Django and React, the other is developing DevOps tools using Docker and Python.

I know both of these worlds very well, but having been out of both of them for three years, two because Splunk needed me to write a lot of Go and Kubernetes and one because I wanted to learn Rust and systems programming, I’m having to reboot those skills.

The team knows this, but they’ve learned that "I don’t know that yet" from me doesn’t mean I can’t find the answer. What it means is that I’ll probably come up with it quicker than they will. Often I go back to my desk, type in some magical search term, and walk back saying, "Search for this. We’ll find the answer in these keywords."

Experience at my stage isn’t about knowing things; it’s about knowing how to know. This is what I’m trying to teach them, more than anything else. How do you search for the answer, and how will you incorporate it into your code? I’ve been a little harsher on the Docker people; they’re in the habit of cutting and pasting from Stack Overflow without really understanding the underlying mechanisms, so I ask a lot of questions like "I’m curious, how do you expect this works?" and "I’m curious, why did you choose this?"

By the way, that phrase, "I’m curious," is super useful; it creates a consensual context for a question that has no "bad" answers, only ones with differing degrees of correctness, and sets the person answering it up for success. Learn to use it.

I spend a lot of time walking around the work "pod," answering questions. I do a lot of code reviews where I say things like "The next person to look at this, who might even be you, might have no idea why this is like this. Please add a comment explaining why you chose to do it this way" and "Could you please include the JIRA ticket in the pull request?"

This is the biggest lesson I’ve learned in my second major mentorship role: it’s not just okay to say "I don’t know," it’s necessary. I seem to know everything: from kernel modules to the basics of graphic design and CSS. I quote Captain Kirk a lot: "You have to learn why things work on a starship." (We will ignore the fact that the cryptographic key to every Federation Starship’s command console is a five-digit PIN with no repeating digits; although I would hope that was just the password to the real key and requires executive biomarkers just to type in!)

"I don’t know that yet." indicates vulnerability. I don’t know everything. They respect that. Better, they trust when I give them answers, because they trust that I do know what I’m talking about at those times. The "yet" is also a huge part of the lesson: "I don’t know, but I’m going to find out." And when I do, I’ll share it, document it, and keep it for other people.

So be fearless about admitting ignorance. If you’re a greybeard developer like me, your number one skill should be learning faster than anyone else. That’s it. You know how to know. You know how to learn. When someone on the team is lost, you know how to find the path, and how to show them where it is. That’s what they’re paying you for: the experience needed to know how to pull other people up to your level.

The big news

So, the big news: I’ve been hired! This is a huge relief. I was worried for months that, having taken a year off for schooling, I was going to be out-of-phase and out-of-date with respect to the industry and of no damn use to anybody. Being over fifty also means that I’ve got that “aged out of the industry” vibe all over me, and while that’s a real thing, I know plenty of greybeards in this industry who manage to work hard and keep up.

It turns out, there are plenty of places that need someone with my skills. And so I’ve landed at Big Fish Games, where my job is to take all the enterprise-level experience I’ve developed over the past twenty years and teach their QA Infrastructure team a few simple but not easy lessons:

  • There’s more than one way to do this
  • Think more broadly
  • Be more daring
  • Taste and judgement matter

A lot of that has involved showing rather than telling. I’ve been doing a lot of code pairing exercises where I end up asking questions like, “What does the system really care about?” and “Is nesting components a recommended solution in this case?” and “Is there a linter for this?” Every little step makes them better and faster and, frankly, it’s one hell of a lot of fun to blow the rust off my web-development skills. In the two weeks I’ve been there I’ve written code in Perl, Sed, Haskell, Javascript, Python, and Bash.

Project Status

Obviously, I’m not going to be as productive as I was during my time off. Even with the classes and family obligations, I managed to get a few things done in the nine months I was taking classes.

I’m still hacking on the PamSeam thing, so let me recap and maybe I can explain to myself how the AviShaTwo algorithm works.

A seam, in this context, is a wandering path of pixels either from top to bottom or left to right, in which each pixel is connected to one of the three pixels below it, and the total path is judged by some algorithm to be the least destructive to the visual quality of the image.

In the image here, we say that the surfer and the white foam are highly “energetic”; removing them would damage the image. So we try to find seams of calm sea and remove them first. The two images underneath represent the results from the paper, and my results from implementing AviShaOne.

AviShaOne

AviShaOne (Avidan & Shamir) is pretty easy to understand. For every pixel, we calculate the differences between its left and right neighbors and between its top and bottom neighbors in hue, saturation, and value, and come up with an “energy” for that pixel. (Pixels on the edge just use themselves in place of the missing neighbor.)
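In code, the per-pixel energy works out to something like this (a sketch, not the PamSeam code; it operates on a grayscale buffer just to keep the example short):

    // Backward energy: the magnitude of the horizontal and vertical
    // differences around a pixel. `lum` is a grayscale buffer here.
    fn energy(lum: &[Vec<f32>], x: usize, y: usize) -> f32 {
        let (w, h) = (lum[0].len(), lum.len());
        // Edge pixels use themselves in place of the missing neighbor.
        let left = lum[y][x.saturating_sub(1)];
        let right = lum[y][if x + 1 < w { x + 1 } else { x }];
        let up = lum[y.saturating_sub(1)][x];
        let down = lum[if y + 1 < h { y + 1 } else { y }][x];
        let (dx, dy) = (right - left, down - up);
        (dx * dx + dy * dy).sqrt()
    }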

The clever part is realizing that, after the first row, every pixel on a row must have been reached from one of the three pixels above it. So rather than calculating every possible path through the image, AviShaOne says, “For this pixel, which of the pixels above it contributes the least to the total energy of the seam this pixel belongs to? Record that total energy and which pixel above contributed it.” Because this pixel has to belong to a seam that includes one of those three. At the bottom we end up with an array of seams and total energies, pick the lowest energy, and then trace that seam’s progress back up to the top.

Now you have a column of row offsets that can be used to carve that seam out, reducing the image’s width by one.
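Here’s a minimal sketch of that dynamic-programming pass, assuming a precomputed map of per-pixel energies like the sketch above (the real code also has to deal with image types, channels, and so on):

    // For each pixel, record the cumulative energy of the cheapest seam
    // reaching it and which of the three parents above provided it, then
    // trace the cheapest bottom-row pixel back up to the top.
    fn find_vertical_seam(energy: &[Vec<f32>]) -> Vec<usize> {
        let (h, w) = (energy.len(), energy[0].len());
        let mut total = vec![energy[0].clone()];
        let mut parent = vec![vec![0usize; w]; h];

        for y in 1..h {
            let mut row = vec![0.0f32; w];
            for x in 0..w {
                // Candidate parents: x-1, x, x+1 on the previous row.
                let lo = x.saturating_sub(1);
                let hi = if x + 1 < w { x + 1 } else { x };
                let (best_x, best_e) = (lo..=hi)
                    .map(|px| (px, total[y - 1][px]))
                    .min_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
                    .unwrap();
                parent[y][x] = best_x;
                row[x] = energy[y][x] + best_e;
            }
            total.push(row);
        }

        // The cheapest total on the bottom row is the seam we want.
        let mut x = (0..w)
            .min_by(|&a, &b| total[h - 1][a].partial_cmp(&total[h - 1][b]).unwrap())
            .unwrap();
        let mut seam = vec![0usize; h];
        for y in (0..h).rev() {
            seam[y] = x;
            x = parent[y][x];
        }
        seam
    }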

Interestingly enough, the YLLK (Yoon, Lee, Lee & Kang) algorithm proposes modifying AviShaOne to not use all the energy of the pixel, but to bias the energy calculation along the perpendicular of the seam; that is, for a vertical seam, to make the horizontal differences more significant than the vertical ones. YLLK demonstrate that AviSha’s algorithm, using all the energy around the pixel as the metric of comparison, can cause serious horizontal distortion when removing vertical seams from an image with strong vertical components (such as a picket fence). Removing the vertical information biases the preservation of vertical image elements by giving the algorithm only the horizontal changes. I may have to experiment with this.

AviShaTwo

AviShaTwo is a little different. In AviShaOne, we calculate the seam that has the lowest energy, the lowest difference between its neighbors, before removing it. AviShaTwo asks us to consider, “after we remove it, will the new pixels pushed together be compatible with each other?” Since we’re doing this backwards, we look at the three pixels above and ask, “If we removed that and the current pixel, what would the resulting energy between the new neighbors be?” We then pick the parent that creates the least energy change, as that creates the seam that does the least damage. This solution is called the “forward energy” algorithm because it looks forward to the results, rather than backward from the expectations.
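A sketch of what the forward-energy cost looks like for the three candidate parents, as I understand the paper (grayscale again, and the names are mine):

    // Forward energy: the cost of choosing each parent is the energy that
    // would be created between the new neighbors after the seam is removed.
    // Assumes y >= 1; the first row is handled separately.
    fn forward_costs(lum: &[Vec<f32>], x: usize, y: usize) -> (f32, f32, f32) {
        let w = lum[0].len();
        let left = lum[y][x.saturating_sub(1)];
        let right = lum[y][if x + 1 < w { x + 1 } else { x }];
        let up = lum[y - 1][x];

        // Removing this pixel always pushes `left` and `right` together.
        let base = (right - left).abs();
        let from_upper_left = base + (up - left).abs();   // also joins up/left
        let from_above = base;                            // nothing extra
        let from_upper_right = base + (up - right).abs(); // also joins up/right
        (from_upper_left, from_above, from_upper_right)
    }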

The edge cases remain a serious pain in the neck to manage.

Challenges

Directionality

This is a strange problem. The basic interface for carving a seam is two functions: carveVerticalSeam :: Image -> Image and carveHorizontalSeam :: Image -> Image. Internally, these functions and their supporters look so damn close to one another that I can’t imagine why I need two of them, other than that I’m just not clever enough to come up with an algorithm that maps one directionality onto the other.
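The obvious dodge, which I keep toying with, is to implement only the vertical case and fake the horizontal one with a transpose. A sketch of what I mean (the function names are invented, and the two transposes cost memory bandwidth, which is part of why I haven’t committed to this):

    // Rotate the problem: a horizontal seam in `img` is a vertical seam in
    // the transposed image.
    fn transpose(img: &[Vec<f32>]) -> Vec<Vec<f32>> {
        let (h, w) = (img.len(), img[0].len());
        (0..w).map(|x| (0..h).map(|y| img[y][x]).collect()).collect()
    }

    fn remove_vertical_seam(img: &[Vec<f32>], seam: &[usize]) -> Vec<Vec<f32>> {
        img.iter()
            .zip(seam)
            .map(|(row, &x)| {
                let mut r = row.clone();
                r.remove(x); // drop the seam pixel from this row
                r
            })
            .collect()
    }

    fn remove_horizontal_seam(img: &[Vec<f32>], seam: &[usize]) -> Vec<Vec<f32>> {
        let t = transpose(img);
        transpose(&remove_vertical_seam(&t, seam))
    }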

Speed

The algorithm is slow. The energy map has to be recalculated every single time since the seam you removed overlapped with other seams, meaning the energy of a large part of the image has changed.

There are two possible solutions:

Caching

It might be possible to cache some of the results. Starting from the bottom of the removed seam, we could spider up every left and right seam in the derived seam map, creating a left-to-right or top-to-bottom range of seams that we know are in the damaged list. We could terminate a single scan early for those scans that we know cannot possibly reach beyond the current range; it’s entirely possible that the upper half of an image results in no scans at all. We then remove the seam from the energy and seam maps, move the remaining content upward or leftward, and record the range of seams that needs recalculation in case the client asks for another seam.

Whether this is actually a win or not remains a subject of investigation.

One big problem with caching, though, is about reducing an image in two dimensions simultaneously. We’d need to maintain two maps: one representing the vertical seams, and one representing the horizontal seams. Negating and mutating both of those after each carve might end up costing more in processing time and memory than it was worth.

Threading

Threading could speed up the algorithm by a linear amount in terms of how many CPU cores you happen to have lying around (or are willing to let the algorithm use). But threading has several problems.

The first is edge cases… again, literally. Let’s say we’re looking for the best vertical seam. We break the image up into columns, and then each thread gets one column. But each row has to be completed before the next row can be addressed, because each row is dependent upon the previous row for its potential parent energy values. That’s a lot of building up and tearing down threads. I wonder if I can build a thread pool that works on a row-by-row basis, and a queue generator that builds out “work a row” jobs in chunks.
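Something in this direction is what I have in mind: a sketch using rayon’s work-stealing pool (an assumption on my part, not what PamSeam does today), so I’m not building and tearing down threads for every row. The rows still go in order, but every pixel within a row can be computed independently from the previous row:

    use rayon::prelude::*;

    // Row-parallel cumulative-energy pass: rows are sequential (each depends
    // on the one above), but the pixels within a row are independent.
    fn cumulative_energy(energy: &[Vec<f32>]) -> Vec<Vec<f32>> {
        let w = energy[0].len();
        let mut rows: Vec<Vec<f32>> = vec![energy[0].clone()];
        for y in 1..energy.len() {
            let prev = rows[y - 1].clone();
            let row: Vec<f32> = (0..w)
                .into_par_iter()
                .map(|x| {
                    let lo = x.saturating_sub(1);
                    let hi = if x + 1 < w { x + 1 } else { x };
                    let best = (lo..=hi).map(|px| prev[px]).fold(f32::INFINITY, f32::min);
                    energy[y][x] + best
                })
                .collect();
            rows.push(row);
        }
        rows
    }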

There’s also a weird internal representation issue associated with directionality. Computer memory is only one-dimensional; a two-dimensional image is a big block of memory in which each row is streamed one after another. We can break a target array up into read/write chunks for row-based processing; we can’t do the same thing for column-based processing, as the in-memory representation of columns is “a stream of bytes from here, and then from over there, and then from over there.”

If I could solve the directionality part of the problem, then the working maps (the local energy map and the seam/parent map) could represent the columns as rows by rotating the working map 90 degrees. That still doesn’t obviate the need to cost the cache management operations.

Extensions

There are a number of alternative algorithms that may create better seams. The YLLK paper introduces more sensitive energy algorithms to preserve directional features, and there’s a paper with a different energy algorithm that supposedly creates better averaging seams for upscaling, although that one’s in Chinese (there’s a PowerPoint presentation in English with the bulk of the algorithm explained). There are machine-learning extensions that score seams to pass through faces in order to remove them. And there are lower-resolution but much faster (like, real-time, video-editing faster) algorithms that I’m probably not all that interested in.

Status: Floating Point Escapes Me, and Carving Seams

This week I’ve been involved in two different things: the first is giving up on the Buddhabrot generator.

There were two implementations. The first is the "Naive" implementation, which works okay, but is, well, naive. Ultimately, we want to render an image, and in the naive implementation I assumed (incorrectly) that what I wanted was to find a pixel, map it to a point on the complex plane, iterate that, and then map the result back to the image. The problem is that the pixel plane is too crude to give me all the really useful points. I ended up with a cloud that was recognizably the Buddha, but had significant asymmetries due to the imprecision of the map/map pair.
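For contrast, the shape I believe the Buddhabrot actually wants is to sample points on the complex plane directly, not pixel by pixel, and accumulate the escaping orbits into a histogram. A sketch of that approach, with made-up bounds and parameters and using the rand crate; this is not my current generator:

    use rand::Rng;

    // Sample random points c on the complex plane. Points that escape the
    // Mandelbrot set contribute every step of their orbit to a histogram;
    // the normalized histogram is the Buddhabrot image.
    fn buddhabrot(width: usize, height: usize, samples: u64, max_iter: u32) -> Vec<u32> {
        let mut hist = vec![0u32; width * height];
        let mut rng = rand::thread_rng();

        for _ in 0..samples {
            let cr = rng.gen::<f64>() * 3.0 - 2.0; // real part in [-2, 1)
            let ci = rng.gen::<f64>() * 3.0 - 1.5; // imaginary part in [-1.5, 1.5)
            let (mut zr, mut zi) = (0.0f64, 0.0f64);
            let mut orbit = Vec::new();
            let mut escaped = false;

            for _ in 0..max_iter {
                let (nzr, nzi) = (zr * zr - zi * zi + cr, 2.0 * zr * zi + ci);
                zr = nzr;
                zi = nzi;
                orbit.push((zr, zi));
                if zr * zr + zi * zi > 4.0 {
                    escaped = true;
                    break;
                }
            }

            // Points inside the set never escape and contribute nothing.
            if escaped {
                for (r, i) in orbit {
                    let x = ((r + 2.0) / 3.0 * width as f64) as isize;
                    let y = ((i + 1.5) / 3.0 * height as f64) as isize;
                    if x >= 0 && y >= 0 && (x as usize) < width && (y as usize) < height {
                        hist[y as usize * width + x as usize] += 1;
                    }
                }
            }
        }
        hist
    }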

The second was named the "Cupe" implementation, after Johann "Cupe" Korndoerfer’s lisp implementation. This one was terrible, and I think I’ve reached the point where I’ve stared at it so long the details are starting to blur together in my head, and it’s time to walk away from it for a while and let it rest, then come back with a fresh attempt at an implementation, perhaps some mash-up of the Naive and the Cupe.

One interesting detail of the Cupe version is that he asserts, and I’m not currently math-savvy enough to know why, that the only interesting "starting points" for Buddha-related orbits are those within a very small distance of the border of the Mandelbrot set, those pixels that live right on the edge of the black zone. I was able to generate a gorgeous image of the border (no, really, it was my best work), but could not turn that information into a working Buddhabrot implementation.

The good news is that for the last two days I’ve been knocking together a basic seam-carving algorithm based on Rust’s image library. I’m trying to keep the algorithm separate from the image processing part of the library, as my hope is to eventually port this to C (yes, sucker: C) and then submit it to NetPBM as an alternative resizing feature. I’ve got the energy processor and seam recognizer working; now I just need to proof-of-concept an image-to-image resizer.


The other thing that happened this week is that I went to the used bookstore closest to the Microsoft campus. Someone there had just unloaded all of their database development textbooks, and for some reason I’ve been obsessed with them. The ugliest part appears to be the transaction manager, and I’ve been kinda wondering something.

I believe, perhaps irrationally, that at some point every program should be written as a library. I write most of my Rust programs that way, as a library with a ‘src/bin’ for the final executable, which is often little more than an implementation of a command line parser and an invocation to one of the library’s main entrance points.

I also believe that many things which are command-line programs ought to be client/server. That is, that the library/executable pattern in which I write my binaries is just a strong variant of a basic client/server model. For example, the Unix commands ‘find’, ‘mlocate’, and ‘gdmap’ are all the same routine under the covers, a file-system tree-walker algorithm ruled by a collection of predicates. So I have basic questions, like "Why is there no library API for mlocate?" and "Why do you have to leave GDMap up and running to get the map?"

So now I intend to try to answer some of those questions. I don’t know if ‘find’, ‘mlocate’, and ‘gdmap’ need to be client-server, but it seems to me that a file-system tree-walking routine into which you inject the expected outcome, or perhaps from which you extract the expected outcome as an iterator, would be an ideal separation of concerns; then accessing mlocate could be a library call, and gdmap wouldn’t need to be running in order to generate the map itself.
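Roughly what I’m imagining is this: the tree-walker is a library routine that takes the predicate as an argument, and ‘find’, mlocate’s indexer, and gdmap become thin callers of it. A sketch with invented names, not any of those tools’ real APIs:

    use std::fs;
    use std::path::{Path, PathBuf};

    // Sketch: a file-system tree-walker ruled by an injected predicate.
    // Matches come back in a plain Vec here; an iterator (or a channel,
    // for the client/server variant) would be the nicer interface.
    fn walk<F>(root: &Path, keep: &F, out: &mut Vec<PathBuf>) -> std::io::Result<()>
    where
        F: Fn(&Path) -> bool,
    {
        for entry in fs::read_dir(root)? {
            let path = entry?.path();
            if keep(&path) {
                out.push(path.clone());
            }
            if path.is_dir() {
                walk(&path, keep, out)?;
            }
        }
        Ok(())
    }

    fn main() -> std::io::Result<()> {
        // A `find`-like caller: inject the predicate, print the results.
        let mut hits = Vec::new();
        walk(
            Path::new("."),
            &|p: &Path| p.extension().map_or(false, |e| e == "rs"),
            &mut hits,
        )?;
        for h in hits {
            println!("{}", h.display());
        }
        Ok(())
    }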

I’ve been looking deep into the storage manager for Postgres, and as I’m reading through it, and looking at various announcements of features added and bugs fixed throughout the Postgres ecosystem, I’m seeing two things: there is a lot of Postgres code that basically exists because C cannot make the same promises as Rust, and many bug fixes involve touching many different parts of the code. I’m wondering how much smaller some parts of Postgres could be made using the native features of Rust and the better-built libraries around it, and how different its internals would be if, somehow, the relationship between all the different parts could be teased apart.

Like my belief in the client/server-ness (or library/CLI-ness) of the programming universe, I’ve long suspected that a lot of programming comes down to just a few moving parts. Go, for example, is a runtime with a fast garbage collector and a smart concurrency model, on top of which is built a syntactically ugly language. There’s no reason it couldn’t be a library and we’d build other things on it, as if it were a VM. A traditional database is a query language with a fast optimizer built on top of a smart manager for moving data from warm storage (disk & spindles). A spreadsheet is a functional, reactive programming language with a funky display mechanism.

Anyway, I’m obsessing about transaction managers and storage engines right now. Not sure how that’ll pan out.

At the last interview I did, I was not given a take-home test. Instead, I was sat down with someone to do a pair-programming exercise where the objective was, "Given this collection of unit tests and this mess of a class, refactor it into something more maintainable." The class was in Python.

I quickly identified what the class did. It represented the relationship between a user and their role-based access, and the capabilities of that access, to an unidentified system. Refactoring consisted of identifying every method that had a real-world impact according to the unit tests, throwing everything else away, and then abstracting out the relationship between a role and its capabilities.

Funny thing about the roles: they were static. So I moved the role and capabilities into a table at the top, commented it to say "If you want to add a new role or capability, here is where you do it," and wrote a single function outside the class to represent the role/capability lookup.
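The shape of it, rendered here in Rust because that’s the language of the rest of these sketches (the exercise itself was in Python, and the roles and capabilities below are invented):

    // If you want to add a new role or capability, here is where you do it.
    const ROLE_CAPABILITIES: &[(&str, &[&str])] = &[
        ("admin", &["read", "write", "delete", "grant"]),
        ("editor", &["read", "write"]),
        ("viewer", &["read"]),
    ];

    // Not a class, not a framework: just a function over a static table.
    fn has_capability(role: &str, capability: &str) -> bool {
        ROLE_CAPABILITIES
            .iter()
            .any(|(r, caps)| *r == role && caps.contains(&capability))
    }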

The last interview of the cycle was a code review. They were mostly shocked that I had written a function instead of a class to represent the role/capability relationship. I pointed out that the function could be scoped at the module level, that in Python modules were themselves objects, and that this was a simple, static, binary relationship: role:[capability]. That was it.

John Carmack said it best:

Sometimes, the elegant implementation is just a function. Not a method. Not a class. Not a framework. Just a function.

Indeed it was. A class definition and so forth might have satisfied some arbitrary sense of encapsulation, but it was also clutter. Until and unless they needed a more dynamic system for describing their RBAC, mine was the clearest variant possible. And they agreed.

One asked, "What would you have done differently?"

"If I’d had my own environment, I wouldn’t have been struggling with an unfamiliar editor and a QWERTY keyboard. I could have gone a lot faster then." The lack of a Dvorak option was a serious struggle. And while that may sound facetious, it does point to something more important: how would this interview cycle have handled someone with a disability?

It occurred to me the other day that there’s more than one thing professional programmers can learn from kinky people: we can learn how to ask things of our peers.

The first thing the kink community taught software developers (and other groups) is the code of conduct. Kinky people have been dealing with offensive, intrusive, and disruptive conduct within our public spaces pretty much since the founding of the modern community in the late 1980s. We know how to manage that stuff. The very first codes of conduct for software conferences were written by a kinky person cribbing off an S&M dungeon’s rulesheet, and many of the codes of conduct for all manner of conferences and conventions since have descended from her work.

The other thing kink can teach software developers, the thing they can use every day, is the Consent Communication Pattern. The consent communication pattern is how kinky people ask for specific things. It’s extremely useful; you might even call it “manipulative,” in that you’re much more likely to get what you want if you use it versus the more common ways of approaching someone and making a request.

The consent communication pattern is so useful that my children’s school district teaches it to seventh graders in their sex education curriculum. It’s ridiculously simple, and has nothing to do with sex. Ready?

Concrete context. Concrete request.

That’s it. In sex ed classes, the pattern is used to help people negotiate relationships and specific activities that happen when dating. The examples from my school district’s workbook read:

  • “I really like when we X. Can we do more of that?”
  • “I really don’t like when you X. Could you not do it?”
  • “I really don’t like when you X. Could we do Y instead?”

The working examples are innocuous, things like “Hold hands in public,” or “Call me by that nickname,” or whatever. But you can see where the textbook really wants to go.

The listener to this request is actually free to negotiate. In the latter two examples, the person making the request is setting up a boundary, a limit on what he or she will accept. But in all cases the listener can make specific counter-proposals in order to get his or her desires met.

In my time as an engineer, I’ve been the recipient, and sometimes the deliverer, of an awful lot of bullshit passive-aggressive “asks.” Questions to the team of “What are we doing to hit the deadline?” are in this category; there’s an implicit aggressive “We’re not doing enough, but it’s not my fault we’re not doing enough” in the question. “We don’t know enough to move forward” is in the passive mode, in that the person saying it is implicitly asking the others for help, but hasn’t come out and said so.

In English, we learn about subject-verb-object relationships in sentences. But conversations don’t have subjects; they have topics, and often subtopics. The consent communication pattern establishes a topic, and then makes specific requests of people for information or action about that topic. It works with the human mind’s fast-and-slow thinking patterns, giving the listener time to accept the topic, switch their mind to deal with it, and then drives their ability to respond thoughtfully and coherently.

The user story template is an example of a bullshit ask. “As a [role], I want to [do something] so that [reason/benefit].” It establishes a context, but it makes no concrete request. I know, we’re supposed to assume that the request is, “Can you do that?” But it’s still a bullshit, passive-aggressive request.

The next time you’re at a planning meeting, re-write them in a hyper-concrete form: “The customer needs to be able to do X. Do we know enough about X to talk about a solution?” And then, “I know estimates are only estimates, and these may be harder than they look, but can you give me initial estimates for how long each task will take?” Notice how the pattern sets a concrete topic and then makes a specific request.

One important aspect of the consent communication pattern is its concreteness. If you can’t establish a specific context and craft a request that’s attainable and relevant to the context, you don’t actually have a request: you have a vague feeling that something is wrong, and you want the other person to help you figure out what that something might be. And that’s fine! But you should back up and say it that way. “I’ve been reading over X, and I have a feeling that something is wrong with it. Can you help me understand it better?” Note that the request is concrete, attainable, relevant, and open-ended. While it does have a “yes” or “no,” it’s worded in a way that leads to further conversation. If the request were worded, “Do you feel the same way?” someone already pressured by time and responsibility might not give you the conversation you’re looking for.

Programmers are a notorious bunch for having ADHD and other neurodivergencies. The consent communication pattern works with the programmer’s brain. A request without a topic just… is. It exists without any surrounding context. People, especially those for whom context is difficult, are much more willing to jump in and respond to requests when their brains aren’t being racked with the effort of trying to make sense of the context.

Hanna Thomas recently wrote:

Agile is described as a mindset. But let’s call it what it is and skip the middleman. Successful organisations aren’t ones that adopt an ‘Agile mindset’. Successful organisations are ones that adopt a feminist, queer, anti-establishment, progressive mindset — one that is flexible, experimental, pushes boundaries, self-organises, and acts in service of community.

I’d add “kinky” to that list. Kink has a very strong ethos of respecting other people, both their wants and needs and their boundaries and limitations. Kink has a strong, evidence-based and experience-driven system for creating safety, enabling personal growth, and asking for potentially personally embarrassing or emotionally vulnerable moments in public spaces where deeply intimate and possibly dangerous things are happening. If we can do it where intimate or physically risky things are going on, then software developers can learn to do it in a professional space that (I hope) is not quite so intimate, vulnerable, or physically risky.


Status: Fixing The Buddha


I was cleaning up my Github repositories and trying to remove some things that aren’t relevant or are somewhat embarrassing. One of them is the "programming_rust" repository, which I made while I was going through the book but no longer need. (Technically, I didn’t need one at all, but for the fact that I have a laptop and a desktop and like to use both at different times.) I do want to keep the Buddhabrot generator, but I also want to upgrade it a bit to Rust 1.37.

So imagine my horror when I discovered that it throws all kinds of overflow errors. I think there’s some weakness in my thinking here, something I did incorrectly, and that’s why I’m getting these exceptions and backtraces.

I started to decompose the program with better organization and separation of concerns. That’s all of my status right now, as my day was derailed by good news: I got a job! I can’t reveal too many details yet, but I’ll let you all know on September 9th what the work entails. I spent most of yesterday running around getting ready for the job, upgrading the office-based toolset that I use to commute and stay productive. I’m now drooling over a lovely electric bicycle that, unfortunately, will take two weeks to ship here. I spent my afternoon shopping for a power supply for my laptop and telling all the helpful recruiters thanks but I’m working.

Any evening work was derailed by a family member getting food poisoning bad enough that a run to the emergency room was merited. While they were sedated I managed to run through a couple iterations of the Buddhabrot command line handler, but wasn’t happy with any of them. Try again tomorrow.

I’ve been reading Gerald Weinberg’s "The Psychology of Computer Programming," written in 1971 (!), which is considered something of a classic. It’s written as a textbook and meant to be used in a higher-level programming course for both programmers and their managers.

Chapter one has some interesting passages. First, there are the potential HR violations. At a previous employer, the in-house code of conduct included this gem:

Examples of unacceptable behavior include:

  • The use of sexualized language or imagery and unwelcome sexual attention or advances,

Weinberg is big on learning by example. Even in 1971, he wants you to read other people’s code to get used to it. But his examples are laden with randy imagery: "Late at night, when the grizzled old-timer is curled up in bed with a sexy subroutine or a mystifying macro, the young blade is busily engaged in a dialogue with his terminal." In a passage about learning how to read other people’s code, he writes,

[Programs] are unlike novels, and the best way to read them is not always from beginning to end. They are not even like mysteries, where we can turn to the penultimate page for the best part — or like sexy books, which we can let fall open to the most creased pages in order to find the warmest passages.

Now, in 1971, Weinberg was definitely having a bit of a wink with his audience. The "professionalization" (read: the ejection of women) from the white-collar job of programming was more or less complete, and if there was the odd woman in the room (in many senses of the word "odd," as far as Weinberg’s cohort was concerned), well, she’d just have to suck it up, honey. Guys are like that.

And yet, I’d like to push back on the idea that no degree of carnal language is acceptable in the software development space. I mean, in neither case does Weinberg discuss the gender of the participants; there’s no unwelcome attention here. Besides, we talk about "slaying bugs" and "crushing the spec" and "burning down the backlog," and I find that kind of violent language crude and offensive, but it’s perfectly acceptable.

Secondly, Weinberg is hip to "slow learning." He bemoans the age of "the terminal," and that people don’t learn to read programs anymore, they just write them. He cites "a young novelist who claimed he never read as his work was too original to find inspiration from other books. His career misproves his claim." He talks a bit about how COBOL was meant to tell executives they could not only read code, but that they might someday write it, eliminating expensive and quirky programmers altogether.

I can’t help but wonder what Weinberg thinks of the modern IDE with all its bells and whistles. I find myself liking LSP-enabled editors, and writing in Rust or Python is much better now that we have these tools. Instant feedback that the code is syntactically incorrect or has type errors or, in the best case, that there might be a cleaner way to write something, is awesome.

Weinberg would probably have mixed feelings about GitHub. He absolutely wants you to read other people’s code, but sometimes you do so not out of an interest in learning, but as an exercise in mockery. The Daily WTF lives strictly to do the latter. But there is a lot of high-quality code on there, which brings me to the questions at the end of the book:

  • When was the last time you read someone else’s code? What took you so long?

The last time I read someone else’s code was two days ago. I was reading the internals of the Rust image library in order to better understand how those developers thought about image processing, so that the code I was writing could fit into their development process without too much difficulty. I didn’t read any yesterday; I had runaround appointments and other priorities, which in the evening were hijacked by my partner getting food poisoning.

  • When was the last time you read code you, yourself, wrote that was more than six months old?

Yesterday. While "upgrading" an old project, I discovered that the latest revision of the compiler throws a ton of maths warnings and, you know, maybe I should investigate those. What I learned was that, six months later, better decompositions than the ones I first used are now available to my mind, and I should think about a refactor that will save me grief and pain.

In all, my summary of chapter one is this: Time has moved on. Programming languages have gotten more ergonomic. When Weinberg was writing, GOTO was still a perfectly acceptable way of managing a subroutine. LSP and IDE assist in maintaining an ambient awareness of the correctness of code. Repositories like GitHub provide an enormous library of code, some of which is highly readable, and it’s always good to point out when you find some code that makes great sense. The discipline of test-driven development means that, if followed well, you end up with highly readable code because Step Three is always "clean up the mess you just made" and make it fit the "Context, Container, Component, Code" model.

I’ll probably enjoy the other chapters just as much. There’s just something fascinating about delving into a book 48 years out of date about the work I do every day and seeing what’s still relevant, and what has changed.
