I’m going to use the word “abstraction” in two different ways, but bear with me for a moment and consider this:

  1. Every startup is based on an insightful abstraction of a complex idea.
  2. Every software abstraction is leaky in some critical way.

Uber is an abstraction of taxis. How do we abstract “what a taxi is” using the higher-level technologies we have now? Amazon is an abstraction of mail-order sales, which have been around since the Sears catalog in the 1890s. Google is an abstraction of card catalogs and yellow pages. And so forth.

In programming, an abstraction is a higher-level description of a process or mechanism that is designed to hide some level of complexity behind a simple set of controls. The menu on a word processor hides the complexity of the software behind it. The API we use to “log in with Facebook” or “log in with Google” hides the complexity of securing your authorization across multiple websites and applications while revealing your identity to advertisers.

We have a saying in software: “All non-trivial abstractions are leaky.” What this means is that all important abstractions can’t really hide what they do; if an API or programming library does something significant, then the programmer who uses it has to know at least some of the underlying details in order to use it well.

Many years ago when I worked for CompuServe, my boss used to joke that I had a superpower: I could look at almost any system and, with a minimum amount of exposure, explain in exceptional detail the underlying data structures and basic algorithms in use. In abstraction terms, I had X-ray vision for leaks; I could see where the pieces fit together.

Like a lot of people nowadays, I’m besotted with Roam, a note-taking application. I have an Emacs Org-mode folder with, literally, thousands of notes, and I’ve been tinkering with my own note-management app for, like, ever. I’ve downloaded a couple of “Roam clones” from Github, and none of them really worked for me.

And then I watched the How to Use Roam videos on YouTube, and I spotted the leaky abstraction. I’m gonna show it to you right now. Here:

[Screenshot: the “how to embed a note in another note” demo, with a note ID visible in the upper right-hand corner.]

Do you see it? This is a screenshot from the “how to embed a note in another note” section, and the leak is in the upper right-hand corner. It’s obvious once you see it, but it takes a second or two to recognize what you’re looking at. It’s that symbol, Qm2N2Uapn. That’s the leak: every note is its own entry in the database.

Once I saw it, the whole algorithm came apart in my head almost instantly. Most clones think in Evernote’s terms: “Pages have notes.” But no, that’s not the abstraction at all. The abstraction is simpler than that: “Notes exist, have unique identities, and may have content which includes links to other notes. Notes may have child notes. A page is a note with a title that can be linked to.” That’s the whole abstraction on the server.

Clients impose semantic meaning on pages by giving them “page” views, and queries to the API will almost always start with “a page,” but that’s almost irrelevant to the core functionality of “big notes have small notes upon their backs to bite ’em, and small notes have even smaller notes, and so on ad infinitum.”

Implementation is a slightly harder beast, but it’s really no more than getting the CRUD (Create, Retrieve, Update, and Delete) right on top of a few simple relationships: Pages, Notes, and Note->Note relationships. Notes have types, you need a very good autosuggest toolkit for the Page table, and you’ll need a custom editor for your front-end, but that’s a different engineering challenge from getting the back-end correct.
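To make that concrete, here’s a minimal sketch of what that back end could look like, assuming Postgres; the table and column names here are my guesses, not Roam’s:

    -- Hypothetical sketch of the core tables, not Roam's actual schema.
    CREATE TABLE notes (
        id      TEXT PRIMARY KEY,       -- e.g. "Qm2N2Uapn": every note has an identity
        content TEXT NOT NULL DEFAULT '',
        title   TEXT UNIQUE             -- a non-NULL title is what makes a note a "page"
    );

    CREATE TABLE note_children (
        parent_id TEXT NOT NULL REFERENCES notes (id),
        child_id  TEXT NOT NULL REFERENCES notes (id),
        position  INTEGER NOT NULL,     -- ordering of children under a parent
        PRIMARY KEY (parent_id, child_id)
    );

Links inside content can be plain note IDs; the client resolves them into titles at render time.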

In fact, it occurs to me that this is a fantastic place to use RDFa as a way to make clear what operation you’re performing. Hmm…

24Jun

Is Agile compatible with Clean Code?

Posted by Elf Sternberg as Design, programming

I’ve been through Agile training several times before at different jobs, and the current job is no different. June is Agile Training Month, and since I started last September I’m obliged to go through this again. Previously, we had a Clean Code Training period that lasted two months, and I realized today why I’m having such a hard time with the Clean Code part of the training.

The Agile process says that the product should have value to the customer at the end of the first month, and that the value to the customer grows as the development team puts more intellectual work into building out its functionality.

In Agile Training, there’s a popular metaphor of “building a personal transportation system.” The story goes that in the days before Agile (and before Clean) a team would figure out what a fully realized personal transportation system was, then build that final product: an automobile. Once the automobile was designed, they’d build a frame in the first week, add tires and a steering wheel in the second, then the motor, then the cabin, and so on, until at the end they delivered a fully functional automobile.

Agile points out that in all that time it’s taken you to build the automobile, the customer hasn’t been able to go anywhere. The customer gets no value. So Agile starts by building a skateboard in the first week, then refining it with handlebars to become a scooter, then growing it into a golf cart, and eventually into a fully realized automobile. At every step of the way, the customer can go somewhere, if at first awkwardly, slowly, and without protection from the elements.

Agile further argues that because the team built the car iteratively, with forethought about future recombination, it has built a componentized system in which parts can be swapped in and out without too much pain. Agile advertises that if your customer changes platforms, databases, operating systems, or display mechanisms, you should be able to adapt to those changes quickly because of the componentized nature of your system.

Clean, in contrast, focuses on a single component of the system: the programming language. And not just any programming language: Clean focuses on noun-oriented programming. Everything else is somewhat secondary.

Both systems claim that their way is best, and that if you adopt one your code will be flexible and adaptable to the realities of the modern world. Agile encompasses everything, but Clean only looks at the program written in the programming language, and more importantly, Clean says that that programming language cannot be changed, not without great pain to the developers and the customers, which is why everything else must be mocked.

Most data is related to other data, which is why the relational data model is the most popular model, and the databases that support it, such as Postgres and MySQL, are among the most battle-tested pieces of software in the world.

A mocked backing store can’t deliver customer value.

Delivering the prototype with a hand-written relational data engine is professional malpractice akin to delivering a prototype in a programming language the team wrote in-house, in assembly language, for a CPU nobody has used in twenty years, without any documentation.

In the end, there are only two units of “value” in the system: the abstract, future-proof value of the UML that describes the system in a complete end-to-end format, with accurate descriptions of the protocols individual units of the system rely on, and the real-world value of a product the customer is using. If your choice of language constrains your ability to deliver value, use another language.

19May

Review: Clean Code, by Robert Martin

Posted by Elf Sternberg as programming

It might seem like I’ve been harsh on Robert Martin’s Clean Code for the past couple of posts, and that’s fair: I have been. But it’s a good book, full of strong advice on any number of topics.

It’s just that it feels old. Programming is a young discipline in the world, probably one of the youngest, and one of the most consequential. It changes with absurd speed, and everyone in it struggles to keep up. Clean Code came out in 2008, and already there are dusty corners within it that feel out of date, even irresponsible.

So here’s what’s really great about the book:

The first chapter is, as necessary, an introduction. Martin introduces the idea of low-level programming discipline as a necessary precursor to writing large systems, and recommends the Boy Scout Rule: Always leave code better than when you found it. “Better” is a judgment call, and not everyone has good judgment, but it can be taught, and Martin has guidelines as to what “better” should be in the long run. And 90% of the time, I think he’s right.

Naming Things

The second chapter is on naming things. Naming things is one of the hardest topics in software development, and there have been whole books just about how to pick a good name. Good names are about good taste, and while that can be taught it’s one of those things the learner has to want to learn. It has to be conscious.

Unfortunately, Martin’s book is wrapped deep into the Java / C# paradigm of object oriented programming, and his advice comes from that world, and only that world. This means that his section on naming methods is based, as he admits, on “the javabeans standard.” A better lesson is from Smalltalk: a method is a message to the object, which should be treated like a black box; what messages can you send it, what responses do you expect to receive back, and what state do you expect to find the system in after this request/response cycle is complete?

Functions

The chapter on functions made me queasy. He’s right that functions should be small (no, smaller than that), and that you should trust a good compiler to inline when it’s wise to do so.

His argument against switch statements makes sense until you see his response: prefer polymorphism over switching based on types. Given how much pain we’ve discovered in the years since due to polymorphism and inheritance in general, I’d rip both polymorphism and switch out and replace that sort of programming with a lookup table: cheap, easy, direct, comprehensible.
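Here’s a hypothetical sketch of what I mean, in Python, using a payroll-style dispatch (the names are mine, not Martin’s):

    # A lookup table instead of a switch statement or a class hierarchy.
    def pay_salaried(employee):
        return employee.monthly_salary

    def pay_hourly(employee):
        return employee.hours_worked * employee.hourly_rate

    PAY_BY_TYPE = {
        "salaried": pay_salaried,  # adding a type is one new row, not a new subclass
        "hourly": pay_hourly,
    }

    def calculate_pay(employee):
        return PAY_BY_TYPE[employee.type](employee)

The whole dispatch is data: you can read it, test it, and extend it without touching a type hierarchy.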

The Anti-If Patterns have more to teach about this topic, and I highly recommend them.

One thing that really made me mad, though, was the sentence “The ideal number of arguments for a function is zero.” No. Gods, no, this is horrible. A function of zero arguments is a mystery. You don’t know where the data is coming from. In the functional languages, functions of zero arguments are considered downright evil, and their type is often “TheWorld,” because that’s where the data they return comes from: the world outside your program. getLine() is a function of zero arguments, and it has to be wrapped in layers and layers of paranoia because you have no idea what your users are going to send you. Functions of zero arguments are where your security holes live. Whenever you write a function of zero arguments you should feel a cold chill run up your spine. (This is why I think Haskell is a valuable language to learn.)

(Methods of zero arguments… aren’t. They have an argument: the object with which they’re associated. .getWidth() is a message to the object to return a width value it contains (whether by storage or calculation), and .clear() can be a message to a collection to drop all of its contents. This is why I believe Smalltalk is a valuable language to learn.)
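To put the function half of that in code, a tiny hypothetical Python sketch:

    # Hypothetical sketch: the same calculation with and without arguments.
    _cart = {"subtotal": 100.0}  # hidden module-level state
    TAX_RATE = 0.08

    def total_due():
        # Zero arguments: the reader can't tell where the data comes from.
        # The answer is "the world outside the function."
        return _cart["subtotal"] * (1 + TAX_RATE)

    def total_for(subtotal: float, rate: float) -> float:
        # Explicit arguments: the dependencies are visible, and the function
        # is trivially testable.
        return subtotal * (1 + rate)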

The section “a function should have no side effects” confused me until I realized that his definition of side effects was different from mine. A functional programmer knows that a side effect is anything that changes the outside world and has unpredictable consequences; Martin’s definition is that of a function that does two things, one of which isn’t clear from the function’s signature.

I’ve already discussed how DRY can sometimes lead to problematic obfuscation. DRY is great advice, but again, this is an issue of good sense and good taste, and an example of “debugging takes more smarts than developing; if you were at your limit writing the code, you don’t have the smarts to debug it.” The problem is that hacking a VM I’ve been using for 20 years isn’t the limit of my skills, but it’s already beyond the limits of the people who will maintain it. Downgrade your cleverness accordingly.

Comments

The comments section is just good advice. All of it. Comments are often clutter. Use them judiciously.

Formatting

Again, this is solid advice. Modern languages (Python, Rust, Go) have default formats, and good developers stick with them. It is a little funny that Martin has a long not-quite-sneer at Knuth for Literate Programming, and then tries hard to convince you that the layout of a source code file should “read like a book.”

Objects and Data

This chapter was pretty good, but again, as someone who has learned a lot from Haskell and Rust, I feel like it’s an old chapter, well past its due date. Trait-based systems and pure functional languages have superseded much of this advice, and I genuinely feel sorry for anyone who has to work in a language like Java or C#.

Exception Handling

Again, this is one of those chapters that’s wedded to a single language paradigm, the one the big boys (Sun, Oracle, Microsoft) all tried to force upon us in the late ’90s and through the ’00s. We’ve seen through it. There are better worlds. Golang has adopted the “handle errors locally, or just die and let Kubernetes fix it for you” approach; Rust and F# and Haskell all have railway-oriented programming and scoped errors that make you think in onion architectures about what your system is doing.

Boundaries

The chapter on boundaries is pretty good. (Except… Jesus, Bob, what is it with the sexist illustrations? This stuff wasn’t acceptable when Weinberg wrote similar crap in 1974.) Keeping separate functionality separate is a cornerstone of all good design, which may explain why this chapter is short.

Testing

This is one of those chapters where the book really shines. I’m a huge fan of Test-Driven Development: I was taught it back in the early ’90s by my Systems Design mentor, and I’ve been a fan of it ever since. On my last project, in which I wrote the tests first, it really did work well and I finished quickly. I even experimented with Martin’s Transformation Priority Premise, and it worked pretty well! Martin’s rules here are sound: tests must be in the required format and follow the required rules. Not every gist I’ve ever published has been wrapped in a TDD shell (they’re gists, after all), but I do prefer having tests to not.

The one thing I disagree with here is “Fast.” Tests should be no faster than the system requires; anything else gives you a false sense of reassurance.

Classes

The chapter on Classes is more or less a repeat of the Functions and Objects chapters. It’s okay, but it could have been distributed with some responsibility. This chapter is WET.

Three chapters: Systems, Emergence, and JUnit Internals.

These three chapters are, frankly, not very informative. The chapter on Systems is about designing in the large, but it tries too hard to convince you that aspect-oriented programming is a responsible development model, and that cross-cutting concerns can be dealt with in this manner without obfuscating the intent of the code.

The chapter on Emergence is basically “big systems behave in weird ways,” which is not surprising to anyone who’s ever worked on a large system. Much of this chapter has been superseded by the development of microservices, the discipline of system observability, and the understanding that very large systems live in a state of constant degradation; the only question is, how do you handle it, compensate for it, and structure your code to recover from it?

The JUnit Internals is… well, it is what it is.

Concurrency

This is a straightforward guide to handling multi-threaded environments, but again, its age shows through. So many pixels have been spent on this topic, and yet the biggest problem is that Martin (well, the contributor here, Brett Schuchert) does a terrible job of explaining how “clean code” interacts with concurrency.

If you want examples of how to do concurrency well, F#, Rust, Go, and Haskell all have powerful abstractions that help users develop concurrent code without pain. We know what the real pain is in concurrent development: multiple mutators having access to a single resource, and the associated locking nightmares that come with trying to control that access. Go and Rust have handled it in two different ways: Go by having a message-passing architecture, and Rust by having a model that forbids multiple references to a mutable resource by default, enabling them only through a permissive model that enforces locking. The nursery model from Python’s Trio library is another good way to make concurrency work.
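A minimal, hypothetical Rust sketch of that claim: you can’t share a mutable counter across threads until you opt in with Arc (shared ownership) plus Mutex (enforced locking):

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // A bare `&mut u32` shared across threads won't compile; Arc + Mutex
        // is the explicit, lock-enforcing opt-in.
        let counter = Arc::new(Mutex::new(0u32));
        let mut handles = Vec::new();
        for _ in 0..4 {
            let counter = Arc::clone(&counter);
            handles.push(thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            }));
        }
        for handle in handles {
            handle.join().unwrap();
        }
        println!("total: {}", *counter.lock().unwrap());
    }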

Code Smells

The “code smells” in Bob’s document are a mix of good advice, language-specific advice, and outdated advice, and the mix is just generally annoying.

His section on comments is fine and expected. You don’t want comments that lie about code, get outdated, repeat the code, or are just badly written.

The section on environments is, well… I agree with him, a lot. If your build tool can’t handle everything in one step on the first go, it’s not really a build tool (I’m looking at you, npm and yarn). If you’ve downloaded a package written in C# but don’t have dotnet installed, that’s not the package’s fault. But once it’s installed, there should be one obvious command you issue to build the thing, and that command should not require that you have an IDE up and running. If the command is esoteric, wrap it in a Makefile. The user shouldn’t have to know the full command to build a simple, standalone C# command-line program for production, because that command is:

dotnet publish -c release --self-contained --runtime linux-x64 /p:PublishSingleFile=true /p:PublishTrimmed=true
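One hypothetical way to wrap that, so the user only ever types make release:

    # Makefile (sketch); note that the recipe line must begin with a tab.
    release:
    	dotnet publish -c release --self-contained --runtime linux-x64 \
    	  /p:PublishSingleFile=true /p:PublishTrimmed=true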

The general guidelines are generally good: a program that doesn’t do the obvious thing implied by its name and placement by default is a bad program. Boundary tests are essential. Code safety is important.

Martin also says, “You shouldn’t have multiple languages in one source file.” Well, define multiple languages: do you have SQL embedded in your source code? LINQ? How about a few regular expressions? Heck, every C# file is four different languages in one file: the preprocessor, the templating language, the C-based semantics at the bottom and, if you really need it, a little in-line assembly language– and this doesn’t even scratch issues with programming peripherals like GPUs, something a systems programming language is supposed to do well. Every compiler in the world violates Martin’s “no multiple languages” rule, intermingling lexing, parsing, and semantic analysis. At the tip of the spear, JSX is popular, intermingling Javascript, CSS and HTML into coherent units of deployable objects that can be assembled like tinkertoys into fairly reasonable UIs for interactions that aren’t terribly performance-bound.

We’ve decided that “multiple languages in a source file” is okay.

There are some good guidelines in there: I especially liked the one on surfacing temporal couplings.

Conclusion

Clean Code is a book of advice, not gospel. The people who treat it as gospel miss the point, and often end up applying the rules either arbitrarily or with too much determination. The book itself is a prime example of inappropriate levels of abstraction, with some chapters sprawling all over the discipline, trying to do too much. Strong advice about naming, function design, system design, and source repository organization is intermingled with extremely language-specific advice about writing in Java; some of it is delivered as if it were a rule of law that applies everywhere, even as the programming language community has, in the years since publication, moved on and wrapped many of the headaches he describes in syntax and semantics that alleviate most of the pain.

Read it and take its advice only the way the Buddha taught: “When you know for yourselves that these qualities are skillful; these qualities are blameless; these qualities are praised by the wise; these qualities, when adopted and carried out, lead to well-being and to happiness – then you should enter and remain in them.”

Robert Martin has a lot of experience. But his wisdom goes only so far. Think for yourself before adopting any programming advice– even mine.

Test everything. Even the advice you’re given.

In some programming languages there is an essential, powerful tension between two common pieces of advice: Don’t Repeat Yourself and Meaningful Names over Code Comments. Both are really good pieces of advice.

“Don’t Repeat Yourself” (DRY) means that if you can find an abstraction that allows you to avoid repetition in your code, you can remove the need to debug multiple code blocks if you find an error, and you can test the abstraction more reliably and more efficiently. The opposite of DRY is, of course, WET, “Written-out Every Time.”

“Meaningful Names over Code Comments” means that if you have strong, descriptive names for classes, functions, and variables, then code comments are often not merely unnecessary but possibly harmful as they drift out-of-date with the actual content of the code.

At my ${DAY_JOB}, I ran into this conflict in a big way. This example is in Python, but it applies to any language with metalanguage capabilities, which includes Ruby, Lisp, Rust, and even C++.

Let’s say you’re developing a CMS for lawyers, so you’ll have Courts, Cases, Lawyers, Tags, Outcomes, etc. Someone on the team has written an OpenAPI file describing the URLs that will allow authorized users to access these items, along with the expected data format of any objects that come out of it.

In test-driven development, your next step is to handle the following story: “Write a unit test that accesses the URL for Lawyers, verifies that at least one Lawyer exists in the test data, and validates that the body of the message matches the expected data format for Lawyer.” Another story would be “Fetch one Lawyer and verify…” Then, naturally, you write the code that does those things.
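A hypothetical sketch of that first test, assuming pytest with requests, PyYAML, and jsonschema; the URL, file name, and response shape are all invented for illustration:

    import requests
    import yaml
    from jsonschema import validate

    BASE_URL = "http://localhost:8000/api"  # invented for illustration

    def test_lawyer_collection():
        # Pull the declared shape of a Lawyer collection out of the spec.
        with open("openapi.yaml") as spec_file:
            spec = yaml.safe_load(spec_file)
        schema = spec["components"]["schemas"]["LawyerCollection"]

        response = requests.get(f"{BASE_URL}/lawyers")
        assert response.status_code == 200
        body = response.json()
        validate(instance=body, schema=schema)  # matches the declared format?
        assert len(body["items"]) >= 1          # at least one Lawyer in test data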

There are a few Python libraries that parse OpenAPI for you. And there are Python libraries that convert the OpenAPI specification into JSON, and since you’ve decided your CMS will speak mostly JSON, we’ll use that. Eventually we’ll get down to the schema collection.

The thing is, other team members want access to those schema objects as individual library items, so they can be precise and meaningful about the objects they’re manipulating. The OpenAPI library produces an object you access as spec["components"]["schemas"] that contains your schema items, and maybe you don’t want to be passing around or importing the entire specification when all you really want to know is “What does a Lawyer look like on the wire?”

So you devolve the library into exportable names:
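Something like this sketch; the file name and schema names are hypothetical, and I’m assuming PyYAML:

    # schemas.py -- the explicit, repetitive version.
    import yaml

    with open("openapi.yaml") as spec_file:
        spec = yaml.safe_load(spec_file)

    _schemas = spec["components"]["schemas"]

    lawyerSchema = _schemas["Lawyer"]
    lawyerCollectionSchema = _schemas["LawyerCollection"]
    courtSchema = _schemas["Court"]
    courtCollectionSchema = _schemas["CourtCollection"]
    caseSchema = _schemas["Case"]
    caseCollectionSchema = _schemas["CaseCollection"]
    # ...and on and on, one line per schema.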

That can get to be a lot of repetition. So I decided to DRY the code:
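My version looked something like this sketch (a reconstruction; the name-mangling rule is a guess):

    # schemas.py -- the DRY, metaprogrammed version.
    import sys
    import yaml

    with open("openapi.yaml") as spec_file:
        spec = yaml.safe_load(spec_file)

    _module = sys.modules[__name__]

    # Inject one module-level name per schema at import time: "LawyerCollection"
    # becomes lawyerCollectionSchema. None of these names appear anywhere in the
    # source code; they exist only in the running interpreter.
    for _name, _schema in spec["components"]["schemas"].items():
        setattr(_module, _name[0].lower() + _name[1:] + "Schema", _schema)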

There. Completely DRY. Hugely problematic.

First, unless you’re really comfortable with this kind of meta-programming, reaching into the Python VM and modifying the in-memory representations of library objects at run-time, this sort of thing looks like (and frankly, is) just pure shenanigans.

Secondly, this means that there now exist, in the running Python program’s actual, functioning namespace, several new object names that exist nowhere in the source code. You can’t search for where the object lawyerCollectionSchema is first constructed. It and its sibling objects are being built out of an external resource, a file written in YAML no less, read into the system the first time schemas.py is imported.

As someone who’s spent a lot of time studying the internals of Python’s import statement, I’m both comfortable with and completely aware of how arbitrary the import mechanism really is. My team is not made up of people like me and, indeed, they couldn’t parse this code without a walk-through and an introductory paragraph. They’d never seen this kind of metaprogramming before.

I went back and just did the repetition. Okay, I wrote a Python script to write it for me, and committed the script to git so others could follow along. Which is, itself, a kind of metaprogramming; it’s just one that leaves a permanent, visible trace of its output where not-so-senior devs can see and understand it.

Modern languages with this kind of accessibility to their internals allow developers to be succinct to the point of misleading obfuscation, even when not intended. Sometimes, to help less experienced devs out, you have to make your code explicit and WET.

One thing that irks me beyond all reason is Robert Martin’s seething dislike for databases. In every presentation he’s ever given, the one thing he’s sneered at is people who “write their code around a database.” In one of his lectures he says, “I don’t want to see a database in your design. I want to see the objects you’ll use, and I want their names and locations in your project file to reflect how you’ll use them.”

This is probably the lousiest piece of advice he’s ever given. Because let me say this once and simply:

SQL is a programming language, not a storage mechanism.

In one of Uncle Bob’s presentations, he talks a lot about a payroll system, and in his biggest example he talks about a function that calculates how much someone should get paid for travel expenses. The problems with the function are manifold: it embodies business rules in compiled code about how much someone is allowed to be reimbursed (literally encoding dollar values into the code), it calculates the difference between expensed costs and reimbursed costs, and it finishes by printing out a report of an employee’s expenses, with reimbursements and a flag that shows when the amount reimbursed is less than the amount expensed because the amount went over those hard-coded maximums.

One of Uncle Bob’s mantras is that “the design of a system, and the layout of its code, should reflect the purpose and architecture of its domain, not its framework.” In short, a system about Payroll should talk about Payroll.

So take a look at the number one, top example on Google of Martin’s Payroll demo. Specifically, scroll down to the section labeled “My Implementation.”

I see “entities,” “use_cases,” “adapters,” “controllers,” “views,” “boundaries,” and so forth. You know what I don’t see?

I don’t see Payroll mentioned anywhere.

I see a framework. A framework in the dictionary sense: “a basic structure underlying a system, concept, or text.” That is exactly what Clean Code is. This example is written in “Uncle Bob’s Framework,” which exists only in the minds of Uncle Bob’s minions and is imposed ad hoc by his loyal followers.

The Travel Expenses example is a precious one, because it exposes a lot of the tedious assumptions within Clean Code. The number one assumption is that, if you can encapsulate all of the functionality of a subsystem within a class declaration, this is somehow “better” than if the functionality is distributed across a couple of different files.

As I said in my previous example, an awful lot of code is nothing more than formatting data into human-readable results. So let me set out my thesis simply:

The difference between class Expenses { ... } and CREATE TABLE Expenses ( ... ) exists only in the minds of Clean Code fans.

Let’s look closely at Uncle Bob’s Travel Expenses example. In detail, the problem set is this:

Given a list of categorized expenses made by a user during a period of time, write a function that prints out a list of those expenses along with the category and type, the amount reimbursed, and highlighting those expenses which went over the maximum reimbursed amount.

So, we need the following objects in our system: User, Expense Category, Expense Type, and Expenses. And we can formulate this in the following way: “Expense types have an Expense category; Expenses have an Expense Type; Users have Expenses.” You can even diagram that out, but it’s the most basic Entity Relationship Diagram in the world.

What would that look like in SQL? Well:
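Something like this sketch, Postgres-flavored; the table and column names are my guesses, and the runnable version lives in the GitHub repo linked below:

    -- Hypothetical reconstruction: four statements describing the shape of things.
    CREATE TABLE users (
        id       SERIAL PRIMARY KEY,
        username TEXT NOT NULL UNIQUE CHECK (username <> '')
    );

    CREATE TABLE expense_categories (
        id   SERIAL PRIMARY KEY,
        name TEXT NOT NULL UNIQUE CHECK (name <> '')
    );

    CREATE TABLE expense_types (
        id          SERIAL PRIMARY KEY,
        category_id INTEGER NOT NULL REFERENCES expense_categories (id),
        name        TEXT NOT NULL UNIQUE CHECK (name <> ''),
        maximum     MONEY  -- NULL means "no cap on reimbursement"
    );

    CREATE TABLE expenses (
        id      SERIAL PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users (id),
        type_id INTEGER NOT NULL REFERENCES expense_types (id),
        event   TIMESTAMP NOT NULL,
        amount  MONEY NOT NULL
    );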

Now, just look at the amount of domain-driven knowledge we get for free! All because we chose a language that has the relational algebra and its set-theoretic basis at its heart, we get: guaranteed uniqueness of users, guaranteed uniqueness of expenses, categories, and types; guaranteed non-empty strings for documentation purposes, and we get a built-in money type that’s absent from just about all damned languages. (Haskell and C# have money types, just to cut you off if you’re going to “actually…” at me.) No references to users can exist that point to non-existent users. An expense that references a non-existent expense type can never happen, and the expense types can be updated without having to recompile an entire module. In a general-purpose language like Java or C#, all of those guarantees would require hand-coding, and the writing of tests, and the checking of assertions. One of the best things about modern languages like Haskell or SQL, or like well-written Rust, is that not only do you get those guarantees for free, you can’t even write tests for those guarantees because violations are impossible to generate in those languages.

A system like this requires a lot more machinery to keep track of the changes made to the expense accounting system, but the history of changes and decisions about how expenses are tracked and monitored is better kept within the system than as some developer’s casual annotation in the body of some version control system somewhere.

Given that this is what we have (and it emulates Uncle Bob’s code exactly), what would the expense report function look like? We have these relationships that we would like to assemble together into a report. That report is simple:
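Roughly like this sketch (again a reconstruction; see the repo for the real thing):

    CREATE VIEW expense_report AS
    SELECT u.username,
           e.event,
           t.name AS expense_type,
           c.name AS expense_category,
           e.amount,
           t.maximum,
           -- Reimburse the amount, capped at the type's maximum.
           CASE WHEN t.maximum IS NOT NULL AND e.amount > t.maximum
                THEN t.maximum ELSE e.amount END AS reimbursed,
           -- Star the rows where the cap kicked in.
           CASE WHEN t.maximum IS NOT NULL AND e.amount > t.maximum
                THEN '*' ELSE '' END AS over
      FROM expenses e
      JOIN users u              ON u.id = e.user_id
      JOIN expense_types t      ON t.id = e.type_id
      JOIN expense_categories c ON c.id = t.category_id;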

A couple of things of note here: the two CASE statements exist strictly to serve the illustrative features of the report: if the maximum was exceeded, show that only the maximum will be reimbursed, and put a star next to that value. These two functions are independent of the SELECT and could be isolated into their own PL/pgSQL functions. Note, too, that they are independent of the values on which they operate: they do not need to be recompiled, or even touched, if the company decides to change the maximum values of a reimbursement.

Also note that this is a VIEW, that is, a stored query that runs only when we ask for it; this table does not exist, and its contents are generated on demand.

Want to know the expenses for a given user?
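One line, give or take (hypothetical, like the rest of this reconstruction):

    SELECT * FROM expense_report WHERE username = 'Uncle Bob';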

And then there’s that expense summary:
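Also a one-liner in this sketch:

    SELECT username,
           SUM(amount)     AS total_expensed,
           SUM(reimbursed) AS total_reimbursed
      FROM expense_report
     GROUP BY username;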

And that’s the whole thing. Four statements describing the shape of objects, and three functions describing how those objects should be processed to make it easy for humans to understand them. That’s all that matters.

A completely runnable version of this code, for Postgres, is available at Github. And the output is pretty!

 username  |        event        | expense_type | expense_category | amount  | maximum | reimbursed | over 
-----------+---------------------+--------------+------------------+---------+---------+------------+------
 Uncle Bob | 2020-01-08 04:05:06 | Dinner       | Dining           |  $45.00 |  $50.00 |     $45.00 |  
 Uncle Bob | 2020-01-09 09:05:06 | Breakfast    | Dining           |  $12.00 |  $10.00 |     $10.00 | *
 Uncle Bob | 2020-01-09 16:05:06 | Air Travel   | Travel           | $250.00 |         |    $250.00 |  
 Uncle Bob | 2020-01-09 19:05:06 | Taxi         | Travel           |  $27.00 |         |     $27.00 |  
 Uncle Bob | 2020-01-09 21:05:06 | Dinner       | Dining           |  $53.00 |  $50.00 |     $50.00 | *
 Uncle Bob | 2020-01-09 22:05:06 | Other        | Other            | $127.00 |   $0.00 |      $0.00 | *

I’ve met all of Bob’s requirements: every object has a single responsibility, and the messages it takes reflect that reality. Every object is OPEN for extension, but CLOSED for modification. We’ve avoided subtypes, going for composition instead. Our interfaces are appropriately segregated, and we are completely dependent upon the interfaces here, not the implementations.

We don’t even know what the implementations are. And we don’t care.

So we’re SOLID, as far as SOLID goes. The ‘D’ in SOLID is a deceit: interfaces are language-specific, and to claim that C++, or C#, or Java, or whatever language, has “more pure interfaces” than SQL is to buy into that deceit. My UML is the only truly abstract description of the system, and SQL implements the full interface described in the UML because, again, SQL is a programming language and not a storage mechanism.

Use it as such.

Uncle Bob has a passage early in his book where he criticizes the function below, calling it “too long” and “missing context”. I agree that it’s cluttered and hard to read, but his representative solution is, frankly, absurd. He turns this into a C++ class with static methods for providing the modifiers to the text, all the while ignoring the huge elephant in the code: it does two things.

Here’s how you should write this code:
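The following is my sketch of that version in Rust; I’m assuming a function like Martin’s guess-statistics example (“There is 1 a”, “There are no x’s”), and the names here are guesses:

    // The match expression is the lookup table: count -> (verb, number, plural).
    fn guess_message(candidate: char, count: u32) -> String {
        let (verb, number, plural) = match count {
            0 => ("are", "no".to_string(), "s"),
            1 => ("is", "1".to_string(), ""),
            n => ("are", format!("{}", n), "s"),
        };
        format!("There {} {} {}{}", verb, number, candidate, plural)
    }

    // Printing is a separate, trivial wrapper, so the formatter can be tested.
    fn print_guess_statistics(candidate: char, count: u32) {
        println!("{}", guess_message(candidate, count));
    }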

This is a look-up table. That’s all it is: using an algorithmic guide, you’re looking something up, translating one thing into another. Formatted this way, it’s shorter, more readable, and more extensible. This version is a little performance- and memory-wonky, using the format!() macro twice and string-ifying statics, but worrying about that is a premature optimization waiting to take root; as it is, this function pair is close to ideal until profiling tells you otherwise.

And while I’m being snarky about using a grown-up language, there’s nothing in C++ that says you couldn’t achieve the same results. Rust’s match is little more than a switch statement turned into an expression, with move semantics keeping ownership of those strings straight. C++ has both lambda expressions and move semantics. Java has had lambdas since Java 8, and garbage collection from the start. Go is, well, Go; you get what you pay for.

I’m not asking for much. I just want you to stop writing code like it’s 1998. I was there. It wasn’t fun. Writing code isn’t a privilege, and lots of extra lines aren’t extra value; they’re extra liability.

Oh, notice something else? My version separates the formatting from the printing: it can be tested. Write the tests first.

Lab notes for the first two weeks of January.

I’ve been fiddling with a branch of the PamSeam project named Lattice, in an effort to streamline the base and speed up the runtime. The project implements three algorithms, and all of them have similar processes. The basic engine is also very slow, with lots of large allocations and repeated copying of the image.

The basic idea of the Lattice project is to turn the image into a graph, one node for each pixel, with eight pointers to the neighboring nodes.

The Lattice graph is a generic container; we can put anything at all into it, so the data structure the Lattice holds can contain the image, any metadata maps (such as an energy or flow map), the accumulated energy or flow data from processing, and the upward pointers that calculate the final seam.
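In code, the core of it looks something like this sketch (the field names are my shorthand, not necessarily what’s in the branch):

    // One node per pixel; neighbors are offsets into the Lattice's flat vector.
    struct Node<T> {
        data: T,             // pixel, energy-map entry, flow data, seam pointers...
        neighbors: [u32; 8], // offsets of the 8 neighboring nodes, with a
                             // sentinel value for off-the-edge directions
    }

    struct Lattice<T> {
        width: usize,
        height: usize,
        nodes: Vec<Node<T>>,
    }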

The Lattice can remove a seam without copying; that is, the pointers from left-to-right or top-to-bottom can be set to “jump” the removed seam without having to copy the data. This can be done all at once, removing the current need to “seam and copy” both the energy map and the original image. The resulting image can be extracted from the Lattice data at the end.

I believe this process will make removing a single seam slower, but will significantly speed up the process of removing multiple seams. Also, a known problem with the energy map is that removing a seam “damages” not just the pixels immediately adjacent to the seam, but also the pixels of any residual seams the removed seam traversed. With the lattice, before a seam is removed, the energy-map regions to the left and right of (or above and below) the scheduled seam can be recalculated, enabling the algorithm to detect when the recalculation is no longer creating changes and halt any further processing. It might even be possible to limit recalculating the seams themselves to only the damaged region.

There are problems with this solution. The first is that it takes significantly more memory to store a single image for processing. While we may be able to eliminate the multiple-planes problem for pixels, energy, and total energy, we’ve replaced those with a [u32; 8] array of neighbor offsets, and that’s a lot of memory.

Next, while the graph itself is represented by a vector and a node’s neighbors are stored as offsets into the vector (a common representation technique in Rust), the graph itself is not coherently “chunkable” the way a vector would be. This makes multi-threaded processing problematic, as Rust absolutely does NOT want you to have multiple threads writing to the graph simultaneously. Having atomic sub-graphs doesn’t seem to be an answer, as even linear processing relies on having access to neighboring nodes that, regardless of which algorithm is used, at least one other thread must also have read-access. This could theoretically be solved by having one graph for reading and another for writing, but the memory usage quickly becomes burdensome.

Third, the “graph walking” algorithms become fairly complex. The current solution is to create a Step and Walker; the Step knows where you are and the direction you are headed, and as the Walker takes Steps, it returns the desired contents of the node at that point. This solution allows the Step to modify its direction, which is needed for the meandering path a seam may take, and it allows the Step to terminate early, which is needed for the damaged-seam management pass. The biggest headache here is lifetime management, as the Step and Walker need similar lifetimes, but they’ll also be taking the Lattice as an argument, and I have yet to figure out the lifetime relationships between the two.

I’m confident I can get this to work, but as I now have a full-time job, it’s going much slower than I’d like. It’s not exactly something they need at the office, so I’m restricted to working on it only in my spare time, of which I have very little these days.

31Dec

Math is no Harder than Drawing

Posted by Elf Sternberg as Uncategorized

I recently read an article on the economics of ancient Rome that suggested that, while the written arts, especially those that involved education or erudition, were highly valued, the visual and performance arts were not. The visual arts, especially, were regarded as the work of the lowly and demeaned, as almost all the arts we see from Rome, Pompeii, and Herculaneum, all of the frescoes and mosaics that have survived to this day, were made by slaves.

[Image caption, from an unrelated inline photo: “My great legacy to the world, a small bit of observability in web server configuration.”]

Skilled slaves, but slaves nonetheless. The visual arts could be bought and sold just as readily as a loaf of bread, and unlike bread, a good mosaic would last for decades.

This mentality is still at play. And I don’t think it’s going to go away anytime soon. The reason artists struggle is because a lot of people look at visual art and say, “I could do that. With enough time and study, I could do that.” Textbooks remind us time and again that the illustrative arts “can be learned by anyone,” that we could all learn to draw what we see, or even what we imagine, with just a few dozen hours to get all the basics down. You don’t find people paying for art much because there’s just so much of it, and only so many walls to hang it on.

Here’s the thing about computer programming: it’s exactly the same. Somehow, because it involves math (although really, most programmers use only arithmetic; I only recently started to use actual maths, and I’ve been doing this for 30 years), a lot of people went through school, made the decision that they “didn’t have a head for math,” and so decided that “no matter how much time and study I give it, I couldn’t learn that.”

It’s not true. But as long as the majority of the population believes it to be true, and continues to be opposed to learning it, they’re going to keep paying computer programmers a lot more than they are artists.

Which is a shame. I rarely feel like I’ve contributed much of greatness to the world, but I love art and artists, and have a lot of paid-for art that, sadly, I haven’t yet had the time to mount and display. Artists consistently make me happy in ways coders only sometimes do.

I realized the other day that my role in my current job requires that I do something very, very strange, as far as I’m concerned. I realized there are some things I have to avoid learning, and I have to avoid them quite strenuously. I have to know they exist, but I have to not know any more than that.

One of my tasks is to help software engineers write their own tests and documentation. To be good at that, I have to help them focus on the kind of documentation they’re writing, and at the moment that documentation is “pager duty” how-tos: short instructions for how human beings must respond to problems and issues with the running system.

To that end, I have them focusing on “What are the symptoms we’ve seen? What are the correct responses? If the response can be automated, why hasn’t it been?” (Answers such as “It would take too long” or “It’s too rare to justify the development cost” can be debated; “No one knows how” is not acceptable.) And then “Who knows how to fix the issue?”

I have to not know the answers to these questions. Very deliberately, I must avoid knowing the names and faces associated with the answers. Because that way, when I’m proof-reading the documentation, these issues jump out easily. Questions like “Who would I go talk to?” come easily to mind, and I can pass them back to the engineer.

The best part is how good they’ve been about all this. I really appreciate and like the people I’ve been working with. People diss millennials all the time for their odd, new work-ethic, but I like it: it’s very emotionally aware and deliberate. These people know that emotional labor is work, but the work has a great pay-off. I just have to work hard to keep up and keep participating.

I had an insight the other day, while working with one of the younger colleagues at work, about why I come up with answers and write code so much faster than he does. It’s not that he’s not as smart or as talented as I am, it’s that we look at programming problems in a completely different way.

In a paper from 1974 entitled How Big Is a Chunk?, Herbert Simon attempted to codify what makes beginning chess players different from grand masters. The answer, Simon believed, is that beginners see pieces and masters see “chunks”: collections of pieces that form a strategic unit on the board, and that can be exploited against other strategic units. Simon built on George A. Miller’s notion of “the magical number seven, plus or minus two”: the biggest “chunk” that a human brain can handle. Miller argued that we remember a phrase of “seven words,” which could have as many as two dozen syllables, so the “word” was a piece of the chunk, not the syllable, and so on. Simon believed that a “chunk” of chess could have seven “pieces in position” in it, and that a grand master of chess could have anywhere between 25,000 and 100,000 chunks in his head, ready to deploy at a moment’s notice. This is why a grand master can play dozens of games of chess against beginners simultaneously; he doesn’t have to memorize the state of every board, he just has to look at each board, identify its riskiest chunks, and respond accordingly.

My colleague is a huge fan of test driven development (TDD), but what we were developing as a pair programming exercise was a view: there was minimal logic to be written, only some dynamic HTML (well, React) to take data that was already available and present it to the end user in a highly digestible format. Our goal was to remove the repetitive legends automatically generated for many graphs on a page, and replace it with a deliberately generated legend for all the graphs.

“Let’s do this TDD style. Put an empty object on the page,” I suggested. “Okay, make it show the wrong thing. Now, we know the legend was there before with the right labels. We had to supply those labels, let’s go find where they’re generated. How are they cross indexed with the colors? Write that down. Okay, let’s change the wrong thing to show the names. Now add a rectangle with the colors.” When we had that, I said, “Okay, I think you know how to use Flex to lay out the legend, add a title, all that, right?” He agreed that he did. “Oh, and people print this out. Remember that thing we did where we made sure that there were only two graphs per page, and used the CSS ‘no page break’ thing to make sure?” He nodded. “Each page will need a legend, but you’ll have to hide all but the last one in display, and show all of them in print. You know how to do that too, right?”

It all took us about two hours. When it was done, he shook his head and said, “I would never have gotten there that fast. This would have taken me days.”

I assured him he’ll get there. But I realized that the reason I got us there so quickly is that I had a chunk in my head about how HTML works, and another chunk about where the data came from, and a third chunk about how Javascript merges the two. He doesn’t have that experience yet, so he sees rows and <divs> and arrays and classNames and color codes. But that’s the underlying stuff, the pawns and rooks of the game. That’s not the chunks.

The only question left is how to teach these people how to chunk stuff, and what to chunk, and why.
