17Sep

Notes on using the Rust image library

Posted by Elf Sternberg as programming, Rust

I have finally figured out how to use the image library for Rust, and it’s not obvious, especially not for internally generated images. The library does a very good job of hiding complicated implementation details when you load a file off the disk, automatically figuring out the file format and providing some baseline utilities. Zbigniew Siciarz has a pretty good writeup of the basic implementation, but for the Mandelbrot renderer, I wanted to get a little more specific.

So here are the basics. Images are made up of Pixels, and every Pixel is an array of 1 to 4 numbers (of any machine size; common ones are u8 and u32, but you can create Pixels of type f64 if you like). A single-item Pixel is greyscale; a 3-item Pixel is RGB, and a 4-item Pixel is RGBA (alpha).

Pixels go into implementations of GenericImage, the most common of which is ImageBuffer. ImageBuffer controls access to the underlying representation, ensures bounds checking and provides limited blending capabilities.
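For instance, here’s a minimal sketch of generating an image in memory, assuming the image crate’s ImageBuffer::from_fn constructor and its extension-based save; the API has shifted between releases, so treat this as a shape, not gospel:

use image::{ImageBuffer, Rgb};

fn main() {
    // Build a 256×256 RGB image; from_fn calls the closure once per
    // pixel, and the buffer handles bounds checking for us.
    let img = ImageBuffer::from_fn(256, 256, |x, y| {
        Rgb([x as u8, y as u8, 128u8])
    });
    // The encoder is chosen from the file extension.
    img.save("gradient.png").expect("failed to write image");
}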

To colorize my image, I needed to create the colormap, and then map the colors to a new image. And then I learned that the PNM colormap handler doesn’t handle Pixels.

PNM is an ancient image format, which may explain why I enjoy it. It’s simply a raw sequence of bytes, RGBRGBRGBRGB…, with a header giving the image’s width and height. The PNM handler for images.rs can only handle flattened arrays that look like a PNM array.

So for the Mandelbrot colorizer, the solution was to create a new array three times as long as the underlying Vec for my original image, sequence through the pixels, map each one to a color, and push the color’s components onto the new array. Which is annoying as heck, but it works.
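In sketch form, assuming a hypothetical 256-entry colormap, the workaround looks like this:

// Flatten a greyscale buffer through a colormap into the RGBRGB…
// byte layout the PNM handler expects. The colormap here is
// hypothetical; mine came from the colorizer described above.
fn colorize(grey: &[u8], colormap: &[(u8, u8, u8); 256]) -> Vec<u8> {
    let mut flat = Vec::with_capacity(grey.len() * 3);
    for &g in grey {
        let (r, gr, b) = colormap[g as usize];
        flat.push(r);
        flat.push(gr);
        flat.push(b);
    }
    flat
}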

I’m sure there’s a better way to do this. And the constant remapping from u32 (which the image array understands) to usize (which is what slices are made of) and back got tiresome, although the compiler was helpful in guiding me through the process.

Brad Delong recently pointed me at Susan Dynarski’s article For Better Learning, Lay Down the Laptop and Pick up a Pen, in which she reviews the evidence that laptops, because of their speed and utility, actually inhibit learning by allowing students to take notes too quickly, and by giving students a universe of alternative distractions should the instruction get too boring. I’ve found Dynarski’s article to be absolutely true.

I recently finished a small homework assignment. After three days of working with it on the computer, I sat down with a sheet of paper and worked it out in an hour. I wrote it snowflake fashion: I described it, then iterated on the description until I had a complete description, with the public API underlined. It took another hour to implement it.

This is my lesson: if I can’t explain it to myself on paper, then I can’t explain or implement it on the computer. Ever. It just doesn’t work that way, at least not with software. The ideas don’t stick unless I’ve written them out. Every few years I have to re-learn this lesson, believing that I can short out the learning curve and get straight to the meat of the problem. But the meat of the problem isn’t in the computer, it’s math and common sense, and those take time, and paper, and a pencil.

One of the horrifying follow-on realizations this led me to was that, when I was at my last job, I was very rarely writing software. I wasn’t working out algorithms or implementing new and innovative processes. I was exploiting lenses.

In software development, a lens is an algorithmic function that allows the user to focus in on a specific part of data without the user having access to, or having to know about, the rest of the data. (There’s much more to it than that, including a deep mathematical description, but that’s the basic idea.) And most of what I was doing was writing shims, to be part of a Kubernetes ecosystem, to help internal users monitor the health and well-being of the system.
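In Rust terms, a lens might be sketched as a paired getter and setter; the names here are mine, not any particular lens library’s:

// A lens focuses on one field of a larger structure without
// exposing the rest of it.
struct Lens<S, A> {
    get: fn(&S) -> A,
    set: fn(&S, A) -> S,
}

#[derive(Clone)]
struct Server { host: String, port: u16 }

fn main() {
    let port: Lens<Server, u16> = Lens {
        get: |s| s.port,
        set: |s, p| Server { port: p, ..s.clone() },
    };
    let a = Server { host: "localhost".into(), port: 8080 };
    let b = (port.set)(&a, 9090);   // a is untouched; b is a new value
    println!("{} -> {}", (port.get)(&a), (port.get)(&b));
}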

Which is all well-and-good, but when you realize that you’ve married a massive protocol like HTTP to a very simple collection of filters, then come up with your own peculiar way of specifying to the filters who you are and what you want, over and over again… well, that can lead to a bit of burnout. Every job is easy, but every job whispers, "This is stupid and should have taken three days, not four weeks."

With very rare exception, this is what programming is these days. All software is little more than views into specific arrangements of data. This is one of the reasons functional programming is becoming so prevalent: that’s what functional programming believes in its heart. (There is a hard part, dealing with philosophical notions of time, but until other people can understand what the heck Conal Elliott is talking about, I’ll just leave that there.) Sometimes getting the data into that specific arrangement is a story about performance, timeliness, memory, and storage, but that’s pretty much all it is.

13Sep

Mostly studying this week.

Posted by Elf Sternberg as Uncategorized

Happy Thursday!

Thursday is the day where I look back upon the week and consider what I’ve learned. Last week I completed the main Buddhabrot algorithm and got it to work.

Studying: The Little Schemer

This week was primarily studying, so there’s not a lot of code to consider. Instead, I worked my way through The Little Schemer, a book that teaches how to write Scheme and, by doing so, master the Y Combinator (the mathematical formula, not the orange site people), and eventually meta-circular interpretation. The latter wasn’t as interesting to me as I’d hoped, as I’d already done it once, writing a meta-circular interpreter on top of a Lisp I’d already written which, in turn, was written in Coffeescript, because that’s how I was rolling back then.

It’s a rite of passage to finish The Structure and Interpretation of Computer Programs and then write your own programming language in it. I’m up to section 3.3 and am currently a little stuck on the “write your own deque” section. One thing I dislike intensely about MIT Scheme (and Schemes in general) is the way objects and pointers just exist, willy-nilly, and knowing when you’re using the original handle and when you’re using a reference is all a matter of keeping it in your head. I think I’m becoming more of a Rust partisan every day.

Speaking of Rust, I started an implementation of Brzozowski’s algorithm in Rust. It’s a very naive sort of thing, based on my earlier attempt with Python. It doesn’t work yet and, according to Matt Might, it will probably have terrible performance and memory issues, but it’s a first draft. I may have to figure out how to do memoization and some macro handling / inlining. The first time I wrote it, I realized where the memoization would have to occur to make it performant, but that knowledge seems to have faded from my brain and I’ll have to re-learn it all over.
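For my own future reference, here’s a naive sketch of the derivative core in Rust; the names are mine, and there’s no memoization, which is exactly the performance hazard Matt Might warns about:

// Regular-expression matching via Brzozowski derivatives: the
// derivative of a regex with respect to a character c is a new regex
// matching everything the original matched after consuming c.
#[derive(Clone)]
enum Re {
    Empty,                  // matches nothing
    Eps,                    // matches the empty string
    Chr(char),              // matches a single character
    Alt(Box<Re>, Box<Re>),  // alternation
    Seq(Box<Re>, Box<Re>),  // concatenation
    Star(Box<Re>),          // Kleene star
}

fn nullable(re: &Re) -> bool {
    match re {
        Re::Empty | Re::Chr(_) => false,
        Re::Eps | Re::Star(_) => true,
        Re::Alt(a, b) => nullable(a) || nullable(b),
        Re::Seq(a, b) => nullable(a) && nullable(b),
    }
}

fn deriv(re: &Re, c: char) -> Re {
    match re {
        Re::Empty | Re::Eps => Re::Empty,
        Re::Chr(ch) => if *ch == c { Re::Eps } else { Re::Empty },
        Re::Alt(a, b) => Re::Alt(Box::new(deriv(a, c)), Box::new(deriv(b, c))),
        Re::Seq(a, b) => {
            let left = Re::Seq(Box::new(deriv(a, c)), b.clone());
            if nullable(a) {
                Re::Alt(Box::new(left), Box::new(deriv(b, c)))
            } else {
                left
            }
        }
        Re::Star(a) => Re::Seq(Box::new(deriv(a, c)), Box::new(Re::Star(a.clone()))),
    }
}

fn matches(re: &Re, input: &str) -> bool {
    let mut cur = re.clone();
    for c in input.chars() {
        cur = deriv(&cur, c); // this is the step that wants memoization
    }
    nullable(&cur)
}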

Plans: Next Week

Next week, fate willing, I’m going to:

  • Implement colorized versions of the Mandelbrot and Buddhabrot renderers.
  • Read three chapters of The Seasoned Schemer
  • Finish chapter three of Structure and Interpretation of Computer Programs
  • Write some damn documentation for the images.rs library.

Future Plans

There are a number of other, unfinished projects on my plate. Most notably, I haven’t finished porting Daniel Givney’s Assembly Tutorials to 64-bit ASM, as promised.

And then I’m going to dive into writing an interpreter, a virtual machine, and a compiler. All the stuff I never learned how to do because my parents didn’t believe, in 1985, that a programming career would ever be lucrative and would only pay for my college if I majored in Accounting– excuse me, I meant “Business Computing”– instead. My ideas for it are incredibly vague, but a few basics are clear.

Excessively Ambitious:

  • It is going to be a Scheme.
  • It is going to be strongly typed.
  • It is not going to be beholden to the naming history of Lisp or Scheme. My first thought would be to steal from Hy, using Ghostwheel or Compact Notations as inspiration for typing annotations.
  • It will use type inference wherever possible.
  • It will use a sane import scheme.
  • It will rest upon an Intermediate Representation that will not be married to the syntax. Alternative syntaxes should be easy.
  • I want it to have the following:
    • Garbage collection: a tri-color parallel garbage collector
    • A green threads implementation, like Go routines
    • Batteries included
    • Computation expressions
    • Fat executables (all files and libraries included in a single, runnable format)
  • It would be silly for me to want:
    • Software-transactional memory
    • Type Providers
  • It would be madness to provide domain-specific stuff like:
    • A Grammar interface
    • A type-driven graph engine (the underlying technology of spreadsheets)
    • A type-driven pluggable buffer engine (the underlying technology of databases)
    • A high-performance rope or piece table implementation (the underlying technology of text processors)
    • A document object model (the DOM is an obvious one, but what about OpenDocument?)

For all I know, there is a strong conflict among these different goals, in that, for example, grammars might be incompatible with strong typing, or at least difficult to implement correctly. This is a learning experiment.

And here’s the thing: I don’t just want to write a programming language. I want to write useful things in my programming language. That’s the whole point. If I can’t write useful things in it, then it’s not a useful language. Yes, there are days when I feel like this plan veers into Programming Language for Old Timers territory, but that’s also the point: I know enough to feel modern, but I want something that is expressive to me.

This week, I finished off the basic Programming Rust exercises by extending the Mandelbrot example in the book to produce Buddhabrots instead. In my last post, I mentioned that I’d gotten the Buddhabrot working, but only in a very basic way. I was unhappy with the result; my point-to-pixel mapper was giving me weirdly squashed results, and what I was producing didn’t look like many of the examples found in the wild. The O’Reilly Mandelbrot exercise extended the code to work on multi-core machines, so I tried to do that with the Buddhabrot.

The Mandelbrot set divides the complex plane into two sets: for a given complex point c and a starting complex point z = 0 + 0i, those points for which the iteration z = z² + c escapes to infinity, and those for which z remains within a bounded region, usually the circle of radius 2 centered at the origin. For the starting points that escape, the iteration at which they do so gives a grayscale value, from which we generate the Mandelbrot set.

The Buddhabrot says, “Wait a minute: for all those points that don’t go to infinity, what if we colored those instead?” For each of those, at every iteration we track the points the orbit visits within the black center of the Mandelbrot set, and the number of times an orbit lands on a given pixel gives us a count we can then render as greyscale.
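In code, the per-point work looks something like this sketch; the naming is mine, and it records only the orbits that stay bounded, which is the variant described above:

// Track the orbit of c; if it never escapes within `limit` steps,
// bump the histogram bin for every pixel the orbit visited.
fn accumulate(c: (f64, f64), limit: u32, width: usize, height: usize,
              counts: &mut [u32]) {
    let mut z = (0.0f64, 0.0f64);
    let mut orbit = Vec::with_capacity(limit as usize);
    for _ in 0..limit {
        z = (z.0 * z.0 - z.1 * z.1 + c.0, 2.0 * z.0 * z.1 + c.1);
        if z.0 * z.0 + z.1 * z.1 > 4.0 {
            return; // escaped: this variant ignores the orbit
        }
        orbit.push(z);
    }
    for p in orbit {
        // Map the square from -2-2i to 2+2i onto the pixel grid.
        let x = ((p.0 + 2.0) / 4.0 * width as f64) as isize;
        let y = ((p.1 + 2.0) / 4.0 * height as f64) as isize;
        if x >= 0 && (x as usize) < width && y >= 0 && (y as usize) < height {
            counts[y as usize * width + x as usize] += 1;
        }
    }
}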

So: the problem here is easy. For the Mandelbrot set, each point is independent of every other point on the screen. For the Buddhabrot, this isn’t true: each pixel’s value can depend on the orbits of many different starting points. The Mandelbrot set can be divided into regions and each region handed to a different core; for the Buddhabrot, each thread has to build up a map of the whole plane.

Even on my monitor, which is 3840×2160, using 16-bit greyscale and 8 threads, that was still under a gigabyte of memory. Yay for modern computers!


One of the biggest problems I had learning this was the notation for Rust mutation. And this is before we get to things like reference counters, boxes, and reference cells!

Here’s what I need to remember:

let x = 5; // defines a location, 'x', and puts 5 into it.
// x = 6; // illegal: x isn't mutable.

let mut x = 5; // legal: This shadows the value above.
x = 6; // assigns 6 to 'x', which is legal because x is now
       // mutable.  The types must match!
{
    let y = &mut x; // 'y' is a mutable reference to 'x'. 'y' cannot be
                    // reassigned, but what it points to can be mutated:
    *y += 1;
}

That covers the basics, and helps a lot. The other thing is that the &mut syntax inside function signatures has a subtle meaning: the function takes a mutable reference, and there can only be one of those at a time.
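A tiny example of that rule, for my own future reference:

// `plane` is an exclusive, mutable borrow: for the duration of this
// call, the caller can hold no other reference to the same data.
fn brighten(plane: &mut [u8], amount: u8) {
    for px in plane.iter_mut() {
        *px = px.saturating_add(amount);
    }
}

fn main() {
    let mut plane = vec![10u8; 4];
    brighten(&mut plane, 5); // the exclusive borrow ends here
    assert_eq!(plane, vec![15, 15, 15, 15]);
}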


The rules about mutability in Rust just killed me. I wanted to run my Buddhabrot generator the way I would expect in any other object-oriented language: multiple planes on which to operate, each one running independently. I couldn’t get it to work with a vec of Plane objects, because Rust really didn’t want to move those massive planes around.

The only way to get it to work was to generate one gigantic vec, then use Vec::chunks_mut to break it up into subplanes, one for each thread, and hand them over to an immutable processor that did all the work of calculating the Buddhabrot for that plane. It worked, but what a pain in the neck.
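A minimal sketch of that shape, using today’s std::thread::scope where my original used crossbeam, and with render_share standing in for the real Buddhabrot work:

use std::thread;

// One gigantic vec, carved by chunks_mut into one whole plane per
// thread; each thread owns its chunk exclusively, so no locking.
fn buddhabrot_parallel(width: usize, height: usize, threads: usize) -> Vec<u32> {
    let plane = width * height;
    let mut storage = vec![0u32; plane * threads];
    thread::scope(|s| {
        for (i, counts) in storage.chunks_mut(plane).enumerate() {
            s.spawn(move || render_share(counts, i, threads));
        }
    });
    // Fold the per-thread planes into a single histogram.
    let (first, rest) = storage.split_at(plane);
    let mut result = first.to_vec();
    for chunk in rest.chunks(plane) {
        for (dst, src) in result.iter_mut().zip(chunk) {
            *dst += src;
        }
    }
    result
}

// Stand-in for the immutable processor described above: the real one
// iterates this thread's share of sample points into `counts`.
fn render_share(counts: &mut [u32], share: usize, of: usize) {
    let _ = (share, of);
    counts.iter_mut().for_each(|c| *c = 0); // placeholder
}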


The other thing I did this week was rip through the first 100 pages or so of The Little Schemer, a classic book teaching you how to use Scheme. I went through it using Racket, which was lovely, and taught me a lot about how Scheme works inside. I have two thoughts as I return to the Scheme land after wandering around in Rust: I really, really want better guarantees in my language that I won’t be passing around non-containers to functions that expect containers, and I really want better guarantees that the types I pass to functions are understood by those functions ahead of time.

Maybe I want Haskell. Or some kind of typed scheme. Or something. But this exercise was just… weird.

27Aug

The Mandelbrot and the Buddhabrot

Posted by Elf Sternberg as Uncategorized

Last week, I started knuckling down and practicing Rust for seriousness. I’ve been kinda skating along the top, not learning it in any real way; I’d been doing that for a while at my last job, where they insisted I use Go instead. I’m not fond of Go; I think it’s an amazingly powerful idiomatic garbage collection and inter-thread communications framework on top of which is built the godawfulest ugly language I’ve ever seen.

Last week I figured out how to render the Mandelbrot set. It’s the last exercise in chapter two of Programming Rust, and after I was done writing it, I decided to see if I could make a few improvements.

So here’s the basic thing about the Mandelbrot set: it’s a little arbitrary. The Mandelbrot set lives in the ℂ plane in a subplane ℤ described by the points 2+2i to -2-2i; the objective is to pick a subregion of that square and map it to an arbitrary pixel plane, say a 1024×768 image. For each pixel in that pixel plane:

  • Figure out what point c on the ℂ plane most closely maps to that pixel.
  • Start with a complex number z representing the origin: 0+0i.
  • Iterate on z = z² + c.
  • If z never leaves the subplane ℤ, leave the pixel black.
  • If z does leave the subplane, the iteration number at which it crossed the subplane border can be used to map to a color (see the sketch after this list).
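The heart of that list, written the way the book does it with the num crate’s Complex type, is roughly:

use num::Complex;

// Returns Some(i), the iteration at which c's orbit left the
// radius-2 circle, or None if it stayed inside for `limit` steps.
fn escape_time(c: Complex<f64>, limit: usize) -> Option<usize> {
    let mut z = Complex { re: 0.0, im: 0.0 };
    for i in 0..limit {
        if z.norm_sqr() > 4.0 {
            return Some(i); // the escape iteration maps to a gray level
        }
        z = z * z + c;
    }
    None // never escaped: the pixel stays black
}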

Note that the base Mandelbrot algorithm returns only a single value, and is appropriate only for greyscale images. The typically colorful Mandelbrot images you see are mappings of arbitrary colormaps to the grayscale values, much the same way GIFs are colored. There are other algorithms for choosing colors, and I haven’t looked much into them.

The source code for my efforts is on my GitHub under programming_rust/mandelbrot. There are two changes to the example in the book:

  1. I realized early on that every function was receiving the same two variables: a tuple describing the pixel plane and a tuple describing the complex plane. These values never change during the operation of a single render, and since I have programmed a bit of Rust over the past year or so, I realized they could best be put into a struct and have implementations range over them, so now we have an object Plane (sketched after this list). This makes the function pixel_to_point() something that just takes a pixel and returns a complex number. It also means that render() no longer has to recalculate the mapping on every iteration; it’s done once and done for good.
  2. I decided to make my output into PNM. I’m a big fan of the Portable Anymap format. I know it’s not most people’s favorite, but it works for me and I know how to process it well.
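Here’s a sketch of what that Plane refactoring looks like; the field and method names are my guesses at the shape, not the repository’s exact code, and it assumes the num crate’s Complex type:

use num::Complex;

struct Plane {
    // pixel dimensions of the output image
    width: usize,
    height: usize,
    // the complex-plane corners this image covers
    upper_left: Complex<f64>,
    lower_right: Complex<f64>,
}

impl Plane {
    // The mapping is fixed for the life of the Plane, so render()
    // never recomputes it.
    fn pixel_to_point(&self, x: usize, y: usize) -> Complex<f64> {
        let w = self.lower_right.re - self.upper_left.re;
        let h = self.upper_left.im - self.lower_right.im;
        Complex {
            re: self.upper_left.re + x as f64 * w / self.width as f64,
            im: self.upper_left.im - y as f64 * h / self.height as f64,
        }
    }

    // Split off the horizontal band starting at row `top`, `rows` tall,
    // as its own independently renderable Plane.
    fn subplane(&self, top: usize, rows: usize) -> Plane {
        Plane {
            width: self.width,
            height: rows,
            upper_left: self.pixel_to_point(0, top),
            lower_right: self.pixel_to_point(self.width, top + rows),
        }
    }
}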

I also did the concurrent version of the algorithm. This involves slicing up the plane into subplanes; points in the Mandelbrot set are independent of their neighbors (unlike, say, a Conway Game of Life), so making it concurrent is simply a matter of slicing the plane into uniform subplanes, one per core, and giving each core a subplane to process.

At first, I worried about how this was going to work with my Plane object, but then I realized that the Plane itself had the capacity to provide its own slices, and the method subplane() does exactly that. Then it’s simply a matter of mapping each subplane to a band of pixels in the image buffer, creating a thread, and having the thread call render() on the subplane.

I believe this code is easier to read than what’s in the book. Yes, it relies on structs, which aren’t introduced for quite a few chapters, so it’s cheating, but it’s fun.

There are two things I’d like to do with this: one is to colorize it. My problem so far is that I don’t know how to read Rust type signatures well, so figuring out how to colorize an image using Rust’s most popular image processing library has defeated me so far.

On the other hand, generating a basic Buddhabrot turned out to be fairly easy.  That’s what you’re seeing in that illustration above.

A Buddhabrot says that for the iterating function z = z² + c, each resulting z represents another point inside the plane ℤ. Instead of counting the iteration at which a point leaves the plane ℤ, a Buddhabrot simply increments every pixel z passes through during a set number of iterations. The resulting images are ghostly and strange, and looked to be a lot of fun. So I did it, and it worked, and I’m happy with the result.

I’ve decided, for the sake of insanity, to work my way through the O’Reilly Programming Rust book, since I feel like my Rust has become, shall we say, rusty. And I’m doing it with my usual sense of attitude, that "There has to be a better way!"

First, I took this mess from the very first example (another greatest common divisor function… you’d think tutorial writers would figure out most of us would rather write something useful, ne):

use std::str::FromStr;  // needed for u64::from_str

let mut numbers = Vec::new();
for arg in std::env::args().skip(1) {
    numbers.push(u64::from_str(&arg)
                 .expect("error parsing argument"));
}

And said, "I know there are way to initialize and map values automatically in other languages. Rust should too." And I dug through the documentation and discovered, why yes, Rust does:

let numbers = Vec::from_iter(
    std::env::args().skip(1)
        .map(|x| u64::from_str(&x).expect("Error parsing argument")));

That’s pretty dense, but it does the job, and it uses the exact same allocations, too.

The next step was the actual reduction (I’d already written the GCD function in a library):

let mut d = numbers[0];
for m in &numbers[1..] {
    d = gcd(d, *m);
}

Could I one-liner that? Why yes:

    let d = numbers[1..].iter().fold(numbers[0], |acc, x| gcd(acc, *x));

I know there are people who hate this kind of programming, but I dunno, I find it much more readable. It says that d is a reduction via GCD of all the numbers in a container. It’s actually less confusing than the loop.

To me, at least.

I spent much of the first day of my sabbatical at Go Northwest, a conference sponsored by my previous employer, and one for which I already had tickets. It was somewhat informative, although I learned more about JSON Web Tokens and Macaroons (thank you, Tess Rinearson) than I did anything at all about Go, the programming language. Most of the Go-related stuff seemed to be fairly high level and well within my skillset. I did learn a bit about blockchain applications and still remain skeptical that there are any use cases for them that don’t involve trying to hide stuff from government agencies.

One thing that I did pay strong attention to was David Crenshaw, a guy who had done what I did: spent a year not working for anyone, only to start up a whole bunch of passive income opportunities. And one thing he said stuck with me: "Almost all of the solutions being sold today are distributed systems but, except for a small class of business problems, even smaller than you think, nobody needs them. Very few problems exceed the size of a single large instance." (An Amazon "large" is a two-core CPU with 8GB of RAM.) This has often been my experience, too; there’s nothing in the world that needs a fifty-instance fleet.

The only reason to really worry that much about this sort of thing is when you need geographic distribution for either performance or reliability reasons. But geographic distribution and content delivery networking is a different problem from what most distributed systems are trying to solve. You don’t even need that much observability if your system is fairly small.

I firmly believe Crenshaw’s right, and that the state-of-the-art in medium-sized deployments is being ignored while people chase the Kubernetes money. Google and Amazon have every right and need to do that; they have money and engineering time to burn. The rest of us don’t.

So, today I did a thing I’ve never done before. I quit.

In all my career as a software developer, I’ve never quit from a position like this. In college I quit a few jobs that weren’t software development, such as the warehouse job, the data entry position, and the pizza delivery service. I’ve quit a few small start-ups that weren’t paying me what I was worth, but then at the time nobody was getting paid what they were theoretically worth, and every single one of those start-ups was still incredibly valuable: they let me keep my resume alive, and they let me learn useful skills. But all the big job endings, from CompuServe, F5, and Isilon, had been either shutdowns or layoffs. CompuServe was just axed by some holding company owned by AOL. F5 and Isilon let me go during the massive layoffs of the Big Tech Bubble and the Great Recession, respectively.

I’ve never just… left. Certainly not when the opportunities at the company were pretty good. At Isilon I was definitely going stale, but the same can’t be said for F5 or CompuServe. At those, I’d been learning great things up until the very end. At F5 we’d had a great ANTLR project underway, and at CompuServe I’d been knee-deep into making contributions to the Python standard library and maintaining the original mod-logging library documentation for Apache.

Today, I quit because I’d lost my niche.

Today I left my job at Splunk for no other reason than that I wanted to. I’d been learning a fantastic array of new stuff. I had Kubernetes deployments working reliably, was deploying them on AWS, had transitioned away from Ansible onto something using Terraform and Ksonnet, was writing a Kubernetes-based microservice in Golang to gather billing information for tenanted systems, doing internal logging and using the Prometheus client libraries for observability (but actually scraping the data into Splunk instead, which was hella fun), and had a Grafana-to-Splunk dashboard transformer kinda sorta working. Codeship, Gitlab, Jenkins, Swagger, Docker, Make, Go, you name it, I’m doing it. And a lot of Python, too, as it remains the official language of Splunk Enterprise Engineering. Oh, and the front-end of our proxy is written in React, too, which they gave to me because “You’re a UI guy, right?”

I was a UI guy. In fact, that was why Splunk hired me in the first place. I’d literally written the book on writing SPAs (Single Page Applications) using Backbone, JQuery, and REST, after learning how to do that at a previous start-up. I had a niche as an expert single-page-application developer using Python middle tiers and SQL back-ends. I could, in a pinch, even do it in Ruby on Rails, but it would take a week, whereas if you gave me Django and Postgresql I would have it up and running in an hour.

I had an elevator pitch. You can still read it. I was someone who could do the whole stack, even on AWS, well enough to get your start-up off the ground. I did it five times: two failures, two lifestyle successes, and one runaway success that allowed me to survive the 2008 recession, paying for Omaha’s medicines and the kids’ schooling, without having to tighten my belt too much.

Looking at the list of skills I’ve got from Splunk, I can’t say that I have an elevator pitch anymore. Everything I’ve been doing for the past six months has been too fast, too new, too much all at once for me to develop a sense of expertise in anything— well, except for writing a Prometheus-instrumented microservice in Go. Maybe that’s not a bad niche, since the bulk of software development is just plumbing anyway these days, a paint-by-numbers process of input, map, filter, output. The problem is that anyone can do plumbing. I like to think I’m better at the really hard problem in that area of development than most people– but then I’m reminded that most people don’t care about readable code anymore. It’s all replaceable anyway.

Part of this coming year will be to figure out what my next elevator pitch will be, and if there’s a place in the industry where I want to pitch it. I was good at industrial websites, and I was good at server design, and I was good at UIs for embedded systems. The question is: what will I be good at in 2020? What will I need to be good at in 2020?

I guess I’m about to find out.

29Jun

Simple At The Bottom

Posted by Elf Sternberg as Uncategorized

There’s a quote by Rich Hickey, the creator of the Clojure Programming Language, floating around the Internet that goes like this:

Simplicity is hard work. But, there’s a huge payoff. The person who has a genuinely simpler system – a system made out of genuinely simple parts, is going to be able to effect the greatest change with the least work. He’s going to kick your ass. He’s gonna spend more time simplifying things up front and in the long haul he’s gonna wipe the plate with you because he’ll have that ability to change things when you’re struggling to push elephants around.

I’ve been thinking about this in the context of Elon Musk’s ventures and adventures, especially the gushing coverage in more recent books that talk about Musk’s dedication to "first principles," and the idea, which so far seems to be paying off for Musk, that people are doing things not because they’re correct, but because they’re familiar.

SpaceX is the obvious example. Musk ran the numbers, realized that the material cost of a launch vehicle is 4% of its cost, asked why launches are so damned expensive, and set out to prove that they shouldn’t be. He’s trying to do the same thing with tunneling machines, subways, electric cars, solar roofs, and batteries.

I’m a software developer who’s long felt more of a kinship for computer science than for computer engineering. And the longer I work in this business, the more I feel like there’s a first principles issue that is missing somewhere.

For example, in the early 2000s there was a lot of buzz about "object oriented databases." They were all the rage until someone tried to implement them at scale, and the processing cost of updating all those rows every time a change happened was enormous; the relationships between objects in an OODB constituted an incredibly expensive forest of directed acyclic graphs to maintain. And yet, at the same time, everyone was working with spreadsheets, and underneath the grid, a spreadsheet is just such a forest of directed acyclic graphs. The secret, it turned out, was to update only the parts you could see; calculated cell values that were out of sight and didn’t affect the view didn’t matter.

The technical term for updating only what output the user currently wants is laziness. OODBs work just fine so long as the results are lazy.
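As a sketch of the idea (the names are mine): a derived value that computes itself only when someone actually looks at it, the way an off-screen spreadsheet cell should.

use std::cell::OnceCell;

// A derived cell computes its value on first read and caches it;
// cells no one looks at cost nothing.
struct Derived<T> {
    compute: Box<dyn Fn() -> T>,
    cached: OnceCell<T>,
}

impl<T> Derived<T> {
    fn new(compute: impl Fn() -> T + 'static) -> Self {
        Derived { compute: Box::new(compute), cached: OnceCell::new() }
    }
    fn get(&self) -> &T {
        self.cached.get_or_init(|| (self.compute)())
    }
}

fn main() {
    // Nothing is computed at construction time...
    let total = Derived::new(|| (1..=1_000_000u64).sum::<u64>());
    // ...the work happens at the first read, like a visible cell.
    println!("{}", total.get());
}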

I have this nagging notion that at the user layer, almost everything is over-engineered to be easy rather than simple. That there’s a missing idea. That many of the features we see in applications: the DAGs of spreadsheets and garbage collectors, the page catalogs of databases, the piece tables and gap buffers of word processors, and so forth, would be significantly easier to understand if all the weirdness of it, the humanness of it, were boiled down to a couple of declarative tables that explained to the machine what the human thought these terms meant.

Because underneath it all, every programming language in the world is semantic sugar around memory allocation, assignment, loops, and conditions. And if your language has first-class functions, tail calls and pattern matching, you’ve replaced your loops and conditions with something smarter.

And Clojure isn’t it. Because Clojure isn’t simple at the bottom. Clojure is Java at the bottom. The Lisp Reader in Clojure is LispReader.java, and to me, that screams that there’s more work to be done.

I have a problem with the shiny. It’s the whole ADHD/Interictal thing interacting. There are so many things I want to learn and I haven’t got the time to learn all of them. Right now, I’m going back to a well I’ve gone to a number of times and dived deep into interpreters and… other things.

Current Learning Spree:

SICP (Again!)

Last week I made my way up to the end of Chapter 2 of The Structure and Interpretation of Computer Programs. My impression, after finishing Chapter 2, is that I now get why Haskell and Lisp are lumped together as "functional languages," but, as a writer, I can say that the theme and premise of the two languages are very different. I can start to see just how easy it would be to implement an object-oriented language in Lisp, how easy it would be to implement Hindley-Milner in classical McCarthy Lisp or its derivatives like Racket and CL, and also why it would be a mistake to do so.

The generic interfaces of classical Scheme seem like a lot of typing. The amount of typing one has to do, as well as the mystery types of classic Lisp’s unlabeled tuples, are both ergonomic hitches that a postmodern Lisp has to overcome, and I’m not sure how.

Build Systems à la Carte

I also read the first ten pages of Mokhov, Mitchell, and Peyton Jones’ paper Build Systems à la Carte, a lovely little paper about 30 pages long in which the authors try to find the abstraction in build systems. They do a pretty good job, all things considered, creating a common vocabulary for build systems that not only encompasses classical systems like xmkmf and make and even ninja, but somehow manages to encompass Microsoft Excel as well!

It occurred to me as I was reading it that if "the store" is the unification of the (local) repository and the filesystem, then version control systems are also build systems with a narrow task capability: the tasks function’s job is, for a given hash, to drive the filesystem to match that hash. There’s an abstraction layer here that’s backward looking rather than forward looking like MMP-J’s, but I can feel there’s a commonality here.

Parsing With First Class Derivatives

I’ve been ooh-shinied a lot this week. My Rust skills are getting remarkably rusty as I neglect them, but I want to get back into them. The paper Parsing With First Class Derivatives might be just the hook I need. The examples look OCaml-ish, but I think I can parse them enough to get a Rusty version working, if I’m crazy enough. What attracted me most to the paper was section 3.3, which seems to imply a principled way to tackle Landin’s "offside rule," which is important for whitespace-delimited languages like Python, YAML, and Coffeescript.

Don’t ask why I care. You won’t like the answer.
