I wonder if cloud computing is turning software development into a worthless, even dangerous, parasite for our economy and our world.

It’s common knowledge among economists that most of the financial sector just moves money around, taking a cut, without adding value. The various brokers and traders and "financial planners" are just salesmen who are there to move product.

The point of the financial sector is to produce a single service: to encourage investment (not speculation) in known-growth endeavors. That’s it. Their job is to take the "extra" money you or I might have, pool it, and, applying the knowledge they have the time and energy to dedicate to research, allocate it efficiently to those sectors of the economy that need to and can grow. Presuming the sector grows, your percentage of the pool plus the growth is returned to you, minus the brokerage fee.

Everything else the industry does is… absurd, really. It’s just moving money around from one account to another in a speculative game of musical chairs where the last institution out loses all its money, which is distributed to the winners. Little knowledge is applied, and no growth or efficiency are generated in the sectors of the economy that actually create goods and services.

And yet, that "everything else" accounts for two-thirds of the financial sector right now!

I was reading a "cloud computing" document and one of the bullet points read "Integrate, don’t Invent." The basic idea was that your company, whatever it is, is some start-up with a crazy idea. Uber for Vibrators, or Amazon for Cattle, or IMDB for Bottle Caps. You need a lot of software: web stuff, back-end, security, uptime, database, monitoring. And sure, you don’t want to have to write your own database engine or your own webserver.

But these days you don’t even have to care about which database or webserver. You don’t have to care about efficiency or performance. You just throw money at it. Instead of understanding your system, you outsource it: you pay someone outside your company to understand it for you.

I used to think that software development was a discipline of engineering: that is, a mechanistic, outcome-driven discipline that led to a usable result. I’ve become convinced that it’s also a discipline of anthropology: we write software for human beings, and it’s our responsibility as developers to help other developers understand what we’re doing, and to help our end users achieve the results they seek.

But cloud computing has become something else entirely: Applied daemonology. We no longer care how good a database is: if it takes SQL and spits out the rows, we wrap it in a Docker container, shove both the container and some dollars at Kubernetes, and hey… it seems to work. It passes the unit tests. It’s therefore good enough. You send it input, you get the output. If not, you tinker with a few config files and maybe it works this time. Or maybe you’ve just got too many customers, so instead of tuning your queries and your environment, you throw more dollars at it. Repeat the incantation until the spell is working as expected.

In the cloud universe, a developer’s mind ceases to be a deeply informed connection machine trying hard to efficiently use the resources at hand. Instead, it becomes a broadly and thinly informed reference manual gluing together business logic and sidecars and facades without really understanding what goes on inside them. "Malloc? LOL! Fuck it, just give Jeff more money!" seems to be the order of the day.

I suspect we’re going to regret it.


Engineering Notebook: More Learning Rust

Posted by Elf Sternberg as Uncategorized


In Learning Rust With Too Many Linked Lists, there’s a bit of code that looks like this, building a new element and appending it to the head of the list:

let new_node = Box::new(Node {
    elem: elem,
    next: mem::replace(&mut self.head, Link::Empty),
});

self.head = Link::More(new_node);

The problem here is that if we just wrote next: self.head, then self.head would be moved into next, not pointed to! self.head would then be invalid, and Rust doesn’t allow that. Using mem::replace() puts a valid value into self.head, and then on the next line we construct the new link and move it into self.head.

mem::replace() returns the value that was replaced (here, the old self.head), so you can continue to work with the old value in your new variable (and lifetime) while the original variable remains valid.
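A standalone sketch of that behavior (the slot and swap_in names here are mine, not the tutorial’s):

```rust
use std::mem;

// Swap a new value into a slot and hand the previous value back out.
fn swap_in(slot: &mut String, new: String) -> String {
    mem::replace(slot, new)
}

fn main() {
    let mut slot = String::from("old");
    let previous = swap_in(&mut slot, String::from("new"));
    assert_eq!(previous, "old"); // the old value is ours to work with
    assert_eq!(slot, "new");     // and the slot was never left invalid
}
```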

Rust is the only language I know of where:


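Something along these lines, for example, where f takes its argument by value (the function and its String argument are stand-ins of my own):

```rust
fn f(x: String) -> usize {
    x.len()
}

fn main() {
    let x = String::from("hello");
    let n = f(x); // the call moves `x` into `f`
    // let m = f(x); // error[E0382]: use of moved value: `x`
    assert_eq!(n, 5);
}
```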
… results in an error message, because the first call moves x into f, and it is no longer valid to use in the parent scope. I’m starting to really wrap my head around this, but it’s not easy at all.


In Rust, Option<> is a “it might be here, or it might not be. We’ll track that for you.” This “if it’s here, give it to me and leave None in its place so I can work with it” pattern is so common for Option<> that there’s a method, Option::take(), that does exactly the above, but much more cleanly.
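A quick sketch of take() in isolation, using nothing but the standard library:

```rust
fn main() {
    let mut maybe = Some(5);
    // take() moves the value out and leaves None in its place,
    // so `maybe` itself stays valid the whole time.
    let got = maybe.take();
    assert_eq!(got, Some(5));
    assert_eq!(maybe, None);
}
```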

The method above becomes:

let new_node = Box::new(Node {
    elem: elem,
    next: self.head.take(),
});

self.head = Some(new_node);



In the linked lists, the peek function returns a reference to the head value, if any. The short form of this is:

pub fn peek(&self) -> Option<&T> {
    self.head.as_ref().map(|node| { &node.elem })
}

The thing about peek() is that we don’t want to actually take the element; that is, we don’t want to move it, we want to borrow it. So we need to tell Rust that we don’t want to own the head node, just borrow it. But self.head is an owned value, not a reference. .as_ref() converts the Option<T> into an Option<&T>, a reference to the value inside: this tells the compiler we don’t want to own the value, only borrow it. It turns an own into a borrow. That’s all it does.
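A small illustration of that own-to-borrow conversion (the Option<String> here is my own toy example, not list code):

```rust
fn main() {
    let maybe_name: Option<String> = Some(String::from("elf"));
    // as_ref goes from &Option<String> to Option<&String>, so the
    // map below borrows the String instead of moving it.
    let length = maybe_name.as_ref().map(|name| name.len());
    assert_eq!(length, Some(3));
    // maybe_name was only borrowed; it's still fully usable here.
    assert_eq!(maybe_name, Some(String::from("elf")));
}
```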


In the “Linked lists with many possible heads” example, the Drop is interesting because it can get expensive: the reference counters need to be incremented during the check for dropping, then decremented immediately afterward since, no really, we want to drop the element and its cell.

The tutorial says that there’s a function, Rc::try_unwrap(), that exploits an interesting feature of Rust. Start with this:

list1 -> A --------v
list2 ------> B -> C -> D

If you drop list2, you want it to stop after dropping only B. You don’t want it proceeding to C and D, because list1 still refers to C. Rc::try_unwrap() will perform a move if and only if the reference count of an object is exactly one; that is, no other object or scope cares about the object. By repeatedly applying Rc::try_unwrap() to list items, the Drop function can halt when try_unwrap fails, because move semantics can no longer be applied.

Yes, D has only one reference. That’s okay: the Drop function halts upon hitting C, because try_unwrap fails there and you can break when it does; all the correct items are dropped, and everything else remains.
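A compilable sketch of that halting Drop, on a cut-down Rc list of my own devising (the tutorial’s real types are generic and differ in detail):

```rust
use std::rc::Rc;

struct Node {
    elem: i32,
    next: Option<Rc<Node>>,
}

struct List {
    head: Option<Rc<Node>>,
}

impl Drop for List {
    fn drop(&mut self) {
        let mut head = self.head.take();
        while let Some(rc_node) = head {
            if let Ok(mut node) = Rc::try_unwrap(rc_node) {
                // Sole owner: keep walking and dropping nodes.
                head = node.next.take();
            } else {
                // Someone else still points at this tail: stop here.
                break;
            }
        }
    }
}

// Build list1 -> A -> C and list2 -> B -> C, drop each list, and
// report how many strong references the shared tail C has left.
fn drop_scenario() -> (usize, usize) {
    let c = Rc::new(Node { elem: 3, next: None });
    let list1 = List {
        head: Some(Rc::new(Node { elem: 1, next: Some(c.clone()) })),
    };
    let list2 = List {
        head: Some(Rc::new(Node { elem: 2, next: Some(c.clone()) })),
    };
    drop(list2);
    let after_list2 = Rc::strong_count(&c); // B dropped; C survived
    drop(list1);
    (after_list2, Rc::strong_count(&c))     // A dropped; C survived
}

fn main() {
    assert_eq!(drop_scenario(), (2, 1));
}
```

After list2’s drop, C still has two strong references (our local handle and list1’s node); after list1’s drop, only the local handle remains.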

The tutorial says that Rc::try_unwrap() is only available in nightly. I’m using rustc Stable 1.23.0 and it works just fine.

As I’ve been reading Rust code and learning how to write it, I’ve become frustrated with one detail that’s been driving me a little crazy from time to time.

How do other developers know when some side-issue management is necessary? In my specific case, it’s about Drop.

In two different contexts, I’ve now seen highly specialized Drop implementations that, quite honestly, I don’t understand how the developer knew they were going to be necessary.

The first is in Beingessner’s Too Many Lists. It’s obvious, in hindsight, that a list with ten thousand entries would fit just fine on the heap, but the stack would get blown up by the recursive drop, so an iterative, hand-written drop was necessary. Which is great, but how did Beingessner know it was necessary without experiencing the blowup? I’ve written code for decades, much of it in C or C++, and I’ve never experienced a circumstance where I had to care about blowing the stack with a recursive unroll. Maybe I’ve never worked with tens of thousands of objects, but that seems unlikely given that I work in a big data field.

The second is in Jerome Froe’s LRU Cache. I was writing an LRU cache for Rust with some specific needs (basically, my ejection rule was based on an arbitrary scale provided by an object with a functor, whereas Jerome’s uses a standard volume-based rule). I struggled for days with my drop rule until I finally cheated and looked at his and… I don’t get it. I mean, I get what it is and how it works, but I’m still bothered by… how did he know how to do that? The mem::forget trick is nifty: those head and tail values point to cache items that are going to be dropped automatically, so we need to not drop them. But I still don’t know how Froe knew that.

I know it seems… I dunno, petty, I guess. But I always feel as if there’s a detail missing that I’m not getting, which is really odd because I’m considered a “thorough” developer according to my peers. But maybe only within a safe space, and Rust these days is a very new, and very awkward, territory for me to be crossing.

I’ve been reading about implementing advanced collections in Rust thanks to Alexis Beingessner’s wonderful, if perhaps excessively opinionated, lecture on “exceptional” collections1. Rust’s enforced memory correctness requires the programmer to make intelligent decisions about references and move semantics, stuff no one’s ever really had to think about in C or C++. I’m having a hard time internalizing this stuff, so here’s my stab at recording what I’ve learned. His first structure is the singly linked list.

In Lisp, a list is just a data structure, often called a CONS, with two slots, the CAR and the CDR. The CAR slot holds your data. The CDR slot holds a pointer to the next CONS, or NIL, which means “The end of the List.” A CONS that is only NIL is the empty list.

In Rust, you have to be concerned with both the heap and the stack. So, in Rust, you need a data structure to point to the head of the list, a sum-type to differentiate between the NIL case and the CONS case, and a CONS structure to hold the CAR and the CDR in the non-NIL case. The head can go on the stack; the list can be on the heap after that.

In fact, everything after the head has to go onto the heap, because otherwise your list would be a recursively sized structure, and Rust wouldn’t know how to size it. By using a Box, which is a handle to a heap object, you tell Rust the CDR is the size of a pointer, and now Rust knows how to size and allocate it.
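In code, those three pieces look something like this (a minimal sketch with i32 elements; the tutorial’s version is generic):

```rust
// The handle that points at the head; it lives on the stack.
pub struct List {
    head: Link,
}

// The sum type: NIL, or a boxed CONS cell on the heap.
enum Link {
    Empty,
    More(Box<Node>),
}

// The CONS cell itself: CAR and CDR.
struct Node {
    elem: i32,
    next: Link,
}

fn main() {
    // Because the recursion goes through a Box, Link is just
    // pointer-sized: Rust can compute its size.
    assert_eq!(std::mem::size_of::<Link>(), std::mem::size_of::<usize>());
    let list = List { head: Link::Empty };
    assert!(matches!(list.head, Link::Empty));
}
```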

When doing a cons, there’s a moment when you create the new node, attach the existing list to it, then return the new node as the head. Because we have a separate object that manages pointing to the head, that’s a two-step operation, and Rust objects: at some point, the head would exist in two places, and Rust doesn’t allow that. To overcome this, Beingessner uses the mem::replace function to make sure the head temporarily holds an acceptable NIL value while building the new node, so that the head object can be made to point to the new node afterward.

Popping them off the list (Beingessner is really building a stack here) has a similar problem, and a similar solution: so that we can work with the recently popped node, we set the head to NIL temporarily, then fix it once we’ve extracted the data. But because the nodes are boxed, we need to move the node to the stack (and yes, this apparently results in a bitwise copy!) so that we have a local copy we can work with.
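Put together, the push (cons) and pop with their mem::replace parking trick look something like this (again an i32 sketch of the technique, not the tutorial’s exact code):

```rust
use std::mem;

pub struct List {
    head: Link,
}

enum Link {
    Empty,
    More(Box<Node>),
}

struct Node {
    elem: i32,
    next: Link,
}

impl List {
    pub fn new() -> Self {
        List { head: Link::Empty }
    }

    // The cons: park Empty in head so the old head can move into
    // the new node, then point head at the new node.
    pub fn push(&mut self, elem: i32) {
        let old_head = mem::replace(&mut self.head, Link::Empty);
        self.head = Link::More(Box::new(Node { elem, next: old_head }));
    }

    // The pop: the same trick, then move the node out of its Box
    // (a bitwise copy to the stack) so we can take it apart.
    pub fn pop(&mut self) -> Option<i32> {
        match mem::replace(&mut self.head, Link::Empty) {
            Link::Empty => None,
            Link::More(boxed) => {
                let node = *boxed;
                self.head = node.next;
                Some(node.elem)
            }
        }
    }
}

fn main() {
    let mut list = List::new();
    list.push(1);
    list.push(2);
    assert_eq!(list.pop(), Some(2));
    assert_eq!(list.pop(), Some(1));
    assert_eq!(list.pop(), None);
}
```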

The last thing Beingessner shows us is the Drop trait. Normally, this wouldn’t be an issue; Rust is very good about cleaning up after itself, and this stack would clean up after itself when the head goes completely out of scope. But, Beingessner writes, that clean-up would be recursive. Since our list is on the heap2, it can grow to occupy all the memory available, which is fine. But the stack does have an upper limit, and a recursive function would blow the stack, especially since in this case there’s a follow-on operation that can’t be tail-call optimized: we need to drop the contents inside the Box first, then drop the Box. (Rust currently doesn’t do tail-call optimization anyway.) Beingessner teaches how to write this function manually and iteratively so that the default recursive version doesn’t blow you up.
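An iterative Drop in that spirit might look like this (a cut-down version of my own; Beingessner’s differs in detail):

```rust
use std::mem;

pub struct List {
    head: Link,
}

enum Link {
    Empty,
    More(Box<Node>),
}

struct Node {
    elem: i32,
    next: Link,
}

impl List {
    pub fn new() -> Self {
        List { head: Link::Empty }
    }

    pub fn push(&mut self, elem: i32) {
        let old_head = mem::replace(&mut self.head, Link::Empty);
        self.head = Link::More(Box::new(Node { elem, next: old_head }));
    }
}

impl Drop for List {
    fn drop(&mut self) {
        // Snip each node's tail off before the node itself goes away,
        // so dropping the Box never recurses down the rest of the list.
        let mut cur = mem::replace(&mut self.head, Link::Empty);
        while let Link::More(mut boxed) = cur {
            cur = mem::replace(&mut boxed.next, Link::Empty);
            // `boxed` is dropped here with an Empty tail.
        }
    }
}

fn main() {
    // Deep enough that the default recursive drop could blow the stack.
    let mut list = List::new();
    for i in 0..100_000 {
        list.push(i);
    }
    drop(list); // iterative: completes fine
}
```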

So those are the things I’ve learned so far: how to read debugging output from the borrower, how to handle the stack and the heap, how to manage temporary objects to keep the borrower happy in multi-step memory management code, how to unbox items to get at their insides, and how to re-write recursive Drop functions to make them iterative and efficient.

1 Beingessner apparently believes the only things you need aside from the traditional scalars are arrays and dictionaries; trees, graphs, tries, lists, stacks, queues, and priority heaps are all unusual and unneeded by most programmers. He’s not wrong about that: 99% of my professional life I’ve never needed anything more advanced than a dictionary. But right now, I do need these weird things.

2 I avoided writing “… since our stack is on the heap,” because that would just be confusing.


I’ve been trying to grok Matt Might’s Parsing With Derivatives for a while, and one of the better, clearer explanations of it that I found is Yehonathan Sharvit’s Clojure version. Since Clojure isn’t a language I know very well, if at all (although I do know some Scheme), I decided to try to re-implement it in Python.

The tangled edition of this document (i.e. the source code in runnable form) is available on my GitHub as a gist.

Literate Program

A note: this article was written with the Literate Programming toolkit Noweb. Where you see something that looks like this, it’s a placeholder for code described elsewhere in the document. Placeholders with an equal sign at the end of them indicate the place where that code is defined. The link (U->) indicates that the code you’re seeing is used later in the document, and (<-U) indicates it was used earlier but is being defined here.

What is a parser

A parser is something that recognizes languages. A language is a collection of zero or more symbols arranged in a regular way, where “regular” means “arranged in a definite pattern.” The simplest parser, the kind illustrated here, simply says whether a set of symbols corresponds to a language. All programming languages are formal languages, in that a parser decides whether your source code corresponds to the language.

A language is a collection of strings, formed from an alphabet. The simplest language is the empty language: it contains no strings at all. Then we have the null language, which consists of only the empty string. (Yeah, that can be confusing. Bear with me.) A language of one letter, say “a”, will only recognize a string of one letter, “a”.

You can have languages like char("a"), char("b"), char("c"), and so forth. To string them all together, you concatenate them, like so:

cat(char("a"), char("b"), char("c"))

This will recognize the string “abc”, and no other.

Parser derivatives

The hard part for me was understanding what Matt Might meant by “the derivative of a regular expression.” It was originally defined as “after the recognition of a single token, the set of all possible regular expressions that could follow.” Which is one of those phrases that intimidate the neophyte developer, like when, dealing with continuation passing, you read that “the continuation represents the entire rest of the program,” and you’re left wondering how the heck you’re even supposed to know what that is, much less construct it.

But let’s start with an example.

In a regular expression, we might write “(abc)*” to represent that we want to match zero or more repetitions of the string “abc”. So it would match the empty string, or “abc”, or “abcabcabc”, but would fail on “abd” or “abcabcabe”, because those don’t match. How would we represent this? Let’s say there’s a function that represents “repetition of zero or more”, and this function returns a function that matches things.

abc = cat(char("a"), char("b"), char("c"))
abcstar = rep(abc)

In the parsing lingo, those are both languages: one is a repetition of the other. Let’s say we call abcstar("abcabc"). On the first pass-through, it matches the “a”. What’s left? Well, we can’t continue with the repetition; we’re no longer matching its internal language, the concatenation. In parsing with derivatives, the concatenation will return a derivative of itself, a function that will match “bc”.

But repetition can’t work with that, because “bc” isn’t part of repetition’s, er, repertoire. We need to take the derivative of abc and concatenate it with the existing repetition. So after one iteration, our function looks like:

cat(derivative_of(abc), abcstar)

Which, in turn, is:

cat(cat(char("b"), char("c")), abcstar)

See how this is the derivative of our initial language with respect to having matched the first character? That’s all that parsing derivatives is about.

The trick is to generate these derivatives automatically. Let’s start with the basics of our language: an empty language.

<empty language>= (U->)
class Empty(Derives):
    def __init__(self, *args, **kwargs): pass
    def __str__(self): return "(Empty)"

A language of the empty string.

<empty string>= (U->)
class Eps(Derives):
    def __init__(self, *args, **kwargs): pass
    def __str__(self): return "(EPS)"

What are the basic atoms (in this case, characters or, given that this is the 21st century, runes) of our language? Here’s a class that describes a language of one character.

<char>= (U->)
class Char(Derives):
    def __init__(self, c, *args, **kwargs):
        self.c = c
    def __str__(self): return "(char '{}')".format(self.c)

As I said, languages are built out of smaller languages. It’s not good enough for a language to have “a” and “b”, we want it to have words like “function” and “return”. For that, we have concatenation. Since I’m kinda a lispy guy, I do it as a linked list:

<concatenation>= (U->)
class Cat(Derives):
    def __init__(self, l, r, *args, **kwargs):
        if len(args) > 0:
            self.l = l
            self.r = Cat(r, args[0], *args[1:])
        else:
            self.l = l
            self.r = r

    def __str__(self):
        return "(cat {} {})".format(self.l, self.r)

You’ll notice that we’re not dealing with characters here, but with languages, in this case the sub-languages of Char or… whatever. We can concatenate other concatenations, or unions, or repetition.

Union is similar, but the rules for applying are different:

<union>= (U->)
class Alt(Derives):
    def __init__(self, l, r, *args, **kwargs):
        if len(args) > 0:
            self.l = l
            self.r = Alt(r, args[0], *args[1:])
        else:
            self.l = l
            self.r = r

    def __str__(self):
        return "(alt {} {})".format(self.l, self.r)

And finally repetition:

<repetition>= (U->)
class Rep(Derives):
    def __init__(self, r, *args, **kwargs):
        self.r = r

    def __str__(self):
        return "(rep {})".format(self.r)

One of the things we have to be able to figure out is whether the string is exhausted; that is, having reached the end of the language, is the rest of the string to be analyzed the empty string? Also, note that we need to be able to return the empty string for repetition in the “zero instances left” case. And we need to know it because there are two alternatives for analyzing concatenation, depending on whether the first language in the concatenation is nullable. The term for this is nullability:

<nullability>= (U->)
def nullable(l):
    if isinstance(l, Empty) or isinstance(l, Char): return False
    if isinstance(l, Eps) or isinstance(l, Rep): return True
    if isinstance(l, Alt): return nullable(l.l) or nullable(l.r)
    if isinstance(l, Cat): return nullable(l.l) and nullable(l.r)
    raise "Not a language."

You may have noticed that all my classes derive from a class called Derives. Derives actually does all the work of traversing the regular expression and the string at the same time, building derivative regular expressions as it goes.

To make it easy to use our regular expression engine, I’m going to make all regular languages callable, like so:

<derive>= (U->)
class Derives(object):
    def __call__(self, w, l = None):
        if l is None:
            l = self

        if (w == ""):
            return nullable(l)

        return self.__call__(w[1:], self.derive(w[0], l))

    <derive definition>

And now we move on to derivation. The really tricky parts of the derivative are union, concatenation, and repetition. Let’s show the basic atoms first:

<derive definition>= (<-U)
    def derive(self, c, o = None):
        if o is None:
            o = self
        if isinstance(o, Empty): return Empty()
        if isinstance(o, Eps): return Empty()
        if isinstance(o, Char):
            if (o.c == c):
                return Eps()
            return Empty()
        <derive repetition>
        <derive union>
        <derive concatenation>

Here, we’re using Eps to indicate that this substring has matched, otherwise we’re returning Empty, which means we’ve run out without a match.

For the tricky part, let’s start with repetition, since we’ve covered that. I’ve already described the derivation for you: A concatenation of the derivative of the inner language (“what’s left to match from one repetition”) with the repetition of the inner language (which can be zero or more, so it’s okay if there’s only one instance to match). Here it is in all its glory. That inner ‘r’ is from the class defined above.

<derive repetition>= (<-U)
if isinstance(o, Rep):
    return Cat(o.r.derive(c), o)

Union is easy too: it’s just the derivative of its components with respect to the next character:

<derive union>= (<-U)
if isinstance(o, Alt):
    return Alt(o.l.derive(c), o.r.derive(c))

This one took me a while to understand. If we’re dealing with a concatenation and the first element of the concatenation can be nullified, we have to deal with the fact that both it and the next language can match: it’s a hidden union. Otherwise, we can deal with just the derivative of the current language and concatenate it with the rest of the concatenation’s language. This is one of those places where it’s painfully obvious how we’re constantly building and rebuilding the derivative regular expression as the string gets consumed.

<derive concatenation>= (<-U)
if isinstance(o, Cat):
    left_derivative = Cat(o.l.derive(c), o.r)
    if nullable(o.l):
        return Alt(left_derivative, o.r.derive(c))
    return left_derivative

This “constantly building and rebuilding” is, in fact, where the first optimization can happen: we can memoize the produced derivatives and put them into a lookup, so the next time we encounter this particular language, we don’t have to build the regular expression derivative, we just re-use the one we had last time.

To test, I’m going to follow from Sharvit’s example, create a parser for floating point numbers, and then test it against our use cases.

<testing>= (U->)
digit = Alt(*[Char(i) for i in "0123456789"])
floater = Cat(Alt(Eps(), Alt(Char("+"), Char("-"))),
              digit, Rep(digit), Char("."), digit, Rep(digit))

for i in [("-2.0", True), ("1", False), ("", False), ("+12.12", True), ("1.0", True)]:
    print(i[1], floater(i[0]))

It occurs to me that this is how you bootstrap a regular expression parser. You don’t have a parser for the parser yet, so you create one using the syntax above that describes your language for your regular expression, and then you can bootstrap upward to a full-on regular expression handler.

Assembling the file for Noweb looks like:


<empty language>

<empty string>

<char>

<concatenation>

<union>

<repetition>

<nullability>

<derive>

<testing>
On Monday, work announced that we would be adopting Google Go as a third official internal language. Most software houses have two official languages: a highly demanding systems language with strong performance and memory guarantees, and an easy-to-write scripting language with mediocre performance and few memory guarantees, so a third language was a bit unusual. Work’s language pair was C++ and Python, a highly conventional and conservative choice1.

My commute takes me past a bookstore, so on the walk home Monday I bought Introducing Go, read part of it on the train, and the next morning read the rest. It’s a tiny book. We’re at the end of a project at work so I had spare time. I chose a little toy project.

I’ve been trying to learn Rust, and I have a little server project called Monologued that I’ve been hacking on since early December. I haven’t gotten it working. I already had some major components working, and I decided to try to write Monologued in Go.

It took about six hours.

To be fair, I had work’s support at learning Go, and I don’t have it for learning Rust, so I could use office time to write the Go version, whereas my Rust time has to compete with my writing time, which is about 35 minutes twice a day, in a crowded transit train.

Here’s what I discovered:

Go is a good language for writing middling-performance servers. Its concurrency model is brilliant and easy to use, and the garbage collection makes it easy to forget about memory management. The error handling is old-school but clear and fairly concise. The compiler is clearly optimized for the cognitive support of modern, "smart" IDEs, but I was just using Emacs and still managed to write the server in a short period of time.


I hated it.

Have you ever watched one of those music videos where very talented musicians take children’s musical instruments and do something utterly and mind-bogglingly amazing with them? There is something quite astonishing about watching someone take all the skill they’ve spent thousands of hours mastering and wring lovely music out of toy pianos, plastic recorders, kindergarten bongos, and little wooden xylophones. And then you step back for a moment and realize: if they sound that amazing on toys, they must be even more amazing when playing instruments meant for adult hands.

That’s how I feel writing in Go. I didn’t learn anything writing in Go. It’s a language with guard rails, a language of Lego pieces. Its "batteries included" library is so complete it has all the parts needed to build web servers instantly. Go is such a simple language that if you’re coming from Python the syntax enforcement will drive you crazy for a while but then, like Python’s whitespace, you’ll get used to it, and you’ll like the lack of the global interpreter lock. If you like Python 3, Go’s Unicode management is lousy and you won’t enjoy it, but it’s workable. If you’re coming from C, the tuple return type and memory management will make you very happy, and otherwise you’ll be completely in your element. All the headaches of POSIX-style networking are hidden behind a pretty API.

No one will ever write great software in Go.

Look at the great programs of the last 25 years. Look at the LAMP stack: Linux, Apache, MySQL, PHP— all written in C. The alternatives: FreeBSD, Microsoft Windows; Nginx or LiteSpeed; Maria, Postgres, or Mongo; Perl, Python, Ruby— all written in C or C++. The MS Suite of Word, Excel, Access, Powerpoint, OneNote, Publisher— all written in C++. Almost all the great software of the last 25 years was written in C or C++.

This is not to say that I think C and C++ are perfect languages. They’re not; their closeness to the operating system and the hardware, the features that give them their immense power, also make them notoriously hard to master and exceptionally difficult to get right. D attempts to correct C++’s issues but hasn’t gotten much traction. But Go gets something right: We need something faster and smarter than Python without the nightmares of C and C++.

Rust takes a different tack: it learns from C and C++’s mistakes without hiding from your awareness the underlying issues of talking to an OS. Its rigid memory model takes a lot of effort to master, but it’s completely worth it. When you master Rust, you’ll have mastery over the bare metal, the machine itself. Go is the dull knife of programming.

I fully acknowledge that Go is a very good language for what it does. It meets a very important business need: it makes developers productive, and it lets them get product out the door in very short periods of time, and it’s stupid simple. It makes shareholders happy. It takes all the lessons learned since Pascal and reifies them into a usable language; it takes all the lessons learned since Lisp and ignores them completely.

Working in Go does not suck. The problem is that if you’re not working in something that sucks, you are not growing. You’re stuck. Rob Pike, the co-creator of Go (and Plan 9, and UTF-8, and a ton of other things) once said:

The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. – Rob Pike

And that’s my problem with Go. It doesn’t encourage growth. A developer working in Go and only Go will help his business’s bottom line, and the shareholders will be happy, but he will always be a mediocre programmer, which is why mediocre coders are so fond of Go. That quote above is on the wall in my office, and I try to live by it. You should too.

The alternative is that you will not be Rob Pike, a guy who’s still making changes in the industry even though he’s ten years older than I am. You’ll be writing the same thing for twenty years and then, when you’re forty, you’ll know nothing else… and then what will you do?

1In almost every job I’ve held, the pair has always been C or C++, and Python or Perl. The oldest job I ever took had just COBOL; the oddest pair was Java and Perl in 2010, but that can be explained by the fact that it was a publishing toolchain, and there’s a ton of Perl for managing LaTeX. Isilon’s choice of Python was a bit idiosyncratic; in 2000, choosing Python was considered radical, but 18 years later Python has completely overwhelmed Perl as the scripting language of choice, so the choice now seems prophetic. I’m pleased that I’m the one who made that choice.


Logbook: Working on Monologued this week.

Posted by Elf Sternberg as Uncategorized

Problem: I’ve been trying to get a simple, single-threaded server working under Rust, using the MIO (Metal I/O) abstraction over select/poll/epoll/kqueue, since that’s what most closely tracks with my experience as a server writer. Unfortunately, "my experience as a server writer" more or less came to a halt with the collapse of Sprynet back in 1998. The objective here is to write a framework supporting older line-based protocols such as NNTP, POP3, and Gopher.

Attempted Solution: I’ve tried to abstract the server out into several different layers: the server itself, the connection the server has to any given client, and the protocol. The connection marshals the input into lines, which are then placed in the protocol’s queue. The protocol generates responses, which are then placed in the outbound queue. The connection sets itself writeable if the protocol says there’s content to be delivered, and read-only if not.

Attempted Process: It looks more or less like any mid-grade developer’s server, with ridiculous craptons of guard statements, automatic panics, and at least one weird construct to "get around" the borrower. Inside the event loop I had to use an indexed get() into the array because the borrower didn’t like me using an iterator, claiming I’d borrowed a mutable value twice. I’m still waiting for the Rust "borrower insight" everyone claims is coming but I have yet to see.

Result: Well, it works. The only protocol generated so far is an embedded line-based Echo protocol. It’s still ugly. The number of additional libraries dragged in (net2, bytes, log, env_logger) balloon the server to almost 12MB in size; the equivalent C program, without all the niceties, comes out to about 18KB. Something is wrong here, but I’m not sure what.

Future work: Clean this thing the hell up. It’s neither functional nor SOLID, which means it’s just a heck of a mess. It’s a manageable mess, I suppose, but still a mess. Investigate if the executable size is the result of static fat binaries, to which I don’t object too strongly. Unembed the protocol.

New work: autorotate. As I mentioned earlier, I managed to score a brand new Microsoft Surface Pro 3 for $230 (not including the additional cost of a new keyboard, stylus, and power supply… sigh) from a woman who much preferred her MacBook and was moving and had to get rid of everything she didn’t want to carry. I even checked with Microsoft to see if it was registered as stolen, and bought it on the spot.

If you read my earlier post about getting it working, you’ll see that I liked Ayko Poel’s autorotate script. And it’s true, I did… until I bought a stylus and tried to draw on the thing in portrait mode. Poel’s script broke. There’s a bit of deep, almost occult esoterica about mapping screen touches to drawing coordinates, especially when rotating the screen, and Poel’s script failed at the intersection of advanced touch devices (the pen and the eraser) and coordinate transformation.

So I fixed it.

Along the way, I removed the huge horkin’ if statement and replaced it with a transformation matrix which encapsulates all the rules quite nicely. It actually doesn’t change the line count (sadly): Detecting if you’re in landscape or portrait is two expressions, and which landscape or portrait is two more expressions each, followed by a four-line table that maps these rules to transformations and labels. But I think it’s a hell of a lot more maintainable. I also fixed the palm rejection code (which is now a heck of a lot shorter) to understand that the pen and eraser both require palm rejection.
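
For reference, the table-driven idea looks something like this (sketched in Rust here, though the script itself is Python). These are the standard X11 "Coordinate Transformation Matrix" values handed to xinput for each rotation:

```rust
// Map a screen orientation to the 3x3 transformation matrix (row-major)
// that xinput expects, so touch/pen coordinates track the rotated screen.
fn matrix_for(orientation: &str) -> Option<[f32; 9]> {
    match orientation {
        "normal"   => Some([ 1.0,  0.0, 0.0,   0.0,  1.0, 0.0,   0.0, 0.0, 1.0]),
        "left"     => Some([ 0.0, -1.0, 1.0,   1.0,  0.0, 0.0,   0.0, 0.0, 1.0]),
        "right"    => Some([ 0.0,  1.0, 0.0,  -1.0,  0.0, 1.0,   0.0, 0.0, 1.0]),
        "inverted" => Some([-1.0,  0.0, 1.0,   0.0, -1.0, 1.0,   0.0, 0.0, 1.0]),
        _ => None, // unknown orientation: leave the device alone
    }
}

fn main() {
    // e.g. the matrix to apply when the screen is rotated left
    println!("{:?}", matrix_for("left"));
}
```

One table, four rows, and the big if-ladder disappears; adding a new rule means adding a row, not another branch.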

I changed device recognition to be a heck of a lot more robust, and I used Python’s namedtuple to make this all much more readable. I used a lot of regular expressions— but then, I’ve been good at regular expressions since 1993— and that lets me recognize the devices, the drivers, and everything else without any fiddling or meddling.


What I’m Doing Right Now: Monologued.

Posted by Elf Sternberg as Uncategorized

For the past couple of weeks, I’ve been learning a lot of Rust. As is my wont, my usual way of learning a language is to decide to write something in it, and see what comes from doing it. A while ago, I learned that John Carmack still regularly updates his .plan file, which is a text file that was once available through the Remote User Information Program documented in RFC 1288, popularly known as the finger protocol. The project is named Monologued.

I decided I wanted to write a finger server. The version I’ve decided to write is very stripped down. It doesn’t support user lists by default, it doesn’t support host-hopping by default, and it doesn’t even support user presence at all, which was the original intent of the protocol. What it does support is returning the user’s .plan file. It’s fairly paranoid, honoring .noplan over .plan (if the former was present, finger wouldn’t even acknowledge that the requested user existed), taking care not to leak usernames, and prioritizing the /etc/finger.conf exclude list over the include list.
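
Condensed into a single predicate, that policy might look like this; the function and flag names are mine, hypothetical, not the real configuration handling:

```rust
// `excluded`/`included` would come from /etc/finger.conf; `has_noplan`
// from the presence of a .noplan file in the user's home directory.
fn user_visible(excluded: bool, included: bool, has_noplan: bool) -> bool {
    // The exclude list beats the include list.
    if excluded {
        return false;
    }
    // A .noplan file makes the server deny the user exists at all,
    // so no username ever leaks.
    included && !has_noplan
}

fn main() {
    assert!(!user_visible(true, true, false));  // exclude wins over include
    assert!(!user_visible(false, true, true));  // .noplan hides the user
    assert!(user_visible(false, true, false));  // visible, has a .plan to serve
}
```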

I have learned a lot about the finger protocol and its threadbare implementation during my research. For example, there’s a Unix program, who, that will tell you who’s currently logged in to a given host. The fingerd server doesn’t even run on its own; it’s launched by inetd, a helper program for small network utilities, not much in use these days, which hooks finger up to the internet via stdin/stdout. fingerd in turn runs the local finger program, which doesn’t even do any work; instead, it figures out who you asked for, runs who to spew the user’s identity and presence, and then runs cat on the .plan file if it exists. That seems like a bit of a security nightmare.
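
The inetd model is worth a sketch: by the time the service runs, the accepted socket is already wired to its stdin and stdout, so "networking" is just reading a request line and writing the reply. A minimal illustration, with invented names:

```rust
use std::io::{self, BufRead, Write};

// A toy inetd-style responder. A real fingerd would shell out to `who`
// and `cat` here; this just formats a placeholder reply.
fn respond(request: &str) -> String {
    let user = request.trim();
    format!("Plan for {}:\r\n", user)
}

fn main() {
    // inetd has already connected the socket to stdin/stdout for us.
    let stdin = io::stdin();
    let mut line = String::new();
    stdin.lock().read_line(&mut line).unwrap();
    io::stdout().write_all(respond(&line).as_bytes()).unwrap();
}
```

This is exactly why the design is a security nightmare: the service inherits the network with no framing, no limits, and no authentication, and historically piped attacker-controlled input straight into other programs.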

My objective is to learn Rust, and to learn MIO (Metal I/O, the lowest-level Rust networking library that isn’t just wrapping the C network library yourself). Everything will be contained in a single executable. The progression I’ve planned is straightforward:

  1. Write a simple echo server, as shown in the MIO tutorials.
  2. Separate the Connection object from the Server object.
  3. Write a parser so that instead of returning an echo, we return the command found.
  4. Implement the command found.
  5. Secure the command found with the rules described above.
  6. Implement the rules as a configuration file.
  7. Implement the rules as command-line overrides.
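
As an illustration of step 3, a parser for RFC 1288 request lines might start out something like this sketch (the enum and its names are my guesses, not Monologued’s):

```rust
// RFC 1288 requests are a single CRLF-terminated line: empty for "who is
// logged in", a username, or "/W username" for the verbose form.
#[derive(Debug, PartialEq)]
enum Request {
    ListUsers,       // bare CRLF
    User(String),    // "{username}"
    Verbose(String), // "/W {username}"
}

fn parse(line: &str) -> Request {
    let line = line.trim_end_matches(|c| c == '\r' || c == '\n');
    if line.is_empty() {
        Request::ListUsers
    } else if let Some(rest) = line.strip_prefix("/W") {
        Request::Verbose(rest.trim().to_string())
    } else {
        Request::User(line.trim().to_string())
    }
}

fn main() {
    assert_eq!(parse("\r\n"), Request::ListUsers);
    assert_eq!(parse("elf\r\n"), Request::User("elf".into()));
    assert_eq!(parse("/W elf\r\n"), Request::Verbose("elf".into()));
}
```

(The server described above would refuse ListUsers anyway, since it doesn’t do presence, but the parser still has to recognize the form to reject it politely.)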

Stretch goals:

  1. Implement a cache with arbitrary Sized<>-based ejection rules for smaller .plan files.
  2. Implement an fstat-based layer to eject .plan files that have expired.
  3. Implement an Inotify-based layer for MIO to eject .plan files that have expired.

Super Stretch goals:

  1. Implement opportunistic encryption (STARTTLS) for the finger protocol.
  2. Modify the configuration so opportunistic encryption can be required.
  3. Write a client to demonstrate opportunistic encryption of RUIP.
  4. Document and submit an RFC for "Using Transport Layer Security (TLS) with the Remote User Identity Protocol (RUIP / Finger)."

Right now I’m working on 4, but I may have to backtrack, as I think I became overly entangled back at step 2, and may need to think harder about separating the read/write phases.

An acquaintance of mine claims that the last one is an “epic troll,” but I don’t know; not all of those old protocols are worthless.

One thing I forgot: in the book Ansible: Up and Running, in chapter 3, there’s a Vagrantfile describing three VMs with consecutive ports exposed and mapped to the host OS. Each VM is defined individually, by hand. I had read that a Vagrantfile is just Ruby, so I thought, “Screw that. I’m gonna use a loop.”

It turns out you can’t just use a loop.  Because the VMs are booted asynchronously, they need closures.  You can replace the entirety of the Vagrantfile in chapter three with this:

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false

  (1..3).each do |host|
    config.vm.define "vagrant#{host}" do |vagrant|
      vagrant.vm.box = "maier/alpine-3.4-x86_64"
      vagrant.vm.network "forwarded_port", guest: 79 + host, host: 8079 + host
      vagrant.vm.network "forwarded_port", guest: 442 + host, host: 8442 + host
      vagrant.vm.synced_folder ".", "/vagrant", disabled: true
    end
  end
end

The do |host| syntax creates a closure, ensuring that each VM’s definition runs in its own scope with its own value of host. With a for loop, host would be 3 by the time Vagrant picked up the entries, and all three VMs would try to allocate the same ports, resulting in a conflict and a crash. Again, note that I’m using Alpine here. If you’re using Ubuntu, as the book recommends, make sure you change the box name.
