I have finally figured out how to use Rust's image library, and it's not obvious, especially for internally generated images. The library does a very good job of hiding complicated implementation details when you load a file off the disk, automatically figuring out the file format and providing some baseline utilities. Zbigniew Siciarz has a pretty good writeup of the basic implementation, but for the Mandelbrot renderer I wanted to get a little more specific.

So here's the basics. Images are made up of Pixels, and every Pixel is an array of one to four numbers (of any machine size; the common ones are u8 and u32, but you can create Pixels of type f64 if you like). A single-item Pixel is greyscale, a 3-item Pixel is RGB, and a 4-item Pixel is RGBA (the fourth channel being alpha).
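For instance, with the crate's Luma, Rgb, and Rgba types (a minimal sketch; the specific values are arbitrary):

```rust
use image::{Luma, Rgb, Rgba};

fn main() {
    // A single-channel (greyscale) pixel: one u8 value.
    let grey: Luma<u8> = Luma([128]);

    // A three-channel RGB pixel: red, green, blue.
    let orange: Rgb<u8> = Rgb([255, 165, 0]);

    // A four-channel RGBA pixel: the fourth value is alpha.
    let translucent: Rgba<u8> = Rgba([255, 165, 0, 127]);

    // The subpixel type is generic, so wider types work too.
    let precise: Rgb<f64> = Rgb([0.25, 0.5, 0.75]);

    println!("{:?} {:?} {:?} {:?}", grey, orange, translucent, precise);
}
```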

Pixels go into implementations of GenericImage, the most common of which is ImageBuffer. ImageBuffer controls access to the underlying representation, handles bounds checking, and provides limited blending capabilities.
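A small sketch of that, assuming an RGB buffer backed by a Vec<u8> (the dimensions and colors here are made up):

```rust
use image::{ImageBuffer, Rgb};

fn main() {
    // An RGB image backed by a Vec<u8>, initialized to all zeros (black).
    let mut img: ImageBuffer<Rgb<u8>, Vec<u8>> = ImageBuffer::new(320, 240);

    // put_pixel/get_pixel take (x, y) as u32 and panic if out of bounds.
    img.put_pixel(10, 20, Rgb([255, 0, 0]));
    let p = img.get_pixel(10, 20);
    assert_eq!(p.0, [255, 0, 0]);

    // enumerate_pixels_mut walks every pixel along with its coordinates.
    for (x, y, pixel) in img.enumerate_pixels_mut() {
        *pixel = Rgb([(x % 256) as u8, (y % 256) as u8, 0]);
    }
}
```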

To colorize my image, I needed to create a colormap and then map the source pixels through it into a new image. And then I learned that the PNM handler doesn't deal in Pixels at all.
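Something like this hypothetical lookup table, mapping an escape-iteration count to an RGB pixel (a simple blue-to-white ramp, just to illustrate the idea):

```rust
use image::Rgb;

/// Hypothetical colormap: a lookup table from escape-iteration count
/// to an RGB color, built once and indexed per pixel.
fn build_colormap(max_iter: usize) -> Vec<Rgb<u8>> {
    (0..=max_iter)
        .map(|i| {
            if i == max_iter {
                // Points that never escaped stay black.
                Rgb([0, 0, 0])
            } else {
                // Everything else ramps from blue toward white.
                let t = i as f64 / max_iter as f64;
                Rgb([(t * 255.0) as u8, (t * 255.0) as u8, 255])
            }
        })
        .collect()
}
```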

PNM is an ancient image format, which may explain why I enjoy it. It's simply a flat run of bytes, RGBRGBRGBRGB..., with a header giving the image's width and height. The PNM handler in image-rs can only deal with flattened arrays laid out exactly that way.
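Writing a tiny one by hand makes the layout obvious (a 2×2 binary PPM: header, then raw bytes):

```rust
use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // A 2x2 image: red, green, blue, white, as a flat RGBRGB... byte run.
    let pixels: [u8; 12] = [
        255, 0, 0,      0, 255, 0,
        0, 0, 255,      255, 255, 255,
    ];

    let mut f = File::create("tiny.ppm")?;
    // The binary PPM ("P6") header: magic number, width, height, max value.
    write!(f, "P6\n2 2\n255\n")?;
    f.write_all(&pixels)?;
    Ok(())
}
```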

So for the Mandelbrot colorizer, the solution was to create a new array three times as long as the underlying Vec of my original image, walk through the pixels, map each one to a color, and push the color's components onto the new array. Which is annoying as heck, but it works.
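Roughly like this sketch (the all-zero greyscale buffer stands in for the real escape-count image, the colormap is a stand-in blue ramp, and the PPM is written by hand rather than through the crate's encoder):

```rust
use image::{GrayImage, Luma, Rgb};
use std::fs::File;
use std::io::Write;

/// Expand a greyscale image into a flat RGB byte array, three times as
/// long, by looking each value up in a colormap.
fn colorize(counts: &GrayImage, colormap: &[Rgb<u8>]) -> Vec<u8> {
    let (width, height) = counts.dimensions();
    let mut flat = Vec::with_capacity((width as usize) * (height as usize) * 3);
    for pixel in counts.pixels() {
        let Luma([v]) = *pixel;
        let Rgb(rgb) = colormap[v as usize];
        flat.extend_from_slice(&rgb);
    }
    flat
}

fn main() -> std::io::Result<()> {
    // Stand-ins: an all-zero escape-count image and a simple blue ramp.
    let counts = GrayImage::new(800, 600);
    let colormap: Vec<Rgb<u8>> = (0..256).map(|i| Rgb([i as u8, i as u8, 255])).collect();

    let flat = colorize(&counts, &colormap);

    // Write the flat array out as a binary PPM: header, then raw bytes.
    let mut f = File::create("mandelbrot.ppm")?;
    write!(f, "P6\n800 600\n255\n")?;
    f.write_all(&flat)?;
    Ok(())
}
```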

I'm sure there's a better way to do this. And the constant remapping from u32 (which the image API uses for coordinates and dimensions) to usize (which is what slices are indexed by) and back got tiresome, although the compiler was helpful in guiding me through the process.
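Just indexing a flat buffer by coordinate, for instance, means something like this (a hypothetical helper):

```rust
// Image coordinates and dimensions are u32, but Vec and slice indices are
// usize, so every flat-buffer lookup involves a round of casts.
fn offset(x: u32, y: u32, width: u32) -> usize {
    (y as usize) * (width as usize) + (x as usize)
}
```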