As a teenager trapped in the American suburbs, I spent hundreds of hours exploring computers and the Internet: trying to run Linux on my thrift-store PC, learning to write shell scripts, and customizing Emacs hotkeys. I found all this captivating because it was, in a way, the first time I had a space of my own — a little world I could explore and change, without needing to ask for permission or beg for a car ride.
Unlike grade school classes, team sports, and summer camps — structurally disempowering environments that sorted me only by age into limited, designed-for-kids activities — on the computer I could use the same tools as real adult people doing real, serious work!
Unfortunately, although I was learning to wield these powerful tools, I lacked the taste, ambition, or worldly experience to apply them to substantial problems. Instead, I chased computing trivialities: alternative Linux distros, hotkey window managers, optimized keyboard layouts, note-taking systems, opinionated build tools, and LaTeX. I wrote everything from a ZFS-inspired deduplicating filesystem to a ZeroMQ-based version of djb’s redo build system (for long-startup-time languages like Ruby and Clojure).
However, by my mid-20s I realized I’d spent a decade trapped in Strange Loop Groundhog Day, and decided to shift away from self-referential computing to explore other fields.
I stopped paying attention to the Hacker News language/framework of the month and started experimenting with electrical engineering, industrial design, CAD, machining, etc.
There was still plenty of computing involved, but it was mostly confined to what I already knew well: Clojure/ClojureScript, the web platform, notetaking via plaintext files, and how to write a ./deploy.sh shell script.
This decision seems to have worked out well so far: I’ve missed multiple waves of frontend fashion churn (Babel, webpack, esbuild, and something about…hooks?) and don’t seem to have suffered for ignoring the past decade of cloud infrastructure (the newest AWS service I know how to use is Route 53).
While my frozen-in-2015 workflow continues to serve me well, it’s not particularly accessible for collaborators. Sure, dedicated fellow programmers may have enough cultural overlap and experience to install dependencies and run tests/linters, but what about electrical engineers, biologists, and designers?
Teaching these folks basic “tools of the trade” like Git and the terminal may be necessary, but it doesn’t feel sufficient — individual projects always seem to accrete complexity.
What starts as a single script grows to include some dependencies (and thus a package manager), then a small portion is rewritten in a faster language (just remember to compile X before running Y), then someone adds a documentation generator, website, etc., etc.
The only people capable of fighting this complexity are, thanks to the curse of knowledge, those least likely to recognize it.
So, despite my aversion to computing meta, I’ve been exploring how to highlight and tame computing complexity to foster more effective collaboration.
In particular, I’ve been wondering how to set up a code repository with embedded processes sufficient to allow, for example, a designer to fix a typo and re-deploy the website; a control engineer to adjust PID settings and reflash a test board’s firmware; or myself, revisiting the project 5 years later on a new computer, to have a fighting chance of actually running it.
I’m cautiously optimistic about containers, which (for my typical project needs) seem to be in a sweet spot on the Pareto frontier: more reproducible than $PATH and a prayer.
One tool I’m particularly excited about is Toast, which makes it straightforward to run commands within containers and cache intermediate steps. (I may be getting a bit too excited, as I spent my Saturday morning shaving a few hundred milliseconds off its execution time.)
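To make that concrete, here’s a rough sketch of the kind of toast.yml I have in mind. The task names, commands, aliases, and base image are placeholders (a Clojure-ish project, since that’s what I know), and I’m going from memory on the exact schema, so check the Toast README before copying any of this:

```yaml
# toast.yml (sketch; task names, commands, and image are hypothetical)
image: clojure:temurin-17-tools-deps   # any base image with your toolchain
tasks:
  deps:
    input_paths:
      - deps.edn
    command: clojure -P            # pre-download dependencies; cached by Toast
  lint:
    dependencies:
      - deps
    input_paths:
      - src
    command: clojure -M:lint       # e.g., clj-kondo behind a user-defined alias
  test:
    dependencies:
      - deps
    input_paths:
      - src
      - test
    command: clojure -M:test       # user-defined test alias
  deploy:
    dependencies:
      - test
    input_paths:
      - deploy.sh
    cache: false                   # side effects shouldn't be cached
    command: ./deploy.sh
```

The hope is that a collaborator only needs Docker and Toast installed; everything else lives in the repository, and cached intermediate tasks mean unchanged steps don’t rerun.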
Conceptually, I’m thinking about the workflow as a “local continuous integration server”, as much of the value of CI is notifying you when you’ve forgotten to check in a file, lint, or run the tests.
But there’s even more value in running such checks locally.
Not just from test and reproducibility failures being discovered faster (no latency from waiting for CI to run new commits), but from eliminating the possibility of failure entirely: if linters and formatters run automatically via Git pre-commit hooks, there’s no way for contributors to forget to run them. (Thus sparing everyone the clutter and eternal embarrassment of “oops” commits in version control history.)
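As a sketch of what that might look like (reusing the hypothetical Toast tasks from above; a real repository would probably install this hook via a setup script rather than asking contributors to create it by hand):

```sh
#!/bin/sh
# .git/hooks/pre-commit  (make it executable: chmod +x .git/hooks/pre-commit)
# Abort the commit if any check fails.
set -e

# Run the (hypothetical) lint and test tasks inside containers via Toast, so
# every contributor runs the same checks with the same toolchain before committing.
toast lint test
```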
I’m sketching out both the scope/philosophy and technical implementation of such a workflow as the tidy codebase starter kit, and I’d love to hear what you think.
Is it futile or misguided?
Do you know of an already-baked project / blog post / process that I should check out?
(See in particular the TODOs at the bottom of the readme.)
Until next time, have a great day!
Doomberg writes about energy with a refreshingly numerate, aware-of-physical-reality foundation. I enjoyed their podcast discussions on crude oil and natural gas, and on how they deliberately started their Substack newsletter.
“Pijul is the first distributed version control system to be based on a sound mathematical theory of changes.”
Building a Lego-powered Submarine 4.0 - automatic depth control
“This is going to sound wild to anyone who lives in the US, but for any two-story rowhouse in Tokyo, the owner can by right operate a bar, a restaurant, a boutique, a small workshop on the ground floor — even in the most residential zoned sections of the city. That means you have an incredible supply of potential microspaces.”