Why I rewrote my Rust keyboard firmware in Zig: consistency, mastery, and fun
Published: 2021 March 7

I’ve spent the last year building keyboards, which has included writing firmware for a variety of custom circuit boards.
I initially wrote this firmware in Rust, but despite years of experience with that language I still struggled quite a bit. I eventually got my keyboards working, but it took an embarrassingly long time and wasn’t fun.
After repeated suggestions from my much more Rust-and-computing-experienced friend Jamie Brandon, I rewrote the firmware in Zig, which turned out swimmingly.
I found this quite surprising, given that I’d never seen Zig before and it’s a pre-1.0 language written by a fellow PDX hipster with basically just a single page of documentation.
The experience went so well, in fact, that I now feel just as likely to turn to Zig (a language I’ve used for a dozen hours) as to Rust (which I’ve used for at least a thousand hours).
This, of course, reflects as much about me and my interests as it does about either of these languages. So I’ll have to explain what I want from a systems programming language in the first place.
Also, to explain why I struggled with Rust I’ll have to show a lot of complex code that I’m obviously unhappy about. My intent here is not to gripe about Rust, but to establish my (lack of) credibility: It’s so you can judge for yourself whether I’m using Rust’s features in a reasonable way or if I’ve totally lost the plot.
Finally, while it risks falling into the dreadfully boring “language X is better than Y” blog trope, I feel that it’d be more helpful to some readers if I explicitly compare Rust and Zig, rather than write a wholly positive “Zig’s great!” article. (After all, I’d steadily ignored six months of Jamie gushing about Zig because, “that’s great buddy, but I already know Rust and I just want to get my keyboard done, okay?”)
What I want from a systems language
I was educated as a physicist and learned programming so I could make data visualizations. My first languages were PostScript and Ruby (dynamic, interpreted languages) and I later moved to JavaScript so I could draw on the web. That led me to Clojure (using ClojureScript to draw on the web), where I’ve spent much of my career.
In 2017 I decided to learn a systems language. Partly this was intellectual curiosity — I wanted to become more familiar with concepts like the stack, heap, pointers, and static types which had remained mucky to me as a web developer. But mostly it was because I wanted the capabilities that systems languages promised:
To write code that was fast; that could take advantage of how computers actually worked and run as fast as the hardware allowed.
To write applications that could run in minimal environments like microcontrollers or web assembly where it just isn’t feasible (in time or space) to carry along a garbage collector, language runtime, etc.
My interest was not (and still isn’t) in operating systems, programming language design, or safety (with respect to memory, formal verifiability, modeling as types, etc.).
I just wanted to blink the little squares on the screen on and off very quickly.
Based on its growing popularity in the open source community and tons of beginners-to-systems-programming documentation, I picked up Rust around version 1.18.
Since then, Rust has undoubtedly helped me achieve those capabilities I was after: I was able to compile it to WASM for a layout engine, build and sell a fast desktop search app (Rust shoved into Electron), and compile Rust to an stm32g4 microcontroller to drive a track saw robot (I even found a typo in the register definitions; the full “hard-mode” embedded debugging experience!).
Despite all this, I still don’t feel comfortable with Rust. It feels fractally complex — seemingly every time I use Rust on a new project, I run into some issue that forces me to confront a new corner of the language/ecosystem. Developing my keyboard firmware was no exception: I ran into two problems, and each required learning a completely new language feature.
These problems aren’t really specific to embedded, but they’re representative of the sorts of challenges I’ve run into using Rust over the past three years.
If you want the gory embedded details, or to understand why I’m writing my own firmware using newfangled languages at all, see my notes on building keyboards.
Conditional compilation
The first challenge I ran into with Rust was getting my firmware to run on hardware varying from 4-button dev-kit PCBs to the left/right halves of a wireless split to a single Atreus:
Varying the features of firmware at compile-time is known as “conditional compilation”. (It needs to be done at compile-time rather than run-time because microcontrollers have limited program space, roughly 10–100kB in my case.)
Rust’s solution to this problem is “features”, which are defined in `Cargo.toml`:
[dependencies]
cortex-m = "0.6"
nrf52840-hal = { version = "0.11", optional = true, default-features = false }
nrf52833-hal = { version = "0.11", optional = true, default-features = false }
arraydeque = { version = "0.4", default-features = false }
heapless = "0.5"
[features]
keytron = ["nrf52833"]
keytron-dk = ["nrf52833"]
splitapple = ["nrf52840"]
splitapple-left = ["splitapple"]
splitapple-right = ["splitapple"]
# specify a default here so that rust-analyzer can build the project; when building use --no-default-features to turn this off
default = ["keytron"]
nrf52840 = ["nrf52840-hal"]
nrf52833 = ["nrf52833-hal"]
For example, the `keytron` feature is enabled for a specific keyboard hardware design. That hardware depends on the `nrf52833` feature (representing a kind of microcontroller), which depends on the `nrf52833-hal` crate (actual code which maps that microcontroller’s peripheral memory addresses to Rust types).
My Rust code can then use attribute annotations to conditionally enable stuff. E.g., a namespace can import the microcontroller-specific crate:
#[cfg(feature = "nrf52833")]
pub use nrf52833_hal::pac as hw;
#[cfg(feature = "nrf52840")]
pub use nrf52840_hal::pac as hw;
or call the appropriate key scanning routine:
fn read_keys() -> Packet {
let device = unsafe { hw::Peripherals::steal() };
#[cfg(any(feature = "keytron", feature = "keytron-dk"))]
let u = {
let p0 = device.P0.in_.read().bits();
let p1 = device.P1.in_.read().bits();
//invert because keys are active low
gpio::P0::pack(!p0) | gpio::P1::pack(!p1)
};
#[cfg(feature = "splitapple")]
let u = gpio::splitapple::read_keys();
Packet(u)
}
Getting this conditional compilation working required learning a lot of stuff:
- the attribute annotation conditional mini-language (the `any` in `#[cfg(any(feature = "keytron", feature = "keytron-dk"))]`)
- the `optional = true` that must be added to the device crates in `Cargo.toml` (even though the source is already conditionally requiring them!)
- how to enable features when building a static binary (`cargo build --release --no-default-features --features "keytron"`)
I still have many unresolved questions, too!
At some point I gave up trying to pass device peripherals as function arguments because I couldn’t figure out how to add conditional attributes to types — the “obvious” thing doesn’t work:
fn read_keys(port: #[cfg(feature = "splitapple")]
nrf52840_hal::pac::P1
#[cfg(feature = "keytron")]
nrf52833_hal::pac::P0) -> Packet {}
There’s a neat embedded framework, RTIC, whose main entry point is an `app` annotation that takes the device crate as an, uh, argument:
#[app(device = nrf52833)]
const APP: () = {
//your code here...
};
How does one conditionally vary this argument at compile-time? I have no idea.
Types and macros
Rust also proved challenging even within a single hardware configuration.
Consider scanning a keyboard matrix: If we don’t have enough microcontroller pins to connect each keyboard switch directly to a pin, we can arrange the switches with diodes (one-way valves) into a matrix:
We then set a single column high and read out the rows to find the state of the switches on that column.
In this example, if we set pin 1.10 high (col0) and then read pin 0.13 (row1) as high, we know that switch K8 is pressed.
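The scanning loop itself is easy to sketch. Here’s a purely illustrative simulation in plain Rust — no GPIO involved; the 2×3 matrix shape and the `scan_matrix` function are made up for this sketch:

```rust
// Simulate matrix scanning: "drive" one column at a time and check which
// rows would read high. `pressed[row][col]` models whether the switch at
// that position is physically closed.
fn scan_matrix(pressed: &[[bool; 3]; 2]) -> Vec<(usize, usize)> {
    let mut down = Vec::new();
    for col in 0..3 {
        // set this column high; on real hardware the diodes ensure only
        // switches on this column can pull a row high
        for row in 0..2 {
            if pressed[row][col] {
                down.push((row, col));
            }
        }
        // set the column low again before scanning the next one
    }
    down
}

fn main() {
    let mut state = [[false; 3]; 2];
    state[1][0] = true; // press the switch at row 1, column 0
    println!("{:?}", scan_matrix(&state)); // → [(1, 0)]
}
```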
Pretty simple in theory, but complex in Rust because:
- Device crates expose hardware peripherals as distinct types
- One does not simply compute with distinct types in Rust
Say I need to initialize all the columns as output pins.
Doing this for a single pin, say peripheral port P0’s pin 10, is simple enough:
P0.pin_cnf[10].write(|w| {
w.input().disconnect();
w.dir().output();
w
});
But my column pins are spread across two ports, so what I want to write:
for (port, pin) in &[(P0, 10), (P1, 7), ...] {
port.pin_cnf[pin].write(|w| {
w.input().disconnect();
w.dir().output();
w
});
}
isn’t going to fly because now the tuples have different types — `(P0, usize)` and `(P1, usize)` — and so they can’t hang together in the same collection.
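You can reproduce the issue without any hardware at all. In this toy sketch, `P0` and `P1` are made-up zero-sized structs standing in for the HAL’s distinct port types, and `pin_of` is an invented helper:

```rust
// P0 and P1 stand in for the HAL's distinct, zero-sized port types.
struct P0;
struct P1;

// Monomorphized per port type — this compiles, but it's one function
// instance per type, not a plain runtime loop over values.
fn pin_of<T>(pair: (T, usize)) -> usize {
    pair.1
}

fn main() {
    // Uncommenting the next line fails with error[E0308]: mismatched types,
    // because an array needs one element type and (P0, usize) != (P1, usize).
    // let pins = [(P0, 10usize), (P1, 7usize)];
    assert_eq!(pin_of((P0, 10)), 10);
    assert_eq!(pin_of((P1, 7)), 7);
}
```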
Here’s the solution I came up with:
type PinIdx = u8;
type Port = u8;
const COL_PINS: [(Port, PinIdx); 7] =
[(1, 10), (1, 13), (1, 15), (0, 2), (0, 29), (1, 0), (0, 17)];
pub fn init_gpio() {
for (port, pin_idx) in &COL_PINS {
match port {
0 => {
device.P0.pin_cnf[*pin_idx as usize].write(|w| {
w.input().disconnect();
w.dir().output();
w
});
}
1 => {
device.P1.pin_cnf[*pin_idx as usize].write(|w| {
w.input().disconnect();
w.dir().output();
w
});
}
_ => {}
}
}
}
Yup, good ol’ copy-paste to the rescue.
But wait, I hear you asking, what about macros? Oh yes, my friend, I shaved the macro yak in the actual scanning routine:
pub fn read_keys() -> u64 {
let device = unsafe { crate::hw::Peripherals::steal() };
let mut keys: u64 = 0;
macro_rules! scan_col {
($col_idx: tt; $($row_idx: tt => $key:tt, )* ) => {
let (port, pin_idx) = COL_PINS[$col_idx];
////////////////
//set col high
unsafe {
match port {
0 => {
device.P0.outset.write(|w| w.bits(1 << pin_idx));
}
1 => {
device.P1.outset.write(|w| w.bits(1 << pin_idx));
}
_ => {}
}
}
cortex_m::asm::delay(1000);
//read rows and move into packed keys u64.
//keys are 1-indexed.
let val = device.P0.in_.read().bits();
$(keys |= ((((val >> ROW_PINS[$row_idx]) & 1) as u64) << ($key - 1));)*
////////////////
//set col low
unsafe {
match port {
0 => {
device.P0.outclr.write(|w| w.bits(1 << pin_idx));
}
1 => {
device.P1.outclr.write(|w| w.bits(1 << pin_idx));
}
_ => {}
}
}
};
};
//col_idx; row_idx => key ID
#[cfg(feature = "splitapple-left")]
{
scan_col!(0; 0 => 1 , 1 => 8 , 2 => 15 , 3 => 21 , 4 => 27 , 5 => 33 ,);
scan_col!(1; 0 => 2 , 1 => 9 , 2 => 16 , 3 => 22 , 4 => 28 , 5 => 34 ,);
scan_col!(2; 0 => 3 , 1 => 10 , 2 => 17 , 3 => 23 , 4 => 29 , 5 => 35 ,);
scan_col!(3; 0 => 4 , 1 => 11 , 2 => 18 , 3 => 24 , 4 => 30 , 5 => 36 ,);
scan_col!(4; 0 => 5 , 1 => 12 , 2 => 19 , 3 => 25 , 4 => 31 , 5 => 37 ,);
scan_col!(5; 0 => 6 , 1 => 13 , 2 => 20 , 3 => 26 , 4 => 32 , 5 => 38 ,);
scan_col!(6; 0 => 7 , 1 => 14 ,);
}
#[cfg(feature = "splitapple-right")]
{
scan_col!(0; 0 => 1 , 1 => 8 , 2 => 15 , 3 => 23 , 4 => 30 , 5 => 37 ,);
scan_col!(1; 0 => 2 , 1 => 9 , 2 => 16 , 3 => 24 , 4 => 31 , 5 => 38 ,);
scan_col!(2; 0 => 3 , 1 => 10 , 2 => 17 , 3 => 25 , 4 => 32 , 5 => 39 ,);
scan_col!(3; 0 => 4 , 1 => 11 , 2 => 18 , 3 => 26 , 4 => 33 , 5 => 40 ,);
scan_col!(4; 0 => 5 , 1 => 12 , 2 => 19 , 3 => 27 , 4 => 34 , 5 => 41 ,);
scan_col!(5; 0 => 6 , 1 => 13 , 2 => 20 , 3 => 28 , 4 => 35 , 5 => 42 ,);
scan_col!(6; 0 => 7 , 1 => 14 , 2 => 21 , 3 => 29 , 4 => 36 , 5 => 22 ,);
}
keys
}
There’s a lot going on here!
Basically, each `scan_col!` macro invocation expands into code that sets that column pin high, reads out the rows, and pushes their statuses to the appropriate bits of the mutable `keys: u64` variable at the top of the function.
If you want to understand in more detail, grab your favorite beverage and spend some quality time with the Rust book’s macro section or Rust’s macro reference docs.
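To give a hardware-free feel for the pattern-matching mini-language, here’s a stripped-down sketch of the same `$row => $key` expansion trick; `pack_keys!` and its inputs are invented for illustration, not taken from the firmware:

```rust
// Each `$row => $key` pair expands into one bit-twiddling statement,
// mirroring the row/key pairs passed to scan_col! above.
macro_rules! pack_keys {
    ($val:expr; $($row:expr => $key:expr,)*) => {{
        let mut keys: u64 = 0;
        $(keys |= ((($val >> $row) & 1) as u64) << ($key - 1);)*
        keys
    }};
}

fn main() {
    let val: u32 = 0b101; // rows 0 and 2 read high
    // key 1 lands in bit 0, key 15 in bit 14
    let keys = pack_keys!(val; 0 => 1, 1 => 8, 2 => 15,);
    assert_eq!(keys, 1 | (1 << 14));
    println!("{keys}"); // → 16385
}
```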
I’m not happy with either the pin initialization or matrix scanning code I came up with here, but they were the clearest I was able to write. From the first page of Google results for “rust keyboard firmware”, it looks like other Rustaceans solved this problem by:
iterating over usizes and matching to destructure tuples; I like this macro-free approach (I took it for my luxury touchpad), though identifying switches by row/column coordinates implies that each row/column has the same number of switches, which isn’t always the case.
relying on (their words) a macro to implement an iterator on trait objects from a tuple struct; I’m not sure exactly what’s going on here.
a truly astral level of understanding; I’m really not sure what’s going on here.
While there’s certainly a lot of language complexity in all these solutions, Rust deserves a lot of credit for being more palatable than the traditional approaches.
Unlike C’s infamous textual preprocessor macros (`#define`, `#ifdef`, etc.), for example, Rust’s macros won’t lead to inexplicable syntax errors on expansion.
(And all of the expanded code is type checked!)
Rust’s tooling is much better too — Rust Analyzer is competent enough to understand the feature annotations when jumping around code, something I never could figure out for C.
Given how smart the Rust contributors are — check out all the thoughtful discussions and weighing of tradeoffs they make in the public RFC process — I was tempted to conclude that, well, all this complexity must be inherent. Perhaps it’s just hard to do compile-time configuration and to iterate over distinct types efficiently in a safe, compiled language?
Perhaps, but Zig makes a compelling case that — at least for my pandemic-hobby-project keyboard firmware — I can get by with far fewer concepts.
Zig, a simpler language
Here’s how I tackled these two problems of conditional compilation and iteration over distinct types using Zig. (See Jamie’s post for a more comprehensive comparison of Rust and Zig.)
Full disclosure: This is pretty much the first code I’ve ever written in Zig, so there may be more idiomatic or tidy solutions.
For conditional compilation, I moved hardware-specific details into separate files.
E.g., `dk.zig`:
usingnamespace @import("register-generation/target/nrf52833.zig");
usingnamespace @import("ztron.zig");
pub const led = .{ .port = p0, .pin = 13 };
and `atreus.zig`:
usingnamespace @import("register-generation/target/nrf52840.zig");
usingnamespace @import("ztron.zig");
pub const led = .{ .port = p0, .pin = 11 };
each import their microcontroller-specific register definitions and define PCB-specific LED pin assignments.
The common `ztron.zig` file then imports those public constants via `@import("root")` (“root” is the compiler entrypoint, so this is a circular reference; it’s fine!) and uses them directly:
usingnamespace @import("root");
export fn setup() void {
led.port.pin_cnf[led.pin].modify(.{
.dir = .output,
.input = .disconnect,
});
}
There’s no special “feature” semantics to learn, `Cargo.toml` to rearrange, or flags to pass to the compiler. `Cargo.toml` doesn’t even exist! To specify what code you want to compile you, uh, just tell the compiler: To compile the devkit hardware, run `zig build-obj dk.zig`; for the Atreus, `zig build-obj atreus.zig`.
This works because Zig only evaluates code as-needed. (Not just imported files either — the compiler doesn’t mind half-written, ill-typed functions as long as they’re not invoked.)
As for the keyboard matrix pin setup? Well, the peripherals are still distinct types but that’s…fine:
const rows = .{
.{ .port = p1, .pin = 0 },
.{ .port = p1, .pin = 1 },
.{ .port = p1, .pin = 2 },
.{ .port = p1, .pin = 4 },
};
const cols = .{
.{ .port = p0, .pin = 13 },
.{ .port = p1, .pin = 15 },
.{ .port = p0, .pin = 17 },
.{ .port = p0, .pin = 20 },
.{ .port = p0, .pin = 22 },
.{ .port = p0, .pin = 24 },
.{ .port = p0, .pin = 9 },
.{ .port = p0, .pin = 10 },
.{ .port = p0, .pin = 4 },
.{ .port = p0, .pin = 26 },
.{ .port = p0, .pin = 2 },
};
pub fn initKeyboardGPIO() void {
inline for (rows) |x| {
x.port.pin_cnf[x.pin].modify(.{
.dir = .input,
.input = .connect,
.pull = .pulldown,
});
}
inline for (cols) |x| {
x.port.pin_cnf[x.pin].modify(.{
.dir = .output,
.input = .disconnect,
});
}
}
The `inline for` construct generates an unrolled loop at compile-time.
It’s not that I care about the generated machine instructions here — that the loop is actually unrolled — but rather that the language lets me express my desire to “loop” over a heterogeneously-typed collection.
The same trick makes the actual key scanning code much clearer too:
const col2row2key = .{
.{ .{ 0, 1 }, .{ 1, 11 }, .{ 2, 21 }, .{ 3, 32 } },
.{ .{ 0, 2 }, .{ 1, 12 }, .{ 2, 22 }, .{ 3, 33 } },
.{ .{ 0, 3 }, .{ 1, 13 }, .{ 2, 23 }, .{ 3, 34 } },
.{ .{ 0, 4 }, .{ 1, 14 }, .{ 2, 24 }, .{ 3, 35 } },
.{ .{ 0, 5 }, .{ 1, 15 }, .{ 2, 25 }, .{ 3, 36 } },
.{ .{ 2, 26 }, .{ 3, 37 } },
.{ .{ 0, 6 }, .{ 1, 16 }, .{ 2, 27 }, .{ 3, 38 } },
.{ .{ 0, 7 }, .{ 1, 17 }, .{ 2, 28 }, .{ 3, 39 } },
.{ .{ 0, 8 }, .{ 1, 18 }, .{ 2, 29 }, .{ 3, 40 } },
.{ .{ 0, 9 }, .{ 1, 19 }, .{ 2, 30 }, .{ 3, 41 } },
.{ .{ 0, 10 }, .{ 1, 20 }, .{ 2, 31 }, .{ 3, 42 } },
};
pub fn readKeys() PackedKeys {
var pk = PackedKeys.new();
inline for (col2row2key) |row2key, col| {
// set col high
cols[col].port.outset.write_raw(1 << cols[col].pin);
delay(1000);
const val = rows[0].port.in.read_raw();
inline for (row2key) |row_idx_and_key| {
const row_pin = rows[row_idx_and_key[0]].pin;
pk.keys[(row_idx_and_key[1] - 1)] = (1 == ((val >> row_pin) & 1));
}
// set col low
cols[col].port.outclr.write_raw(1 << cols[col].pin);
}
return pk;
}
Conceptually, Zig’s `inline for` is solving the same problem that Rust’s syntax macro solves (generating type-specific code at compile-time), but without the side quest of learning a lil’ pattern matching/expansion language.
In fact, because row/column/switch layout exists in a const struct, it’s possible to compute with it. E.g., to calculate (at compile-time) the number of switches on the keyboard:
pub const switch_count = comptime {
var n = 0;
for (col2row2key) |x| n += x.len;
return n;
};
I have no idea how one might do this from the Rust syntax macro invocations:
scan_col!(0; 0 => 1 , 1 => 8 , 2 => 15 , 3 => 21 , 4 => 27 , 5 => 33 ,);
(Though I’m sure it’s possible — experts have discovered that Rust macros can count to around 500 and may, perhaps one day, reach even larger numbers.)
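For the curious: the counting trick is recursion over tokens. Here’s a minimal version (my sketch, not code from any of the linked posts):

```rust
// Each recursive step peels off one token and adds 1; empty input is 0.
// The deep recursion is why naive versions of this hit the compiler's
// recursion limit after a few hundred tokens.
macro_rules! count_tts {
    () => (0usize);
    ($head:tt $($tail:tt)*) => (1usize + count_tts!($($tail)*));
}

fn main() {
    println!("{}", count_tts!(1 8 15 21 27 33)); // → 6
}
```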
Why do I struggle with Rust?
Using Zig for just a few hours has highlighted to me aspects of Rust that I’d never before considered. In particular, that much of the complexity I’d unconsciously attributed to the domain — “this is what systems programming is like” — was in fact a consequence of deliberate Rust design decisions.
For example, it’s now quite clear to me that Rust is a language which has a dedicated feature for everything. In addition to its famous borrow checker, Rust has modules, packages, generics, traits, two kinds of macros, attribute annotations, and a dozen other things.
Heck, even defining immutable variables is done with different language features depending on whether it’s in a function context or module context:
fn main() {
let message = "hello world"; // a regular immutable variable definition
}
let message = "hello world"; // doesn't work at toplevel
const message: &str = "hello world"; // you have to write `const` and declare the type yourself.
Now, I’m certain there are good reasons why all these design decisions were made. I’m not a historian of the language, but I can speculate:
- Maybe running-arbitrary-code macros would be too powerful, so the more limited syntax macros were chosen to keep programs easier to reason about and faster to compile.
- Perhaps type annotations are required at the top-level because inference would be too “spooky action at a distance” for variables referenced widely across a large codebase.
- Maybe it’s `const` instead of `let` because there’s a guarantee that `let` is always on the heap or stack and `const`s are always in the data-segment of a binary.

If you’re building a web browser with 100 co-workers, yes absolutely, all code must be packaged into crates with elaborate type constraints that prove specific safety properties, etc.
However, when I use Rust as a physicist-turned-web-developer, none of these reasons are clear to me. (See language designer Evan Czaplicki’s excellent talk On Storytelling for more on this.)
So one aspect of the struggle is motivational: I have to pay the upfront cost of learning language complexity, but can only take on faith that this complexity ultimately serves me. (I get the same vibe doing my taxes: There’s a sort of fractal complexity of documentation and concepts which, presumably, reflect carefully considered trade-offs made by smart people doing the best they can given historical accidents, conflicting requirements, etc.)
Even putting aside this motivational angle, why over the past three years have I struggled to just learn Rust?
A helpful lens is provided by the Cognitive dimensions framework’s notion of “consistency”:
a particular form of guessability: when a person knows some of the language structure, how much of the rest can be guessed successfully?
Rust has many language features and they’re all largely disjoint from each other, so knowing some doesn’t help me guess the others.
Nothing I knew about `if` expressions helped me predict or understand the attribute annotation / feature system, even though they’re both fulfilling a conceptually similar need (conditional logic). Nothing I knew about functions helped me understand syntax macros.
Conversely, this “consistency” principle also explains why I had such an easy time picking up Zig — it absolutely excels in this department.
Not only are there many fewer features to learn in the first place, they seem to all fit together nicely: The `comptime` and `inline for` keywords, for example, allowed me to leverage at compile-time all the looping, conditions, arithmetic, and control flow I wanted using the syntax and semantics I’d already learned — Zig!
Why am I excited about Zig?
Ease of learnability is nice if you can get it, sure, but I’m not picking up a systems language because I want something easy to learn. I’m doing it because I want the capabilities; I want to push pixels around the screen as fast as possible =D
As such, I’m excited about Zig for two broad reasons.
The first one is that it’s a very different kind of systems programming than I’m used to: It’s fast, small, and fun.
“Fast” is an easy one to explain: When I open a Rust project, Emacs starts dropping keystrokes and my poor 2013-era MacBook Air’s fans go wild:
With Rust 1.50, a from-scratch debug build of my keyboard firmware takes 70 seconds (release, 90 seconds) and the `target/` directory consumes 450MB of disk. Zig 0.7.1, on the other hand, compiles my firmware from-scratch in release mode in about 5 seconds and its `zig-cache/` consumes 1.4MB.
Nice!
“Small” is similarly easy; again, there’s basically one page of documentation. This value proposition is right at the top of the Zig Website:
Focus on debugging your application rather than debugging your programming language knowledge.
When I first started using Zig, I was dismayed that it was missing so many features that I liked from other languages. No syntax for ranges. No closures. No iterator syntax.
However, I ended up finding these absences liberating — this is where the “fun” comes in.
After two minutes of searching, I’d conclude “well, guess I’ll just have to suck it up and write a `while` loop” and then I’d get back to working on my problem.
I found myself more often in a state of creative flow, devising plans based on the limited capabilities of Zig and then executing them. This flow wasn’t constantly broken by stops for documentation or side-quests to investigate some feature/syntax/library.
This isn’t so much an observation of Zig alone as it is about my knowledge of Zig.
The language is so small and consistent that after a few hours of study I was able to load enough of it into my head to just do my work.
I wrote my keyboard firmware, it worked!
A few days later I paired with a never-seen-Zig-before friend on a bit of WASM image-processing code, it also worked!
(`zig build-lib -target wasm32-freestanding -O ReleaseSmall foo.zig` generates `foo.wasm`, that’s it!)
Even though I’m only a dozen hours in, I feel like I can already be productive with Zig without an Internet connection. It feels like Zig is a language that I’d be able to master; to fully internalize such that I can use it without thinking about it. This feels super exciting and empowering.
Can’t fail
Of course, this all could be a fluke. Maybe I just got unlucky, backed myself into an awkward corner of Rust, and in a moment of weakness left it for an immature language. Fair; I did have to generate my own microcontroller peripheral library from XML and have run into at least one Zig compiler bug so far (can’t continue from comptime loop).
It could be that Zig’s language simplicity will lead me astray; that ultimately I’ll have to face the much worse complexities of difficult-to-reproduce memory errors and I’ll wish I had the borrow checker. That I’ll make a mess of irreducibly complex compile-time logic and wish for syntax macros and attribute annotations. That I’ll be unable to reason about or extend programs of any substantial domain complexity, and I’ll suffer the pain of implementing my own dynamic trait object system and clunky ad-hoc safety prover.
Perhaps bizarrely, this is the second reason why I’m so excited about Zig: It feels like I can’t fail.
I’ll either use Zig successfully for my embedded hobby projects, one-off WASM helpers, and C API binding needs, or, in struggling to accomplish these tasks, I’ll finally begin to understand and appreciate more of the issues that Rust is protecting me from.
Either way, I’m quite excited!
Thanks
Thanks to Julia Evans, Pierre-Yves Baccou, Laura Lindzey, Jamie Brandon, and Boats for their thoughtful discussions about Rust/Zig and their constructive feedback on this article!