There are a lot of things I’d like to make — maybe you can help! Please let me know if:
- you know of an existing solution to one of the following ideas
- you have suggestions/pointers about a relevant technique/approach
- you want to have a product design chat or pair programming session — I love collaborating with people, so definitely let me know!
Twitter interface: I like Twitter because sometimes people say smart, interesting things there. But I don’t like how little control I have over the main interaction — my “feed” is filled with retweets, other people’s favorites, and snippets of contextless threaded conversation. How could I design an interface and sorting algorithms to enable the interactions I want out of Twitter (and even other platforms like Reddit):
- never missing the high-value, infrequent tweets of makers in the deluge of talkers/commentators (who I nonetheless want to roughly keep up with)
- batch Twitter into chunks (daily?) to make usage more deliberate
- acknowledge the fact that I have different moods; sometimes I want to “work”, other times I want to hear what my friends are up to.
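As a sketch of the first bullet, one ranking heuristic would weight authors inversely by posting volume, so the infrequent makers float above the prolific commentators. Everything here (the tuple schema, the scoring) is invented for illustration, not anything Twitter’s API gives you:

```python
from collections import Counter

def rank_feed(tweets):
    """Rank tweets so posts from infrequent authors surface first.

    `tweets` is a list of (author, text) pairs — a made-up schema.
    Scoring by the author's volume in this batch means a maker's one
    tweet outranks a talker's twentieth; Python's stable sort keeps
    each author's tweets in original order.
    """
    counts = Counter(author for author, _ in tweets)
    return sorted(tweets, key=lambda t: counts[t[0]])

feed = [
    ("talker", "hot take #1"),
    ("talker", "hot take #2"),
    ("maker", "rare gem"),
    ("talker", "hot take #3"),
]
ranked = rank_feed(feed)
```

A real version would count over a longer window than one batch, but the daily-batch idea above pairs nicely with exactly this kind of per-batch re-sort.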
Collaborative drawing/whiteboard: Using an iPad or drawing tablet, how can I:
- draw simple annotations on top of my computer desktop OR a blank canvas
- let my video-call collaborators draw too
I want a few colors, eraser/undo, and maybe layers that can be toggled on/off (perhaps each layer is just an image file, Plan 9-style?). Non-goals: async collaboration (operational transforms, CRDTs, etc.), fancy drawing brushes. Prior art: Clearboard (pointed out by Adam Solove)
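The image-file-per-layer idea keeps compositing trivial: paint the visible layers bottom-to-top onto the canvas. A minimal sketch, with sparse pixel dicts standing in for real image files:

```python
def composite(layers, visible, width, height, background=0):
    """Composite toggleable layers, bottom to top.

    Each layer is a dict mapping (x, y) -> color; absent pixels are
    transparent. `visible` is a parallel list of booleans — the layer
    toggle switches. Later (higher) layers paint over earlier ones.
    """
    canvas = [[background] * width for _ in range(height)]
    for layer, on in zip(layers, visible):
        if not on:
            continue
        for (x, y), color in layer.items():
            canvas[y][x] = color
    return canvas

layers = [{(0, 0): 1}, {(0, 0): 2, (1, 1): 3}]
```

Toggling a layer off is just flipping its boolean and recompositing — no per-stroke bookkeeping needed, which fits the “no OT/CRDT” non-goal.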
Computer vision fiducial: I’d like a fiducial that a CNC mill can machine onto its own spoilboard, suitable for high-accuracy (sub-mm) pose estimation. I’m thinking a pattern of holes, since that’s easy to drill, detect, and isolate from other spoilboard marks. The reconstruction algorithm should be robust to missing holes (i.e., those covered by workpieces on the machine) and extra holes (should any be accidentally created on the spoilboard).
The fiducial would be custom designed for a single machine bed and used for pose estimation only — no need to embed an ID or other data.
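Once hole correspondences are matched (the matching step is where missing/extra-hole robustness would live — e.g., a RANSAC loop), the pose itself reduces to a least-squares 2D rigid fit between design-space hole centers and their observed positions. A sketch of that closed-form fit:

```python
import math

def estimate_pose_2d(design_pts, observed_pts):
    """Least-squares rigid transform (rotation angle theta, translation
    tx, ty) mapping design-space hole centers onto their observed
    positions, so that observed ~= R(theta) @ design + t.

    Assumes correspondences are already established; this is the
    standard closed-form solution (2D Procrustes via atan2).
    """
    n = len(design_pts)
    cpx = sum(p[0] for p in design_pts) / n
    cpy = sum(p[1] for p in design_pts) / n
    cqx = sum(q[0] for q in observed_pts) / n
    cqy = sum(q[1] for q in observed_pts) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(design_pts, observed_pts):
        px, py = px - cpx, py - cpy          # center both point sets
        qx, qy = qx - cqx, qy - cqy
        s_cos += px * qx + py * qy           # aligned component
        s_sin += px * qy - py * qx           # perpendicular component
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    tx = cqx - (c * cpx - s * cpy)
    ty = cqy - (s * cpx + c * cpy)
    return theta, tx, ty
```

Because this is least-squares over all matched holes, accuracy improves with the number of holes detected — which is exactly why a custom pattern covering the whole bed helps hit sub-mm.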
SLAM: I found a 77-part YouTube series on SLAM (simultaneous localization and mapping) which looks like it’d make for a good first foray into statistical programming and control algorithms (something I vaguely know about in the abstract, but haven’t synthesized in a project).
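The predict/update loop that SLAM lectures build everything on can be tried in miniature with a 1D Kalman filter — a robot on a line with noisy motion commands and noisy position measurements. The noise variances below are made-up illustration values:

```python
def kalman_1d(u_controls, z_measurements, q=0.01, r=0.5):
    """Minimal 1D Kalman filter.

    u_controls: commanded moves per step; z_measurements: noisy
    position readings per step. q and r are the process and measurement
    noise variances. Returns the final state estimate and its variance.
    """
    x, p = 0.0, 1.0              # state estimate and its variance
    for u, z in zip(u_controls, z_measurements):
        x, p = x + u, p + q      # predict: apply motion, grow uncertainty
        k = p / (p + r)          # Kalman gain: how much to trust z
        x = x + k * (z - x)      # update: blend in the measurement
        p = (1 - k) * p          # uncertainty shrinks after measuring
    return x, p
```

Full SLAM is “the same idea, but the state vector also contains the map” — which is why this tiny loop is a reasonable first synthesis project.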
Real-time point cloud + pose synthesis: In real-time (30 FPS), estimate the pose of a depth camera and use that to position depth camera point cloud measurements in a 3D scene. Basically, wave around a depth camera and get a point cloud. Would probably want to do this in Rust, though TBD on whether a naive frame-by-frame approach would yield good results or if some fancy statistics would need to coalesce pose estimation between frames to get better accuracy.
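The naive frame-by-frame core is two steps: back-project each depth pixel through the camera’s pinhole intrinsics, then transform the resulting points by that frame’s estimated pose into the shared scene. Sketched in Python for brevity (the real thing would be Rust, as noted above):

```python
def unproject(depth, fx, fy, cx, cy):
    """Back-project a depth image (nested lists, metres; 0 = no reading)
    into camera-space 3D points via a pinhole model with focal lengths
    fx, fy and principal point cx, cy."""
    pts = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:
                pts.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return pts

def to_world(points_cam, R, t):
    """Place one frame's points in the scene using its estimated pose:
    a 3x3 rotation matrix R (nested lists) and translation t."""
    return [tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                  for i in range(3))
            for p in points_cam]
```

The “fancy statistics” alternative would refine R and t across frames (e.g., ICP or a filter over poses) before calling `to_world`, but the accumulation step itself stays this simple.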
Position-based tensegrity design: I found a spectacular paper describing an algorithm to find stable tensegrity structures to fit a given design. I contacted the authors but they didn’t share their implementation, so I’d like to have a go. This would involve some constraint programming, numerical linear algebra, and 3d rendering on the software side, and once that’s working I’d like to design and fabricate a few pieces to test it out (scale models at first, then maybe furniture?)
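The paper’s algorithm isn’t public, but a standard entry point for tensegrity form-finding is the force density method, where each member carries a force density q (force per unit length) and node equilibrium becomes a linear system. Its simplest case — one free node tethered to fixed anchors — has a closed form worth sanity-checking against before building the full constraint solver (this is a related standard technique, not the paper’s method):

```python
def free_node_position(anchors, q):
    """Force-density equilibrium for one free node connected to fixed
    anchor points by members with force densities q.

    Balancing member forces q_i * (anchor_i - x) = 0 puts the free node
    at the q-weighted average of the anchors; the general method solves
    the same balance for many free nodes as a sparse linear system.
    """
    total = sum(q)
    return tuple(sum(qi * a[k] for qi, a in zip(q, anchors)) / total
                 for k in range(len(anchors[0])))
```

For example, anchors at (0,0,0) and (4,0,0) with densities 1 and 3 pull the free node to (3,0,0) — three-quarters of the way toward the stiffer member.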
iPad as trackpad: Write an iPad app to forward multitouch gestures to the Mac so that I can use the iPad as a giant “magic trackpad”: Move the cursor around, two finger scroll, pinch to zoom, etc. I think most of the work here would be learning about macOS trackpad drivers and pretending to be one. Potentially relevant: Sensible Side Buttons’ low-level mouse button remapping.
iPad as drawing tablet: As above, but using the Apple Pencil and using the iPad as a drawing tablet (the drawing happens on the computer, not the iPad). Potentially relevant: Tuio protocol (thanks Christian S.)
SpaceMouse in OS X: I have a SpaceMouse 6-axis controller and would love to pan/zoom/scroll in regular apps with it (it only works in CAD programs that have special plugins for it). I’ve written a Rust adapter to the native driver and can get real-time position information, so I think I just need to trick macOS into thinking that this is a magic trackpad or something? Another interesting aspect is exploring whether it would “feel” okay to use a continuous controller for potentially discrete actions (e.g., rotate the controller to a 10-degree yaw and it starts switching through windows at one per two seconds; a 20-degree yaw speeds up to one window per second, etc.).
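One standard way to drive discrete actions from a continuous axis is rate control with a dead zone: the twist angle sets the period between discrete steps rather than triggering them directly. A sketch matching the yaw example above (all thresholds are invented tuning values):

```python
def switch_period(yaw_deg, dead_zone=5.0):
    """Map a continuous SpaceMouse yaw angle to a window-switching rate.

    Inside the dead zone nothing happens (so the controller can rest
    off-center without spamming switches); past it, the period between
    switches shrinks as you twist further: 10 degrees -> one switch
    every 2 s, 20 degrees -> one per second. Returns seconds between
    switches, or None for "don't switch".
    """
    mag = abs(yaw_deg)
    if mag < dead_zone:
        return None
    return 20.0 / mag
```

The event loop would then fire a window-switch whenever `switch_period` seconds have elapsed since the last one — the “feel” question is mostly about tuning the dead zone and the curve.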
Windows webcam server: (Done w/ Larry P and Glenn W) I have an Intel RealSense depth camera attached via USB to a Windows machine. I’d like to stream frames (RGB + depth) over Ethernet to a Mac with the lowest possible latency. Video streaming protocols all look very complicated, which makes sense because the Internet is hard. But since I’m on a local Ethernet cable, I’m thinking maybe just write a little Rust server to unicast numbered packets?
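The numbered-packet idea really can be this small: an 8-byte sequence header, drop anything stale, never retransmit (a late depth frame is worthless anyway). Sketched in Python over UDP; a real RGB+depth frame would also need chunking to fit in datagrams, which this omits:

```python
import socket
import struct

HEADER = struct.Struct("!Q")  # 8-byte big-endian sequence number

def send_frame(sock, addr, seq, payload):
    """Prefix a frame chunk with its sequence number and fire it off.
    No acks, no retransmission — latency beats completeness here."""
    sock.sendto(HEADER.pack(seq) + payload, addr)

def recv_frame(sock, last_seq):
    """Receive one packet; return (seq, payload) if it's newer than
    last_seq, else None (stale or duplicate packets are dropped)."""
    data, _ = sock.recvfrom(65535)
    seq = HEADER.unpack_from(data)[0]
    if seq <= last_seq:
        return None
    return seq, data[HEADER.size:]
```

On a single local Ethernet cable, loss and reordering are rare enough that this drop-stale policy is probably all the “protocol” needed — which is the whole appeal over a real video stack.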