Hi friends!
First, let me apologize for last month’s bizarre subject line: “Friends, Romans, countrymen, lend me your ears”. It’s the default subject line of my newsletter software, and only by gross negligence was it sent out last month. Those responsible have been given a stern talking-to and placed on personal improvement plans.
Back around 2010 — before it was cool — I worked in machine learning (classifying medical bills using linear models with a touch of support vector machines). After a year, I left to consult on data visualization and mostly stopped paying attention to anything machine learning.
But this past month, I had some time to check in on the past decade — turns out there was something behind all the computer vision hype that kept spilling into my Twitter feed.
Most of my catching up came via the FastAI course, which is spectacular:
- taught as “do cool stuff first, then backfill theory/explanation”, which I quite enjoyed (in contrast to the typical theory-first pedagogy lamented by Lockhart)
- detailed instructions + works-out-of-the-box software for doing GPU training in Python Notebooks in the cloud (not going to lie, part of what kept me away for 10 years was not wanting to spend a weekend getting intimate with Python package managers…)
- in addition to being accessible/usable, much of the software/material is also state of the art (at least, it’s plausibly in the 95th percentile, which is good enough for me)
After a few hours of going through the FastAI course, I’m convinced that there’s a ton of newly low-hanging fruit. That is, accessible computer vision classification means the hard part is no longer the tech, but simply recognizing the problem. I suspect there are lots of projects/products/businesses now possible that will seem obvious in hindsight.
Other nice machine learning resources I came across this past month:
- FastAI’s lesson 2, on implementing gradient descent to fit a line to some points using “automatic differentiation” (see the first sketch below this list)
- Learning to play tic-tac-toe using reinforcement learning (see the second sketch below this list)
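Since that lesson is what finally made “automatic differentiation” click for me, here’s a minimal sketch of the idea in PyTorch (the library FastAI builds on). The toy data and hyperparameters are my own stand-ins, not the lesson’s exact code:

```python
import torch

# Toy data: 100 points scattered around y = 3x + 2
x = torch.linspace(0, 1, 100)
y = 3 * x + 2 + 0.1 * torch.randn(100)

# The two parameters we want to learn, with gradient tracking on
a = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

lr = 0.1
for step in range(500):
    pred = a * x + b                 # current line's predictions
    loss = ((pred - y) ** 2).mean()  # mean squared error
    loss.backward()                  # autodiff fills in a.grad and b.grad
    with torch.no_grad():            # plain gradient descent step
        a -= lr * a.grad
        b -= lr * b.grad
        a.grad.zero_()
        b.grad.zero_()

print(a.item(), b.item())            # should land near 3 and 2
```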
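And on the reinforcement learning side, the core trick is a one-line value update. Here’s a toy sketch of tabular Q-learning against a random opponent; it’s my own construction to illustrate the technique, not the linked tutorial’s code:

```python
import random
from collections import defaultdict

# Tabular Q-learning for tic-tac-toe against a random opponent.
# Boards are tuples of 9 cells: 0 = empty, 1 = us, -1 = them.
Q = defaultdict(float)                 # Q[(board, move)] -> value estimate
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != 0 and b[i] == b[j] == b[k]:
            return b[i]
    return 0

def moves(b):
    return [i for i, v in enumerate(b) if v == 0]

def play(b, i, side):
    c = list(b); c[i] = side; return tuple(c)

def choose(b):
    # Epsilon-greedy: mostly exploit current estimates, sometimes explore
    if random.random() < EPS:
        return random.choice(moves(b))
    return max(moves(b), key=lambda m: Q[(b, m)])

for episode in range(20000):
    board = (0,) * 9
    while True:
        move = choose(board)
        nxt = play(board, move, 1)
        if winner(nxt) == 1 or not moves(nxt):
            reward, done = (1.0 if winner(nxt) == 1 else 0.5), True
        else:
            nxt = play(nxt, random.choice(moves(nxt)), -1)  # opponent replies
            if winner(nxt) == -1:
                reward, done = 0.0, True
            elif not moves(nxt):
                reward, done = 0.5, True
            else:
                reward, done = 0.0, False
        # The Q-learning update: nudge the estimate toward reward + best next value
        target = reward if done else GAMMA * max(Q[(nxt, m)] for m in moves(nxt))
        Q[(board, move)] += ALPHA * (target - Q[(board, move)])
        if done:
            break
        board = nxt
```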
In other “Kevin looks at stuff from 2016” news, I had the opportunity to play with Google’s Tilt Brush VR app on an HTC Vive. VR was neat, but I was more interested in the controllers themselves, which accurately track position and orientation in space.
I’m curious how such controllers could be used with CAD software or to enable some kind of fast, sketchy, “hands on” control of something like a CNC router.
I was surprised how little public work I could find in this space; most everything I found was “Walk around your CAD designs using VR” rather than “Design directly using VR/hands-on-controllers”.
Relevant products I’ve found:
- A game that allows fast sketching / level building using spatial controllers; this is probably the most compelling single example I’ve seen of the creative possibilities in this space
- VR-native CAD software from an ex-Autodesk researcher, who has spoken about the untapped potential in VR
Personally, I’ve started experimenting with the Fusion 360 API to test how feasible it’d be to “bolt on” custom controllers and UI; a sketch of what those scripts look like is below. (I’ll let you know how that works out next month.)
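If you’re curious what those experiments look like, here’s a minimal sketch of a Fusion 360 Python script that drives a user parameter, which is the kind of hook an external controller could plug into. The boilerplate is the standard Fusion 360 script skeleton; the parameter name “length” is a made-up example:

```python
import adsk.core, adsk.fusion, traceback

def run(context):
    ui = None
    try:
        app = adsk.core.Application.get()
        ui = app.userInterface
        design = adsk.fusion.Design.cast(app.activeProduct)

        # Nudge a user parameter from script. 'length' is a made-up
        # example name; substitute a parameter from your own design.
        param = design.userParameters.itemByName('length')
        if param:
            param.expression = '50 mm'
        else:
            ui.messageBox('No user parameter named "length" found.')
    except:
        if ui:
            ui.messageBox('Failed:\n{}'.format(traceback.format_exc()))
```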
And, of course, if you use Fusion 360, have thoughts on VR controllers, or want to jam on gross plugin hacks (or maybe less-gross 3D programming in Rust?), definitely drop me a line!
Best,
Kevin
p.s. Between my desire to work on Fusion 360 and Dropbox’s threat to leave me stranded and alone, I finally upgraded from OS X 10.9 to macOS 10.14. I then personally experienced (and fixed) some Mojave annoyances that impaired Finda. So if you haven’t tried that wonderful, productivity-enhancing file/tab/editor-buffer searcher and switcher, now’s a good time to try it out.