The Elon / Twitter Bummer

Let’s get this out of the way up front: if you’re here expecting more snarky piling on about how stupid Elon Musk is, you’re going to have to get your schadenfreude fix elsewhere. Frankly, I think he’s a genius. A singular individual of our time who is most fairly compared with Thomas Edison. But the whole Twitter thing really bums me out, because it makes obvious just how easily an unchecked strength can become a stunning downfall. It’s worth a few words; hopefully ones that will add a little bit of thoughtfulness to a mostly empty public “conversation.” We will see.

First let’s review a couple of the things Elon Musk has contributed to the world.

SpaceX

I’ve long been a believer in space exploration, so it’ll be no surprise that I have followed SpaceX since its earliest days. Back in 2002, Musk made a trip to Russia to acquire rockets at a commercially reasonable price. The Russians basically told him he was an idiot and that it couldn’t be done. ON THE PLANE HOME, he put together a spreadsheet that showed that he could. His own people thought he was nuts at first, but he was right. He surrounded himself with experts ranging from amateur to professional. He seeded money to folks and watched what happened. And most importantly he read, and read, and read. To call out just a few specifics he has cited:

… and then he made it happen. Bigtime. SpaceX is still today*** the only American company that can launch people into space. He puts satellites up for about $1,200 per pound (the Shuttle was $30,000). He has used the capability to launch Starlink, bringing the Internet to people and places previously left behind. The scope of what he has done here is stunning. No gimmicky “tourism.” He has never flown himself. He is simply knocking down real problems, one after another, while most others just second-guess from the sidelines.

Think space doesn’t matter? You’re dead wrong, but OK. How about climate change?

Tesla

It seems I can’t go a day without hearing Fleetwood Mac shill Chevy “EVs for Everyone.” Just like every other car company, Chevy would love for you to believe that this was all their idea, but in reality they (together with all the usual suspects) have been slow-boating electrics since the 1980s. Not so Elon, who first invested in Tesla in 2004, and launched the Roadster in 2009 as CEO — more than a decade ago. We got our Model X in 2018 and it is straight up the best car I’ve ever owned.

What made the difference with Tesla was not new science, but a willingness to buck conventional wisdom as to what was “production ready.” Rather than whine about a lack of charging infrastructure, they designed the Supercharger and deployed enough of them that I’ve comfortably road-tripped the entire west coast of the USA multiple times. For daily use we haven’t even installed a dedicated charger of our own — we do just fine with a standard wall outlet. Software completes the package: we can safely leave our dog in a climate-controlled car; get automatically-recorded video of accidents or attempted theft; watch Netflix in the ferry line; verify that the doors are locked from our phones; ask it to extract itself from a tight parking space. I’m not allowed to take my hands off the wheel quite yet, but the Tesla drives itself way more than I do — stops at lights, changes lanes, you name it.

And sure they’ve been expensive so far, but at a base price of $47k the Model 3 is within striking distance of “normal” cars. It’s not a stretch to say that the EV industry is at a tipping point today directly because of what Elon has accomplished with Tesla over the past thirteen years.

But wait, there’s more. Tesla has used what it learned from the cars to become an energy company. One of my son’s best friends is kept busy way more than full-time installing Tesla Solar Roofs across the western half of the country. The Powerwall uses software to optimize power management — even automatically “topping up” the charge when severe weather is in the forecast.

I could keep going like this for a long time. And it’s easy for folks to talk about how nobody should be a billionaire or whatever, but he earned his money creating and selling things people want. His start in business was a $28k loan from his dad — a nice advantage to be sure, but turning $28k into $200B (legally) is a pretty good record and doesn’t happen by accident.

So then WTF happened?

In almost every case, Elon’s success has come down to “just doing” things that conventional wisdom said couldn’t be done. But it’s not a Zuckerberg “move fast and break things” vibe. He really listens to the arguments and the experts and the ideas — he is smart enough to understand what he learns — and only then does he make his call. Educated, but not encumbered, by those who have come before him. It’s just damn impressive.

The thing is, though, a key reason he can ignore the preconceptions of others is that he doesn’t have a ton of empathy for them (clinically it seems). Honestly he said it best himself on SNL:

“To anyone who I’ve offended [with my Twitter posts], I just want to say I reinvented electric cars, and I’m sending people to Mars in a rocket ship. Did you think I was also going to be a chill, normal dude?”

It’s funny because it’s true. The same quality that helps him ignore naysayers also keeps him from understanding the positions of folks that attack him (rightly or wrongly). Especially in the anonymous public sphere. I mean, it’s hard for anybody to turn the other cheek online — throw in some mental instability and it’s just not a shocker when he reacts without thinking.

With Twitter, this all just spun wildly out of control. He’s mad that people are mean to him on Twitter, and because he’s the richest guy in the freaking world his answer is to just buy it and kick off the people he doesn’t like. Obviously he regretted this just days after he set it in motion. But he’d gone too far and was legally required to finish what he started. And the real kicker this time is that — in stark contrast to his other ventures — with Twitter he didn’t do his homework. So in practice he’s just another clueless a$$hat money exec who shows up assuming he knows better. But he doesn’t. And that is exactly what we are seeing play out as he flails and reacts, flails and reacts, flails and reacts.

It just makes me really very sad.

I’m not asking you to feel sorry for the richest man in the world. And I’m not making excuses for this complete sh**show; it could have serious negative implications for all of us. It’s just a bummer to watch it unfold. I hope that history is able to remember both sides of the Elon story, because we’ll all still be benefiting from what he’s built long after people forget what a tweet was.

*** Between the time I wrote the first draft of this piece and published it, NASA finally launched the behemoth that is Artemis on its first trip around the moon. They haven’t put humans in the capsule yet, but it does appear we’re going to have a second option. Awesome, but that program feels a bit like a relic from the 80s. I hope it goes ok!

A Driftwood and Glowforge Chess Set

My nephew (a pretty cool guy BTW) has started playing a lot of chess. I thought it’d be fun to make him a custom board using driftwood from the beach — it’s been too long since he’s been able to visit in person, but at least I can send a bit of Whidbey Island his way. The Glowforge made it easy to finish the job with an etched playing surface and pieces; I love the way it turned out! (I masked out his full name because he’s wisely not hanging out on social media like I am.)

The Driftwood Base

The base is a solid piece cut from a nice beach log — still a big fan of my electric chainsaw for this work! Rather than hoof it all the way back on foot, I rowed my inflatable the quarter mile down and back to the house. You can’t tell from these pics, but it was actually a crazy foggy day with visibility no more than maybe thirty yards. I did not stray far from shore.

First step was to flatten the slab. The router sled I created last year was perfect for the first side — just like with code, it’s awesome when you use something like this a second time! I then ran it through the planer until it was parallel on both sides. Next I dried the slab in the oven for about four hours at two hundred degrees. This worked ok, but there is just so much moisture in the beach wood that quick-drying creates a ton of stress on the fibers. I was able to work around the cracks that opened up, and the warp planed out ok, but I think sometime this month I’m going to cut a few pieces, give the ends a nice coat of Anchorseal, and then just leave them in the garage to dry for a year. That’s a long time and I’m not very patient, but it’ll be worth it to (mostly) eliminate ongoing cracks and warps.

After cutting the base square on the table saw, it was time to move on to the center recess for storing pieces. Originally I planned to just hollow this out, but I’m just not very skilled with detail routing yet. Instead I cut the slab in a tic-tac-toe fashion, planed down the middle piece until it was a good height, and glued it all back together. It’s amazing to me just how strong well-made glue joints can be; the wood around them will often tear before they give way. A tip from this very amateur woodworker: buy a ton of good-quality clamps; it’s just impossible to build well without them.

Last steps on the base were to (a) shave off the uneven edge created by the kerfs during the tic-tac-toe cutting; (b) use a roundover bit to rout a nice edge along the top; (c) sand it all to about 240 grit; (d) embed and glue in some magnets in the corners (more on this later); and (e) apply a few coats of Tried and True finish. I used this finish for the first time because it’s popular for wooden toys, and I’m kind of obsessed with it now — a combination of linseed oil and beeswax that goes on easily, buffs well and looks and feels great. Woot!

Glowforging the base

A piece of 1/8″ MDF with a white oak veneer serves as both the playing surface and a lid for the recess inside. Etching 32 squares takes a long time (SVG links are at the end of the article)! The board is secured with some small but relatively mighty disc magnets I got from Amazon. Honestly this could have turned out a bit better, but it worked OK. The magnets are 3/8″ diameter, but I couldn’t squish them into a 3/8” drilled inset — so I went up to a 1/2″ bit and that was fine after sanding down the edges a little. I secured a pair of magnets in each corner using J-B Weld epoxy. As an aside, J-B Weld is the absolute best. Years ago at Adaptive my belt buckle broke in the middle of the day and one of the folks in the lab hooked me up. That metal-to-metal repaired joint is STILL HOLDING under stress (OK eventually I did get a new belt but that one is still my backup). Amazing.

After that was dry, I put a third magnet on each stack, dabbed some epoxy on the top, and carefully placed the board so the magnets were aligned. This was the right approach, but I kind of screwed it up. The board and base are square and non-directional, so ideally you could just drop the board down in any rotation. But because the base isn’t perfectly square (remember when I trimmed off the kerf edge? That means the final piece is about 1/8″ shorter than it is long), the magnets are actually in a rectangle, not a square. Barely a rectangle, but enough that if you rotate the board 90 degrees it doesn’t sit well.

Worse, I reversed the polarity of the magnets in one corner so if you rotated the board, two corners actually repelled each other. Ugh. The rectangle I could live with but not this, so I carefully pried those magnets out and replaced them the right way. End result — a solid “B” job. I ended up burning two little dots, one on the corner of the board and one in the matching corner of the base, to make it easy to align.

Then the pieces

Now, my nephew takes this stuff very seriously, and I think he may choose to play with his own more traditional pieces. But I wanted the set to be complete, and I just wasn’t up to trying to turn a full set on the lathe in two different woods. After thinking about it quite a bit I ended up cutting out a set of discs based on some great Creative Commons art (hat tip CBurnett and note my derivative SVG files linked below are freely available for use and modification as well).

The black pieces are on MDF with a mahogany veneer; the white ones are on basswood — so there’s clear contrast between the sides. I made an extra set of each in case some got lost, and they can also be used on the reverse side for checkers (although he’s pretty much too cool for that). Unfortunately I neglected to get any pictures of these before sending the final piece off — oops!

And finally, the inset

The final touch was to line the bottom of the storage recess with an engraved cork sheet. The ones I use are 2mm thick and have adhesive on one side — really nice for these insets, bowl and vase bottoms, and so on. For this project the adhesive wasn’t quite enough, so I added some wood glue and used my daughter’s pie weights (blind bake to avoid a soggy bottom!) to hold it all in place until the glue dried.

That’s a wrap

And that’s it! Lara made even cooler stuff for our niece and it all went into a box for their birthdays. I think I like these hybrid projects the best, using the Glowforge to add details and components to a piece made with more natural materials and techniques. SVG links are below; I didn’t include the cork inlay because that was just a personal note … but good settings for the cork sheets are 1000/10% for engraving and 400/100% for an easy cut.

Thanks as always for reading; it’s almost as fun running back through the projects in my mind as it is making them in the first place. Except for all the mistakes… so many mistakes.

Health IT: More I, less T

“USCDI vs. USCDI+ vs. EHI vs. HL7 FHIR US Core vs. IPA. Definitions, similarities, and differences as you understand them. Go!” —Anonymous, Twitter

I spent about a decade working in “Health Information Technology” — an industry that builds solutions for managing the flow of healthcare information. It’s a big tent that boasts one of the largest trade shows in the world and dozens of specialized venture funds. And it’s quite diverse, including electronic health records, consumer products, billing and cost management, image management, AI and analytics of every flavor you can imagine, and more. The money is huge, and the energy is huger.

Real world progress, on the other hand, is tough to come by. I’m not talking about health care generally. The tools of actual care keep rocketing forward; the rate limiter on tests and treatments seems to be only our ability to assess efficacy and safety fast enough. But in the HIT world, it’s mostly a lot of noise. The “best” exits are mostly acquisitions by huge insurance companies willing to try anything to squeak out a bit more margin.

That’s not to say there’s zero success. Pockets of awesome actually happen quite often; they just rarely make the jump from “promising pilot” to actual daily use at scale. There are many reasons for this, but primarily it comes down to workflow and economics. In our system today, nobody is incented to keep you well or to increase true efficiency. Providers get paid when they treat you, and insurance companies don’t know you long enough to really care about your long-term health. Crappy information management in healthcare simply isn’t a technology problem. But it’s an easy and fun hammer to keep pounding the table with. So we do.

But I’m certainly not the first genius to recognize this, and the world doesn’t need another cynical naysayer, so what am I doing here? After watching another stream of HIT technobabble clog up my Twitter feed this morning, I thought it might be helpful to call out four technologies that have actually made a real difference over the last few years. Perhaps we’ll see something in there that will help others find their way to a positive outcome. Or maybe not. Let’s give it a try.

A. Patient Portals

Everyone loves to hate on patient portals. I sure did during the time I spent trying to make HealthVault go. After all, most of us interact with at least a half dozen different providers and we’re supposed to just create accounts at all of them? And figure out which one to use when? And deal with their circa 1995 interfaces? Really?

Well, yeah. That’s pretty much how business works on the web. Businesses host websites where I can view my transaction history, pay bills, and contact customer support. A few folks might use aggregation services to create a single view of their finances or whatever, but most of us just muddle through, more-or-less happily, using a gaggle of different websites that don’t much talk to each other.

There were three big problems with patient portals a few years ago:

  1. They didn’t exist. Most providers had some third-party billing site where you could pay because, money. But that was it.
  2. When they did exist, they were hard to log into. You usually had to request an “activation code” at the front desk in person, and they rarely knew what you were talking about.
  3. When they did exist and you could log in, the staff didn’t use them. So secure messaging, for example, was pretty much a black hole.

Regulation fixed #1; time fixed #2; the pandemic fixed #3. And it turns out that patient portals today are pretty handy tools for interacting with your providers. Sure, they don’t provide a universal comprehensive view of our health. And sure, the interfaces seem to belong to a long ago era. But they’re there, they work, and they have made it demonstrably easier for us to manage care.

Takeaway: Sometimes, healthcare is more like other businesses than we care to admit.

B. Epic Community Connect & Care Everywhere

Epic is a boogeyman in the industry — an EHR juggernaut. Despite a multitude of options, fully a third of hospitals use Epic, and that percentage is much larger if you look at the biggest health systems in the country. It’s kind of insane.

It can easily cost hundreds of millions of dollars to install Epic. Institutions often have Epic consultants on site full time. And nobody loves the interface. So what is going on here? Well, mostly Epic is just really good at making life bearable for CIOs and their IT departments. They take care of everything, as long as you just keep sending them checks. They are extremely paternalistic about how their software can be used, and as upside-down as that seems, healthcare loves it. Great for Epic. Less so for providers and patients, except for two things:

“Community Connect” is an Epic program that allows customers to “sublet” seats in their Epic installation to smaller providers. Since docs are basically required to have an EHR now (thanks regulation), this ends up being a no-brainer value proposition for folks that don’t have the IT savvy (or interest) to buy and deploy something themselves. Plus it helps the original customer offset their own cost a bit.

Because providers are using the same system here, data sharing becomes the default versus the exception. It’s harder not to share! And even non-affiliated Epic users can connect by enabling “Care Everywhere,” a global network run by Epic just for Epic customers. Thanks to these two things, if you’re served by the 33%+ of the industry that uses Epic, sharing images and labs and history is just happening like magic. Today.

Takeaway: Data sharing works great in a monopoly.

C. Open Notes

OpenNotes is one of those things that gives you a bit of optimism at a time when optimism can be tough to come by. Way back in 2010, three institutions (Beth Israel in MA, Geisinger in PA, and Harborview in WA) started a long-running experiment that gave patients completely unfettered access to their medical records. All the doctor’s notes, verbatim, with virtually no exception. This was considered incredibly radical at the time: patients wouldn’t understand the notes; they’d get scared and create more work for the providers; providers fearing lawsuits would self-censor important information; you name it.

But at the end of the study, none of that bad stuff happened. Instead, patients felt more informed and greatly preferred the primary data over generic “patient education” and dumbed-down summaries. Providers reported no extra work or legal challenges. It took a long time, but this wisdom finally made it into federal regulation last year. Patients now must be granted full access to their providers’ notes in electronic form at no charge.

In the last twelve months my wife had a significant knee surgery and my mom had a major operation on her lungs. In both cases, the provider’s notes were extraordinarily useful as we worked through recovery and assessed future risk. We are so much better educated than we would otherwise have been. An order of magnitude better than ever before.

Takeaway: Information already being generated by providers can power better care.

D. Telemedicine

It’s hard to admit anything good could have come out of a global pandemic, but I’m going to try. The adoption of telemedicine as part of standard care has been simply transformational. Urgent care options like Teladoc and Doctor on Demand (I’ve used both) make simple care for infections and viruses easy and non-disruptive. For years insurance providers refused “equal pay” for this type of encounter; it seems that they’ve finally decided that it can help their own bottom line.

Just as impactful, most “regular” docs and specialists have continued to provide virtual visits as an option alongside traditional face-to-face sessions. Consistent communication between patients and providers can make all the difference, especially in chronic care management. I’ve had more and better access to my GI specialists in the last few years than ever before.

It’s only quite recently that audio and video quality have gotten good enough to make telemedicine feel like “real” medicine. Thanks for making us push the envelope, COVID.

Takeaway: Better care and efficiency don’t have to be mutually exclusive.

So there we go. There are ways to make things better with technology, but you have to work within the context of reality, and they ain’t always that sexy. We don’t need more JSON or more standards or more jargon; we need more information and thoughtful integration. Just keep swimming!

Form and Function

I love reality TV about making stuff and solving problems. My family would say “to a fault.” Just a partial list of my favs:

I could easily spin a tangent about experimental archaeology and the absolutely amazing Ruth Goodman, but I’ll be restrained about that (nope): Secrets of the Castle, Tudor Monastery Farm, Tales from the Green Valley, Edwardian Farm, Victorian Farm, Wartime Farm.

ANYWAY.

Recently I discovered that old Project Runway seasons are available on the Roku Channel, so I’ve been binging through them; just finished season fourteen (Ashley and Kelly FTW). At least once per year, the designers are asked to create a look for a large ready-to-wear retailer like JCPenney or JustFab or whatever. These are my favorites because it adds a super-interesting set of constraints to the challenge — is it unique while retaining mass appeal, can it be reproduced economically, will it read well in an online catalog, etc., etc. This ends up being new for most of the participants, who think of themselves (legitimately) as “artists” and prefer to create fashion for fashion’s sake. Many of them have never created anything other than bespoke pieces and things often go hilariously off the rails as their work is judged against real world, economic criteria in addition to innovation and aesthetics. Especially because the judges themselves often aren’t able to express their own expectations clearly up front.

This vibe brings me back to software development in an enterprise setting (totally normal, right?). So many developers struggle to understand the context in which their work is judged. After all, we learned computer science from teachers for whom computer science itself is the end goal. We read about the cool new technologies being developed by tech giants like Facebook and Google and Amazon. All of our friends seem to be building microservices in the cloud using serverless backends and nosql map/reduce data stores leveraging deep learning and … whatever. So what does it mean to build yet another integration between System A and System B? What, in the end, is the point?

It turns out to be pretty simple:

  1. Does your software enable and accelerate business goals right now, and
  2. Does it require minimal investment to do the same in the future?

Amusingly, positive answers to both of these turn out to be pretty much 100% correlated not with “the shiniest new language” or “what Facebook is doing” but instead with beautiful and elegant code. So that’s cool; just like a great dress created to sell online, successful enterprise code is exceptional both in form and function. Nice!

But as easy as these tests seem, they can be difficult to measure well. Enterprises are always awash in poorly-articulated requirements that all “need” to be ready yesterday. Becoming a slave to #1 can seem like the right thing — “we exist to serve the business” after all — but down that road lies darkness. You’ll write crappy code that doesn’t actually do what your users need anyways, breaks all the time and ultimately costs a ton in refactoring and lost credibility.

Alas, #2 alone doesn’t work either. You really have no idea what the future is going to look like, so you end up over-engineering into some super-generalized false utopian abstraction that surely costs more than it should to run and doesn’t do any one thing well. And it is true that if your business isn’t successful today it won’t exist tomorrow anyways.

It’s the combination that makes the magic. That push and pull of building something in the real world now, that can naturally evolve over time. That’s what great engineering is all about. And it primarily comes down to modularity. If you understand and independently execute the modules that make up your business, you can easily swap them in and out as needs change.

In fact, that’s why “microservices” get such play in the conversation these days — they are one way to help enforce separation of duties. But they’re just a tool, and you can create garbage in a microservice just as easily as you can in a monolith. And holy crap does that happen a lot. Technology is not the solution here … modular design can be implemented in any environment and any language.

  • Draw your business with boxes and arrows on one piece of paper.
  • Break down processes into independent components.
  • Identify the core data elements, where they are created and which processes need to know about them.
  • Describe the conversations between components.
  • Implement (build or buy) each “box” independently. In any language you want. In any environment that works for you.
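
To make that concrete, here is a tiny, entirely hypothetical sketch of one of those “boxes” in Java. The names are invented; the point is that everything else talks to a small interface, so the implementation behind it can be swapped as needs change.

import java.math.BigDecimal;
import java.util.List;

// Hypothetical "billing" box: callers see only the interface (the conversation),
// never the implementation behind it.
interface BillingService {
    BigDecimal invoiceTotal(List<BigDecimal> lineItems);
}

// Today's implementation; tomorrow this could wrap a vendor product or a
// microservice without any caller changing.
class SimpleBillingService implements BillingService {
    @Override
    public BigDecimal invoiceTotal(List<BigDecimal> lineItems) {
        return lineItems.stream().reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}

public class ModuleSketch {
    public static void main(String[] args) {
        BillingService billing = new SimpleBillingService();
        System.out.println(billing.invoiceTotal(
            List.of(new BigDecimal("19.99"), new BigDecimal("5.00"))));
    }
}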

Respect both form and function, knitting together a whole from independent pieces, and you are very likely to succeed. Just like the best designers on Project Runway. And the pottery show. And the baking one. And the knife-making one. And the …

Focus

OK, let’s see if I can actually get this thing written.

It’s a little hard to focus right now. We’re almost two weeks into life with Copper the shockingly cute cavapoo puppy. He’s a great little dude, and life is already better with him around. But holy crap, it’s like having a human baby again — except that I’m almost thirty years older, and Furry-Mc-Fur-Face doesn’t use diapers, so it seems like every ten minutes we’re headed outside to do his business. Apparently it’s super-important to provide positive reinforcement for this, so if you happen to see me in the back yard at 3am waxing poetic about pee and/or poop, well, yeah.

What’s interesting about my inability to focus (better explanation of this in a minute) is that it’s not like I don’t have blocks of open time in which I could get stuff done. Copper’s day is an endless cycle of (a) run around like crazy having fun; (b) fall down dead asleep; (c) go to the bathroom; (d) repeat — with a few meals jammed in between rounds. Those periods when he sleeps can be an hour or more long, so there’s certainly time in the day to be productive.

And yet, in at least one very specific way, I’m not. “Tasks” get done just fine. The dishes and clothes are clean. I take showers. I even mowed the lawn the other day. I’m caught up on most of my TV shows (BTW Gold Rush is back!). But when I sit down to do something that requires focus, it’s a lost cause. Why is that?

Things that require focus require me to hold a virtual model of the work in my head. For most of my life the primary example of this has been writing code. But it applies to anything that requires creation and creativity — woodworking, writing, art, all of that stuff. These models can be pretty complicated, with a bunch of interdependent and interrelated parts. An error in one bit of code can have non-obvious downstream effects in another; parallel operations can bump into each other and corrupt data; tiny changes in performance can easily compound into real problems.

IMNSHO, the ability to create, hold and manipulate these mental models is a non-negotiable requirement to be great at writing and debugging code. All of the noise around TDD, automated testing, scrums and pair programming, blah blah blah — that stuff might make an average coder more effective I suppose, but it can’t make them great. If you “walk” through your model step by step, playing out the results of the things that can go wrong, you just don’t write bugs. OK, that’s bullsh*t — of course you write bugs. But you write very few. Folks always give me crap for the lack of automated tests around my code, but I will go toe-to-toe with any of them on code quality — thirty years of bug databases say I’ll usually win.

The problem, of course, is that keeping a complex model alive requires an insane amount of focus. And focus (the cool kids and my friend Umesh call it flow state) requires a ton of energy. When I was just out of school I could stay in the zone for hours. No matter what was going on in the other parts of my life, I could drop into code and just write (best mental health therapy ever). But as the world kept coming at me, distractions made it increasingly difficult to get there. Kids and family of course, but work itself was much more problematic. As I advanced in my career, my day was punctuated with meetings and budgets and managing and investors and all kinds of stuff.

I loved all of that too (ok maybe not the meetings or budgets), but it meant I had smaller and smaller time windows in which to code. There is nothing more antithetical to focus than living an interrupt-driven existence. But I wasn’t going to quit coding, so it forced me to develop two behaviors — neither rocket science — that kept me writing code I was proud of:

1. Code in dedicated blocks of time.

Don’t multitask, and don’t try to squeeze in a few minutes between meetings. It takes time to establish a model, and it’s impossible to keep one alive while you’re responding to email or drive-by questions. Establish socially-acceptable cues to let people know when you need to be left alone — one thing I wish I’d done more for my teams is to set this up as an official practice. As an aside, this is why open offices are such horsesh*t for creative work — sometimes you just need a door.

2. Always finish finished.

This is actually the one bit of “agile” methodology that I don’t despise. Break up code into pieces that you can complete in one session. And I mean complete — all the error cases, all the edges, all of it. Whatever interface the code presents — an API, an object interface, a UX, whatever — should be complete when you let the model go and move on to something else. If you leave it halfway finished, the mental model you construct “next time” will be just ever so slightly different. And that’s where edges and errors get missed.

Finishing finished improves system architecture too — because it forces tight, compact modularity. For example, you can usually write a data access layer in one session, but might need to leave the cache for later. Keeping them separate means that you’ll test each more robustly and you’ll be in a position to replace one or the other as requirements morph over time. As an extra bonus, you get a bunch of extra endorphin hits, because really finishing a task just feels awesome.
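
Here’s a contrived little sketch of that data-layer-versus-cache split (all names invented for illustration): each piece is small enough to finish completely in one sitting, and the cache layers on later without touching the storage code.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class FinishFinishedSketch {

    // Session one: the storage layer, finished completely on its own.
    interface RecordStore {
        void save(String id, String record);
        Optional<String> lookup(String id);
    }

    static class InMemoryRecordStore implements RecordStore {
        private final Map<String, String> data = new HashMap<>();
        @Override public void save(String id, String record) { data.put(id, record); }
        @Override public Optional<String> lookup(String id) {
            return Optional.ofNullable(data.get(id));
        }
    }

    // Session two, later: a cache that wraps any RecordStore without changing it.
    static class CachingRecordStore implements RecordStore {
        private final RecordStore inner;
        private final Map<String, String> cache = new HashMap<>();
        CachingRecordStore(RecordStore inner) { this.inner = inner; }
        @Override public void save(String id, String record) {
            cache.remove(id); // invalidate, then write through
            inner.save(id, record);
        }
        @Override public Optional<String> lookup(String id) {
            if (!cache.containsKey(id)) inner.lookup(id).ifPresent(v -> cache.put(id, v));
            return Optional.ofNullable(cache.get(id));
        }
    }

    public static void main(String[] args) {
        RecordStore store = new CachingRecordStore(new InMemoryRecordStore());
        store.save("42", "hello");
        System.out.println(store.lookup("42").orElse("missing"));
    }
}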

OK sure, sounds great. But remember my friend Copper? In just the few sessions I’ve taken to write this little post, he’s come by dozens of times to play or make a quick trip outside. And even when I’m not paying him active attention, part of my brain always has to be on alert. Sometimes, distractions become so intense that the reality is you just won’t be successful at creative work, because your brain simply won’t focus. It hurts to say that, but better to acknowledge it than to do a crap job. The good news is that these times are usually transient — Copper will only be a brand new puppy for a few weeks, and during that time I just have to live with lower productivity. It’s not the end of the world, so long as I understand what’s going on and plan for it.

If you’re a manager, you really need to be on the lookout for folks suffering from focus-killing situations. New babies, new houses or apartments, health problems, parent or relationship challenges, even socio-political issues can catch up with people before they themselves understand what’s going on. Watch for sudden changes in performance. Ask questions. Maybe they just need some help learning to compartmentalize and optimize their time. Or maybe they need you to lighten the load for a few weeks.

Don’t begrudge them that help! Supporting each other through challenging times forges bonds that pay back many times over. And besides, you’ll almost certainly need somebody to do the same for you someday. Pay it forward.

Mastering the Art of Soviet Cooking

I’ve just started keeping a list of the (good) books I’m reading. Check out the full list!

I grew up in Lexington, Massachusetts in the late 70s and 80s. Our schools did the whole shelter under your desk thing, but consensus amongst my friends was that we were close enough to a bunch of ICBM development facilities that we’d go quickly in the first strike anyways. Kind of creepily fatalistic for a bunch of ten year olds — but honestly probably easier to deal with than the insane active shooter drills kids have to live with today. At least we knew that the Bad Guys were far away and that our leaders were actually working on the problem. OK, maybe their real motivations weren’t universally pure but, once again, I was a kid.

Anyhoo, my point is that the actual Soviet Union was way more of a concept than a reality — if you asked me what daily life was like for folks in Russia I’d probably have guessed some weird mashup of Ivan Denisovich and Rosa Klebb. Which is why I was super-psyched when I stumbled upon Mastering the Art of Soviet Cooking a few weeks ago. The author, Anya von Bremzen, is just six years older than me, so we were pretty much twinsies separated by the iron curtain. Talk about a great read.

The book is loosely organized around a collection of dinners executed by Anya and her mom: one for each decade, starting with the twilight of Nicholas II and ending with the petro-oligarchs of the current millennium. But it’s really a family memoir and history of the USSR told through the lens of Anya, her mom, and her grandmother. Food plays an outsized role in the telling because, well, there never seems to have been much of it — but it’s not a sob story by any means. A few particularly interesting observations that stuck with me — hopefully they’ll draw you in enough to give it a read!

  • Topical given the recent passing of Mikhail Gorbachev — Putin isn’t the only one inside the former USSR who didn’t like the guy. Evaporation of the country aside, he seems to have been pretty universally viewed as an inept bungler while actually in power. He disrupted already shaky food distribution networks, his attempts to curb drinking just created another line for folks to stand in, and basically he just seems to have wandered around aimlessly watching stuff happen around him. NOT the perception we had from our vantage point in the West!
  • I 100% have to try and cannot stop thinking about Salat Olivier, which is basically a potato-vegetable-pickle salad with bologna (or chicken I guess, but I’m not turning down a salad with bologna).
  • The Soviet staple breaded meat patty kotleta? Actually an ersatz hamburger, borrowed from America after a trip in the 1930s by Anastas Mikoyan, People’s Commissar of the Soviet Food Industry. Apparently they liked Eskimo Pies too.

Cover to cover interesting, perspective-broadening, yummy stuff.

Share to Roku!

TLDR: if you watch TV on a Roku and have an Android phone, please give my new Share To Roku app a try! It’s currently in open testing; install it with this link on the web or this link on your Android device. The app is free, has no ads, saves no data and only makes network calls to Rokus on your local network. It acts as a simple remote, but much more usefully lets you “Share” show names from the web or other apps directly to the Roku search interface. I use it with TV Time and it has been working quite well so far — but I need broader real-world testing and would really appreciate your feedback.

Oh user interface development, how I hate you so. But my lack of experience with true mobile development has become increasingly annoying, and I really wanted an app to drive my Roku. So let’s jump back into the world of input events and user interface layouts and see if we can get comfy. Yeesh.

Share To Roku in a nutshell

I’ve talked about this before (here and here). My goal is to transition as smoothly as possible from finding a show on my phone (an Android, currently the Samsung Galaxy S21) to watching it on my TV. I keep my personal watchlist on an app called TV Time and that’s key, but I also want to be able to jump from a recommendation in email or a review on the web. So feature #1 is to create a “share target” that can accept messages from any app.

Armed with this inbound search text, the app will help the user select their Roku, ensure the TV power is on (if that particular Roku supports it), and forward the search. The app then will land on a page hosting controls to help navigate the last mile to the show (including a nice swipe-enabled directional pad that IMNSHO is way better than the official Roku app). This remote control functionality will also be accessible simply by running the app on its own. And that’s about it. Easy peasy!

All of the code for Share To Roku is up on github. I’ll do a final clean-up on the code once the testing period is over, but everything I’ve written here is free to use and adopt under an MIT license, so please take anything you find useful.

Android Studio and “Kotlin”

If you’ve worked with me before, you know that my favorite “IDE” is Emacs; I build stuff from the command line; and I debug using logs and jdb. But for better or worse, the only realistic way to build for Android is to at least mostly use Android Studio, a customized version of IntelliJ IDEA (you can just use IntelliJ too but that’s basically the same thing). AStudio generates a bunch of boilerplate code for even the simplest of apps, and encourages you to edit it all in this weird overlapping sometimes-textual-sometimes-graphical mode that at least in my experience generally ensures a messy final product. I’m not going to spend this whole article complaining about it, but it is pretty stifling.

Love me a million docked windows, three-deep toolbars and controls on every edge of the screen!

Google would also really prefer that you drop Java and instead use their trendy sort-of-language “Kotlin” to build Android apps. I’ve played this Java pre-compiler game before with Scala and Groovy, and all I can say is no thank you. I will never understand why people are so obsessed with turning code into a nest of side effects, just to avoid a few semicolons and brackets. At least for now they are grudgingly continuing to support Java development — so that’s where you’ll find me. On MY lawn, where I like it.

Android application basics

Components

The most important thing to understand about Android development is that you are not in charge of your process. There is no “main” and, while you get your own JVM in which to live, that process can come and go at pretty much any time. This makes sense — at least historically mobile devices have had to manage pretty limited memory and processing power, so the OS exerts a ton of control over the use of those resources. But it can be tricky when it comes to understanding state and threading in an app, and clearly a lot of bugs in the wild boil down to a lack of awareness here.

Instead of main, an Android app is effectively a big JAR that uses a manifest file to expose Component classes. The most common of these is an Activity, which effectively represents one user interface screen within the app. Other components include various types of background processes; I’m going to ignore them here. Share to Roku exposes two Activities, one for choosing a Roku and one for the search and remote interface. Each activity derives from an Android base class that defines a set of well-known entrypoints, each of which is called at different points in the process lifecycle.

Tasks and the Back Stack

But before we dig into those, two other important concepts: tasks and the back stack. This can get wildly complicated, but the super-basics are this:

  • A “task” is a thing you’re doing on the device. Most commonly tasks are born by opening an app from the home screen.
  • Each task maintains a “stack” of activities (screens). When you navigate to a new screen (e.g., open an email from a list of emails) a new activity is added to the top of the stack. When you hit the back button, the current (top) activity is closed and you return to the previous one.
  • Mostly each task corresponds to an app — but not always. For example, when you are in Chrome and you “share” a show to my app, a new Share To Roku activity is added to the Chrome task. Tasks are not the same as JVM processes!

Taken together, the general task/activity lifecycle starts to make sense:

  1. The user starts a new task by starting an app from the home screen.
  2. Android starts a JVM for that app and loads an instance of the class for the activity marked as MAIN/LAUNCHER in the manifest.
  3. The onCreate method of the activity is called.
  4. The user interacts with the UX. Maybe at some point they dip into another activity, in which case onPause/onResume and onStop/onStart are called as the new activity starts and finishes.
  5. When the activity is finished (the user hits the back button or closes the screen in some other way) the onDestroy method is called.
  6. When the system decides it’s a good time (e.g., to reduce memory usage), the JVM is shut down.

Of course, it’s not really that simple. For example, Android may just nuke your process at any time, without ever calling onDestroy — so you’ll need to put some thought into how and when to save persistent data. And depending on your configuration, existing activity instances may be “reused” (with a call to onNewIntent). But it’s a pretty good starting place.
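
To make the sequence a little more concrete, here’s a bare-bones activity in Java that just logs each of those entrypoints as they fire. This is illustrative only, not the actual Share To Roku code:

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.util.Log;

// Illustrative only -- a bare activity that logs its lifecycle calls.
public class LifecycleDemoActivity extends Activity {

    private static final String TAG = "LifecycleDemo";

    @Override protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Log.d(TAG, "onCreate: inflate layout and wire up views here");
    }

    @Override protected void onStart()  { super.onStart();  Log.d(TAG, "onStart"); }
    @Override protected void onResume() { super.onResume(); Log.d(TAG, "onResume"); }
    @Override protected void onPause()  { super.onPause();  Log.d(TAG, "onPause"); }
    @Override protected void onStop()   { super.onStop();   Log.d(TAG, "onStop"); }

    @Override protected void onDestroy() {
        super.onDestroy();
        Log.d(TAG, "onDestroy: not guaranteed to be called; save state earlier");
    }

    // Called instead of onCreate when an existing instance is reused.
    @Override protected void onNewIntent(Intent intent) {
        super.onNewIntent(intent);
        Log.d(TAG, "onNewIntent");
    }
}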

Intents

Intents are the means by which users navigate between activities on an Android device. We’ve actually already seen an intent in action, in step #2 above — MAIN/LAUNCHER is a special intent that means “start this app from the beginning.” Intents are used for every activity-to-activity transition, whether that’s explicit (e.g., when an email app opens up a message details activity in response to a click in a message list) or implicit (e.g., when an app opens up a new, pre-populated text message without knowing which app the user has configured for SMS).

Share to Roku uses intents in both ways. Internally, after picking a Roku, ChooseRokuActivity.shareToRoku instantiates an intent to start the ShareToRokuActivity. Because that internal navigation is the only way to land on ShareToRokuActivity, its definition in the manifest sets the “exported” flag to false and doesn’t include any intent-filter elements.
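
That internal hop is the simple, explicit flavor of intent. A simplified sketch of the idea, as a method inside the chooser activity (the extra names here are invented; the real code is in ChooseRokuActivity.shareToRoku on github):

// Inside ChooseRokuActivity, once the user has picked a device (sketch only).
private void launchShareScreen(String rokuAddress, String searchText) {
    Intent intent = new Intent(this, ShareToRokuActivity.class);
    intent.putExtra("rokuAddress", rokuAddress); // hypothetical extra names
    intent.putExtra("searchText", searchText);
    startActivity(intent);
}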

Conversely, the entry for ChooseRokuActivity in the manifest sets “exported” to true and includes no less than three intent-filter elements. The first is our old friend MAIN/LAUNCHER, but the next two are more interesting. Both identify themselves as SEND/DEFAULT filters, which mark the activity as a target for the Android Sharesheet (which we all just know as “sharing” from one app to another). There are two of them because we are registering to handle both text and image content.

Wait, image content? This seems a little weird; surely we can’t send an image file to the Roku search API. That’s correct, but it turns out that when the TV Time app launches a SEND/DEFAULT intent, it registers the content type as an image. There is an image (a little thumbnail of the show), but there is also text included, which we use for the search. There isn’t a lot of consistency in the way applications prepare their content for sharing; I foresee a lot of app-specific parsing in my future if Share To Roku gets any real traction with users.

ChooseRokuActivity

OK, let’s look a little more closely at the activities that make up the app. ChooseRokuActivity (code / layout) is the first screen a user sees; a simple list of Rokus found on the local network. Once the user makes a selection here, control is passed to ShareToRokuActivity which we’ll cover next.

The list is a ListView, which gives me another opportunity to complain about modern development. Literally every UX system in the world has a control for simple displays of text-based lists. Android’s ListView is just this — a nice, simple control to which you attach an Adapter that holds the data. But the Android Gods really would rather you don’t use it. Instead, you’re supposed to use RecyclerView, a fine but much more complicated view. It’s great for large, dynamic lists, but way too much for most simple text-based UX lists. This kind of judgy noise just bugs me — an SDK should make common things as easy as possible. Sorry not sorry, I’m using the ListView. Anyways, the list is wrapped in a SwipeRefreshLayout which provides the gesture and feedback to refresh the list by pulling down.
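
For the record, the ListView wiring really is about as simple as UX code gets. A generic sketch of a method inside the activity (R.id.rokuList and onRokuSelected are stand-ins, not the exact app code):

// Sketch: attach a simple ArrayAdapter to a ListView and react to taps.
private void populateRokuList(java.util.List<String> rokuNames) {
    ListView list = findViewById(R.id.rokuList);
    ArrayAdapter<String> adapter = new ArrayAdapter<>(
        this, android.R.layout.simple_list_item_1, rokuNames);
    list.setAdapter(adapter);
    list.setOnItemClickListener((parent, view, position, id) ->
        onRokuSelected(adapter.getItem(position)));
}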

The activity populates the list of Rokus using static methods in Roku.java. Discovery is performed by UDP broadcast in Ssdp.java, a stripped down version of the discovery classes I wrote about extensively in Anyone out there? Service discovery with SSDP, WSD, other acronyms. The Roku class maintains a static (threadsafe) list of the Rokus it finds, and only searches again when asked to manually refresh. This is one of those places where it’s important to be aware of the process lifecycle; the list is cached as long as our JVM remains alive and will be used in any task we end up in.

Take a look at the code in initializeRokus and findRokus (called from onCreate). If we have a cache of Rokus, we populate it directly into the list for maximum responsiveness. If we don’t, we create an ActivityWorker instance that searches using a background thread. The trick here is that each JVM process has exactly one thread dedicated to managing all user interface interactions — only code running on that thread can touch the UX. So if another thread (e.g., our Roku search worker) needs to update user interface components (i.e., update the ListView), it needs help.

There are a TON of ways that people manage this; ActivityWorker is my solution. A caller implements an interface with two methods: doBackground is run on a background thread, and when that method completes, the code in doUx runs on the UI thread (thanks to Activity.runOnUiThread).  These two methods can share member variables (e.g., the “rokus” set) without worrying about concurrency issues — a nice clean wrapper for a common-but-typically-messy situation.
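
If you’re curious, the shape of that pattern boils down to something like this (a stripped-down sketch; the real ActivityWorker class in the repo is a bit more complete):

import android.app.Activity;

// Stripped-down sketch of the ActivityWorker idea: run work off the UI thread,
// then hop back onto the UI thread to touch the UX.
public class ActivityWorkerSketch {

    public interface Worker {
        void doBackground(); // runs on a background thread
        void doUx();         // runs on the UI thread after doBackground completes
    }

    public static void run(Activity activity, Worker worker) {
        new Thread(() -> {
            worker.doBackground();
            activity.runOnUiThread(worker::doUx);
        }).start();
    }
}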

ShareToRokuActivity

The second activity (code / layout) has more UX going on, and I’ll admit that I appreciated the graphical layout tools in AStudio. Designing even a simple interface that squishes and stretches reasonably to fit on so many different device sizes can be a challenge. Hopefully I did an OK job, but testing with emulators only goes so far — we’ll see as I get a few more testers.

If the activity was started from a sharing operation, we pick up that inbound text as “extra” data that comes along with the Intent object (the data actually comes to us indirectly via ChooseRokuActivity, since that was the entry point). Dealing with this search text is definitely the most unpleasant part of the app, because it comes in totally random and often unhelpful forms. If Share To Roku is going to become a meaningfully useful tool I’m going to have to do some more investment here.
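
The extraction itself is simple; it’s deciding what to do with the text that gets ugly. Roughly, as a helper inside the activity (a sketch; the app’s real parsing is messier):

// Sketch: pull the shared text out of an inbound SEND intent. Even "image/*"
// shares (like TV Time's) usually carry an EXTRA_TEXT string alongside the image.
private String extractSearchText(Intent intent) {
    if (intent != null && Intent.ACTION_SEND.equals(intent.getAction())) {
        String text = intent.getStringExtra(Intent.EXTRA_TEXT);
        if (text != null) return text.trim();
    }
    return ""; // no inbound share; behave as a plain remote
}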

A rare rave from me — the Android Volley HTTP library (as used in Http.java) is just fantastic. It works asynchronously, but always makes its callback on the UX thread. That is, it does automatically what I had to do manually with ActivityWorker. Since most mobile apps are really just UX sitting atop some sort of HTTP API, this makes life really really easy. Love it!
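
For anyone who hasn’t used it, the basic shape looks like this (a sketch; the keypress endpoint is part of the Roku ECP API, but the class and names here are just illustrative):

import android.util.Log;
import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.StringRequest;

// Sketch of a fire-and-forget Roku ECP keypress with Volley; both callbacks
// arrive on the UI thread, so no hand-rolled thread marshalling is needed.
// The RequestQueue is created once elsewhere with Volley.newRequestQueue(context).
public class RokuKeypressSketch {
    public static void send(RequestQueue queue, String rokuBase, String key) {
        String url = rokuBase + "/keypress/" + key; // e.g. http://192.168.86.47:8060
        queue.add(new StringRequest(Request.Method.POST, url,
            response -> Log.d("Roku", "sent " + key),
            error -> Log.w("Roku", "keypress failed", error)));
    }
}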

The bulk of this activity is just buttons and lists that cause fire-and-forget calls to the Roku, except for the directional pad that takes up the center of the screen. CirclePad.java is a custom control (sorry, custom “View”) that lets the user click a center button and indicate direction with either clicks in the N-S-E-W regions or (way cooler) directional swipes. A long press on the control sends a backspace, which makes entering text on the TV a bit more pleasant. Building this control felt like a throwback to Windows 3.0 development. Set a clip region, draw some lines and circles and icons. The gesture recognition is simultaneously amazingly easy (love the “fling” handler) and oddly prehistoric (check out my manual identification of a “long” press).
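
For flavor, here’s roughly what that gesture plumbing looks like in a custom View. This sketch leans on GestureDetector’s built-in onLongPress rather than the manual timing the real CirclePad uses, and the class and callback names are invented:

import android.content.Context;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.View;

// Sketch of the approach: a custom View that turns flings into direction
// callbacks and long presses into backspace.
public class DirectionPadSketch extends View {

    public interface Listener {
        void onDirection(String dir);
        void onBackspace();
    }

    private final GestureDetector detector;

    public DirectionPadSketch(Context context, Listener listener) {
        super(context);
        detector = new GestureDetector(context,
            new GestureDetector.SimpleOnGestureListener() {
                @Override
                public boolean onFling(MotionEvent e1, MotionEvent e2,
                                       float vx, float vy) {
                    if (Math.abs(vx) > Math.abs(vy)) {
                        listener.onDirection(vx > 0 ? "Right" : "Left");
                    }
                    else {
                        listener.onDirection(vy > 0 ? "Down" : "Up");
                    }
                    return true;
                }
                @Override
                public void onLongPress(MotionEvent e) {
                    listener.onBackspace();
                }
            });
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        return detector.onTouchEvent(event) || super.onTouchEvent(event);
    }
}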

Publishing to the Play store

Back in the mid 00’s I spent some time consulting for Microsoft on a project called Windows Marketplace (wow there is a Wikipedia article for everything). Marketplace was sponsored by the Windows marketing team as an attempt to highlight the (yes) shareware market, which had been basically decimated by cross-platform browser-based apps. It worked a lot like any other app store, with some nice features like secure backup of purchased license keys (still a thing with some software!!!). It served a useful role for a number of years — fun times with a neat set of people (looking at you Raj, Vikram, DeeDee, Paul, Matt and Susan) and excellent chaat in Emeryville.

Anyways, that experience gave me some insight into the challenges of running and monetizing a directory of apps developed by everyone from big companies to (hello) random individuals. Making sure the apps at least work some of the time and don’t contain viruses or some weirdo porn or whatever. It’s not easy — but Google and Apple have really done a shockingly great job. Setting up an account on the Play Console is pretty simple — I did have to upload an image of my official ID and pay a one-time $25 fee but that’s about it. The review process is painful because each cycle takes about three or four days and they often come back with pretty vague rejections. For example, “you have used a word you may not have the rights to use” … which word is, apparently, a secret? But I get it.

So anyways — my lovely little app is now available for testing. If you’ve got an Android device, please use the links below to give it a try. If you have an Apple device, I’m sorry for many reasons. I will definitely be doing some work to better manipulate inbound search strings to provide a better search result on the Roku. I’m a little torn as to whether I could just do that all in-app, or if I should publish an API that I can update more easily. Probably the latter, although that does create a dependency that I’m not super-crazy about. We’ll see.

Install the beta version of Share To Roku with this link on the web or this link on your Android device.

Anyone out there? Service discovery with SSDP, WSD, other acronyms.

Those few regular readers of this stuff may remember What should we watch tonight, in which I used the Roku API to build a little web app to manage my TV watchlist. Since then I’ve found TV Time, which is waaay better and even tells me how many days until the next season of Cobra Kai gets here (51 as of this writing). But what it doesn’t do is launch shows automatically on my TV, and yes I’m lazy enough to be annoyed by that. So I’ve been planning a companion app that will let me “share” shows directly to my Roku using the same API I used a few months ago.

This time, I’d like the app to auto-discover the TV, rather than asking the user to configure its IP address manually. Seems pretty basic — the Roku ECP API describes how it uses “Simple Service Discovery Protocol” to enable just that. But man, putting together a reliable implementation turned out to be a bear, and sent me tumbling down a rabbit hole of “service discovery” that was both fascinating and frankly a bit appalling.

Come with me down that rabbit hole, and let’s learn how those fancy home devices actually try to find each other. It’s nerd-tastic!

Can I get your number?

99% of what happens on networks is conversations between two devices that already know each other, either directly by address (like a phone number), or by a name that they use to look up an address (like using a phone book). And 99% of the time this works great — between “google.com” and QR codes and the bookmark lists we’ve all built up, there’s rarely any need to even think about addresses. But once in a while — usually when you’re trying to set up a printer or some other smarty-pants device on your home network — things get a bit more complicated.

Devices on your network are assigned a (basically) arbitrary address by your wifi router, and they don’t have a name registered anywhere, so how do other devices find them to start a conversation? It turns out that there are a pile of different ways — most of which involve either multicast or broadcast UDP messaging, two similar techniques that enable a device to initiate a conversation without knowing exactly who it’s talking to. Vox Clamantis in Deserto as it were.

Side note: for this post I’m going to limit examples to IPv4 addressing, because it makes my job a little easier. The same concepts generally apply to IPv6, except that there is no true “broadcast” with v6 because they figured out that multicast could do all the same things more efficiently, but close enough.

An IP broadcast message is received by all devices on the local network that are listening on a given port. Typically these use the special “limited broadcast” address 255.255.255.255 (there’s also a “directed” broadcast address for each subnet which could theoretically be routed to other networks, but that’s more detail than matters for us). An IP multicast message is similar, but is received only by devices that have subscribed to (or joined) the multicast’s special “group” address. Multicast addresses all have 1110 as their most significant bits, which translates to addresses from 224.0.0.0 to 239.255.255.255.

Generally, these messages are restricted to your local network — that is, routers don’t send them out onto the wider Internet (there are exceptions for complex corporate-style networks, but whatever). This is a good thing, because the cacophony of the whole world getting all of these messages would most definitely take down the Internet. It’s also safer, as we’ll see a bit later.
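
In Java terms the difference is mostly which socket class you use and whether you join a group. Here’s a minimal listener sketch using the SSDP group we’ll get to in a minute; it just prints whatever multicast traffic shows up:

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Minimal sketch: join a multicast group and print whatever arrives.
// 239.255.255.250:1900 is the SSDP group discussed in the next section.
public class MulticastListenSketch {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.255.255.250");
        try (MulticastSocket socket = new MulticastSocket(1900)) {
            socket.joinGroup(group); // deprecated in newer JDKs; fine for a demo
            byte[] buf = new byte[2048];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                System.out.println("from " + packet.getAddress() + ":");
                System.out.println(new String(packet.getData(), 0, packet.getLength()));
            }
        }
    }
}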

Roku and SSDP

OK, back to the main thread here. Per the ECP documentation, Roku devices use Simple Service Discovery Protocol for discovery. SSDP defines a multicast address (239.255.255.250) and port (1900), a set of messages using what old folks like me still call RFC 822 format, and two interaction patterns for discovery:

  1. A client looking for devices sends a multicast M-SEARCH message with the type ssdp:discover, setting the ST header to either ssdp:all (everybody respond!) or a specific service type string (the primary Roku type is roku:ecp). AFAIK there is no authoritative list of ST values; you just kind of have to know what they are. Devices listening for these requests respond directly to the client with a unicast HTTP OK response that includes (thank you) addressing information.
  2. Clients can also listen on the same multicast address for NOTIFY messages of type ssdp:alive or ssdp:byebye; devices send these out when they are turned on and off. It’s a good way to keep a list of devices accurate, but implementations are spotty so it really needs to be used in combination with #1.
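
For pattern #1, the wire format is refreshingly readable. The exchange looks roughly like this (the response values below are illustrative, modeled on the Roku output you'll see in a minute):

M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 3
ST: roku:ecp

HTTP/1.1 200 OK
Cache-Control: max-age=3600
ST: roku:ecp
USN: uuid:roku:ecp:2N006D062746
LOCATION: http://192.168.86.47:8060/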

An SSDP client in Java

Seems simple, right? I mean, OK, the basics really are simple. But a robust implementation runs into a ton of nit-picky little gotchas that, all together, took me days to sort out. The end result is on github and can be built/tested on any machine with java, git and maven installed (be sure to fix up slashes on Windows):

git clone https://github.com/seanno/shutdownhook.git
cd shutdownhook/toolbox
mvn clean package
java -cp target/toolbox-1.0-SNAPSHOT.jar \
    com.shutdownhook.toolbox.discovery.ServiceDiscovery \
    ssdp

This command line entrypoint just sends out an ssdp:discover message, displays information on all the devices that respond, and loops listening for additional notifications. Somebody on your network is almost sure to respond; in particular for Roku you can look for a line something like this:

+++ ALIVE: uuid:roku:ecp:2N006D062746 | roku:ecp | http://192.168.86.47:8060/ | (/192.168.86.47:1900)

Super cool! If you open up that URL in a browser you’ll see a bunch more detail about your Roku and the interfaces it supports.

Discovery Protocol Abuse

If you don’t see any responses at all, it’s likely that your firewall is blocking UDP messages either to or from port 1900 — my Linux Mint distribution does both by default. Mint uses UncomplicatedFirewall which means you can see blocking activity (as root) in /var/log/ufw.log and open up UDP port 1900 with commands like:

sudo ufw allow from any port 1900 to any proto udp
sudo ufw allow from any to any port 1900 proto udp

Before you do this, you should be aware that there is some potential for bad guys to do bad stuff — pretty unlikely, but still. Any protocol that can “amplify” one message into many (because multiple devices can respond) carries some risk of a denial-of-service attack. That can be very simple: a bad guy on your network just fires off a ton of M-SEARCH requests, prompting a flood of responses that overwhelm the network as a whole. Or it can be nastier: combined with “ip spoofing,” a bad guy can redirect amplified responses to an unsuspecting victim.

Really though, it’s pretty theoretical for a home network — routers don’t generally route these messages, so it’d have to be an inside job anyways. And once you’ve got a bad actor inside your network, they can probably do a lot more damage than just slowing it down. YMMV, but I’m not personally super-worried about this particular attack. Just please don’t confuse my blasé assessment here with the risks of the related Universal Plug-and-Play (UPnP) protocol, which are quite real.

Under the Covers

There is quite a bit to talk about in the code here. Most of the hard work is in UdpServiceDiscovery.java (I’ll explain this abstraction later), which uses two sockets and two worker threads:

Socket/Thread #1 (DISCOVERY) sends M-SEARCH requests and receives back HTTP OK responses. A request is sent when the thread first starts up and can be repeated either on demand or on a timer (by default every twenty minutes).

It’s key to understand that while the request here is a multicast message, the responses are sent back directly as a unicast. I didn’t implement the server side specifically, but you can see how this works in the automated tests — the responding device extracts the source address and port from the multicast and just replies with a standard UDP unicast message. This is important for us because only one process on a computer can actively listen for unicast messages on a port. And on many systems, somebody is probably already doing that on port 1900 (for example, the Windows service “SSDP Discovery”). So if we want to reliably hear HTTP OK responses, we need to be using an unused, automatically-assigned port for this socket.

Socket/Thread #2 (NOTIFICATION) is for receiving unsolicited NOTIFY multicasts. This socket uses joinGroup to register interest and must be opened on port 1900 to work correctly.
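
The NOTIFICATION side is a similarly minimal sketch, again for illustration only (newer JDKs prefer the joinGroup overload that takes a NetworkInterface, but the old one still works):

MulticastSocket notifications = new MulticastSocket(1900); // must be 1900 to hear NOTIFY traffic
notifications.joinGroup(InetAddress.getByName("239.255.255.250"));
byte[] buffer = new byte[4096];
while (true) {
    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
    notifications.receive(packet);
    System.out.println(new String(packet.getData(), 0,
        packet.getLength(), StandardCharsets.US_ASCII));
}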

forEachUsefulInterface is an interesting little bit of code. It’s used both for sending requests and joining multicast groups, ensuring that the code works in a system that is connected to multiple network interfaces (typically not the case at home, but better safe than sorry). Remember that multicasts are restricted to a local network — so if you’re attached to multiple networks, you’ll need to send out one message on each of them. The realities of coordinating interfaces with addresses can get pretty complicated, but I think this gets it right. Let me know if you think I’ve missed something!
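
I won't reproduce the real method here, but the general shape of that kind of walk is something like the following (my guess at the filter criteria, so the actual code may differ):

Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces();
while (interfaces.hasMoreElements()) {
    NetworkInterface iface = interfaces.nextElement();
    if (!iface.isUp() || iface.isLoopback() || !iface.supportsMulticast()) continue;
    for (InterfaceAddress address : iface.getInterfaceAddresses()) {
        if (address.getAddress() instanceof Inet4Address) {
            // send the M-SEARCH (or join the multicast group) on this interface/address pair
        }
    }
}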

The class also tries to identify and ignore duplicate UDP messages. Dealing with dups just comes with the territory when working with UDP — and while the nature of the SSDP protocol means it generally doesn’t hurt anything to re-process them, it’s just icky. UdpServiceDiscovery tries to filter them out using message hashing and a FIFO queue of recently-received messages. You can tune this behavior (or turn it off) through config; default is a two-second lookback.  
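
The mechanism is about what you'd guess; a sketch like this (field and constant names invented here, see UdpServiceDiscovery for the real version):

// requires java.util.ArrayDeque and java.util.Arrays
private final ArrayDeque<long[]> recentMessages = new ArrayDeque<>(); // { hash, receivedMillis }
private static final long LOOKBACK_MILLIS = 2 * 1000; // matches the two-second default

private boolean isDuplicate(byte[] message, int length) {
    long now = System.currentTimeMillis();
    long hash = Arrays.hashCode(Arrays.copyOf(message, length));
    // age out anything older than the lookback window
    while (!recentMessages.isEmpty() && now - recentMessages.peekFirst()[1] > LOOKBACK_MILLIS) {
        recentMessages.removeFirst();
    }
    for (long[] entry : recentMessages) {
        if (entry[0] == hash) return true;
    }
    recentMessages.addLast(new long[] { hash, now });
    return false;
}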

Wait, is that Everyone? Enter WS-Discovery.

If you look closely you’ll see that UdpServiceDiscovery really isn’t specific to SSDP at all — all of the protocol-specific stuff is in Ssdp.java and transits through yet another class ServiceDiscovery.java. What the heck is going on here? The short story is that SSDP doesn’t return most printers, and Microsoft always needs to be special. The longer story requires a quick aside into the insanity that was “WS*”.

Back in 1999 and 2000, folks realized that HTTP would be great for APIs as well as web pages — and two very different approaches emerged. First by a few months was SOAP (and its fast-follower WSDL), which tried to be transport-independent (although 99.9% of traffic was over HTTP) and was all about defined, strongly typed interfaces. The foil to SOAP was REST — a much lighter and Internettish way to think about machine-to-machine interaction.

SOAP was big company, REST was scrappy startup. And nobody was more SOAPy than Microsoft. They had a whole group (I’m looking at you Bill, and your buddy John too) that did nothing but make up abstract, overly-complicated, insane SOAP-based “standards” informally known as “WS*” that nobody understood or needed. Seriously, just check out this poster (really, click that link, zoom in and scroll for awhile, it’s shocking). Spoiler alert: REST crushed it.

Anyways — one of these beasts was WS-Discovery, a protocol for finding devices on a network that does exactly the same thing as SSDP. Not “generally the same thing,” but exactly the same thing. The code that works for SSDP works for WSD too, just swap out the HTTP-style metadata for XML. Talk about reinventing the wheel, yeesh. But at least this explains the weird object hierarchy in my discovery classes.
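
Before getting to that hierarchy, here's a taste of what "the same thing, but XML" looks like on the wire. A WSD Probe (the moral equivalent of an M-SEARCH) is a SOAP envelope multicast to 239.255.255.250 on port 3702; I'm writing this from memory of the spec, so treat the details as approximate:

<soap:Envelope
    xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
    xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
    xmlns:wsd="http://schemas.xmlsoap.org/ws/2005/04/discovery">
  <soap:Header>
    <wsa:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</wsa:To>
    <wsa:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</wsa:Action>
    <wsa:MessageID>urn:uuid:PLACEHOLDER</wsa:MessageID> <!-- a fresh UUID per message -->
  </soap:Header>
  <soap:Body>
    <wsd:Probe/> <!-- an empty Probe means "everybody respond" -->
  </soap:Body>
</soap:Envelope>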

Since these all use callback interfaces and sometimes you just want an answer, I added OneShotServiceDiscovery that wraps up Ssdp and Wsd like this (where “4” below is the number of seconds to wait for UDP responses to come in):

Set<ServiceInfo> infosSSD = OneShotServiceDiscovery.ssdp(4);
Set<ServiceInfo> infosWSD = OneShotServiceDiscovery.wsd(4);

There’s an entrypoint for this too, so to get a WSD device list you can use (the example is my Epson ET-3760):

java -cp target/toolbox-1.0-SNAPSHOT.jar \
    com.shutdownhook.toolbox.discovery.OneShotServiceDiscovery \
    wsd
...
urn:uuid:cfe92100-67c4-11d4-a45f-e0bb9e278967 | wsdp:Device wscn:ScanDeviceType wprt:PrintDeviceType | http://192.168.5.228:80/WSD/DEVICE | (/192.168.5.228:3702)

Actually, a full WSD implementation is more complicated than this. The protocol defines a “discovery proxy” — a device on the network that can cache device information and reduce network traffic. A proxy advertises itself by sending out HELLO messages with type d:DiscoveryProxy; clients are supposed to switch over to use this service when it’s present. So so much complexity for so so little benefit. No thanks.

Don’t forget the broadcast bunch

And we’re still not done. SSDP and WSD cover a bunch of devices and services, but they still miss a lot. Most of these use some sort of custom broadcast approach. If you poke around in UdpServiceDiscovery you’ll find a few special case bits to handle the broadcast case — we disable the NOTIFICATION thread altogether, and just use the DISCOVERY thread/socket to send out pings and listen for responses. The Misc class provides an entrypoint for this; you can find my Roomba using broadcast port 5678 like this:

java -cp target/toolbox-1.0-SNAPSHOT.jar \
    com.shutdownhook.toolbox.discovery.Misc \
    255.255.255.255 5678 irobotmcs
...
============ /192.168.4.48:5678
{"ver":"3","hostname":"Roomba-3193C60472324700","robotname":"Bellvoomba","ip":"192.168.4.48","mac":"80:91:33:9D:E2:16","sw":"v2.4.16-126","sku":"R960020","nc":0,"proto":"mqtt","cap":{"pose":1,"ota":2,"multiPass":2,"pp":1,"binFullDetect":1,"langOta":1,"maps":1,"edge":1,"eco":1,"svcConf":1}}

Sometimes you don’t even need a ping. For example, devices that use thes Tuya platform just sit there and constantly broadcast their presence on port 6666 or 6667:

java -cp target/toolbox-1.0-SNAPSHOT.jar \
    com.shutdownhook.toolbox.discovery.Misc \
    255.255.255.255 6667 tuya
...
============ / 192.168.5.253:60913
????1r???8?W??⌂???▲????H??? _???r?9???3?o*?jwz#?$?Z?H?¶?Q??9??r~  ?U

OK, that’s not super useful — it’s my smart ceiling fan, but apparently not all of their devices broadcast on 6666, and the messages on port 6667 are encrypted (using a global key, duh — this code shows how to decrypt them). This kind of thing annoys me because it doesn’t really secure anything and just makes life harder for everyone. I’m going to register my protest by not writing that code myself; that’ll show them.

In any case, you get the point — there are a lot of ways that devices try to make themselves discoverable. I’ve even seen code that just fully scans the network — mini wardialers that check every possible address for specific open ports (an approach that won’t survive the eventual v4-v6 address transition!). It’d be nice if this was more standardized, but I’m happy to live with a little chaos in return for the innovations that pop up every day. It’ll settle out eventually. Maybe.

Now back to that Roku app, which will use something like 5% of the code I wrote for this post. Just one of the reasons I’m a fan of retirement — I can burn cycles on any rabbit hole I damn well please. Getting the code right can be tricky, so perhaps it’ll prove useful to some other nerd out there. And as always, please let me know if you find a bug!

Bentwood Ring v0.8

A few weeks ago “bentwood rings” started showing up on my Pinterest feed amongst the usual fare of woodturning and off-grid power systems (that recommendation engine knows its business). I’d been thinking vaguely about wooden rings for awhile, so went down the Internet rabbit hole to see what was up. There’s a ton of good stuff on YouTube, but this step-by-step guide was my favorite.

Bonus shot of the silicone ring Lara got me!

Bentwood rings are made by wrapping a very thin bit of wood (think veneer or shavings) around a form, gluing the overlapping layers together with CA (aka “Krazy”) glue. This basically creates plywood, with the grain running in circles around the ring. A bit complicated, but much stronger than a ring cut from solid wood, which inevitably has narrow bits with perpendicular grain that snap easily. Of course, all that glue doesn’t hurt the stability either.  

Most online tutorials use purchased wood veneers, and you can get some great looking stuff that way. But one of the main reasons I love to build with wood is that I can know exactly where it comes from — material I’ve harvested myself, from a place that has meaning to me. So I chose the shavings method, starting with a branch from one of the beautiful maple trees in our Bellevue yard.

The end result isn’t an unqualified success — there are clear flaws in the finish and it’s a bit bulkier than I’d have liked. But it was a ton of fun to make, and with some practice I think I’ll be able to get it right. So I’m calling this first attempt “version 0.8” which leaves me room to take another shot. Lots of lessons to talk about!

Sourcing and forming the shavings

The first step was to secure the maple branch (green, about a foot and a half long and one inch in diameter) in my bench vise and use my new spokeshave to first flatten it out and then pull off some long curls. It’s important to keep these as even as possible so that they’ll wrap neatly — shaving is easy and fun, so take a bunch and then pick out the best. They should be thin but not translucent, and wider than the final ring will be, to account for splitting at the edges.

If your spokeshave starts to “bounce” or catch as you cut, try reversing direction — you always want to be cutting “downhill” so that you’re not bumping the blade into end grain. This is a really neat tool and just super-satisfying to use when it’s cutting cleanly.

Use medium-grit sandpaper (like 240) to smooth out any bumps or nicks in the shavings. Then sand a shallow bevel into each end so that they taper off to almost nothing. Both of these are to ensure that the layers wrap together as closely as possible to prevent crappy-looking glue pockets.

You’ll want to wrap the shavings against their natural curl — honestly I’m not sure that this matters, but it makes the smoother side of the cut face out and that’s how “they” say to do it. Your shavings may be flexible enough to just wrap them as-is, but I chose to soak mine for about fifteen seconds in hot water, loosely roll them in a loop against the curl, tape them in that position and leave them overnight. This reversed the curl so that the actual glue-up took slightly less manual dexterity.

Making the blank

Next up was wrapping and gluing the shavings into the ring blank. Find something round that is just slightly smaller than the diameter of the ring you want to make (sanding to fit is easier than trying to build up layers of finish inside) — I used a socket from my toolbox wrapped with Teflon plumbing tape. The Teflon worked OK, but next time I’m going to try plain old Scotch tape which seems like it’ll absorb a bit less of the glue. I’m also going to try a stepped ring mandrel instead of the socket, just to take some guesswork out of the sizing.

I used Starbond thin CA glue for this step, which seems to be pretty standard. First, wrap the shaving once as tightly as possible around the form. Add plenty of glue to about a quarter inch bit of shaving and then press and hold until it sets, about ten seconds or so. Soak the next quarter inch with glue, press that bit down, and keep repeating this until you’ve wrapped around the form three or four times (or a little bigger, whatever you like). Don’t be stingy with the glue, you really want it to get into the wood fibers.

Getting the wrap finished takes a bit of finger gymnastics and despite using vinyl gloves I ended up with glue all over my hands — ah the price of art. Do your best to keep the wrap even and flat so that there are no visible pockets of glue. Secure the wrap with some masking tape, drip in a little more glue from each edge, and let it cure fully — overnight should be more than enough.

Shaping and Sanding

sanded but unfinished

This part was the most fun for me. After twisting the ring off of the form, I used a thin pull saw to cut off the rough edges and my spindle/belt sander to clean up the inside and outside of the ring. I found a dowel to hold the ring securely on the lathe and then shaped the ring to what I thought was a good profile (turns out it was a little too bulky, oh well).

From there it was all about sanding through the grits — 120, 240, 400, 600, 800, 1000. I did the same on the inside of the ring by hand, which wasn’t nearly as mind-numbing as you’d expect. The surface area is small so you only need a couple of minutes with each one; I use “one pop song per grit” as my rule of thumb.

I was really pleased with the ring at this point. A little large, but the grain was quite pretty and the ring felt strong and smooth.

Finishing it Up

The last step was to put a glossy protective finish on the ring. Turns out that the most common finish is … wait for it … even more CA glue. Because I was worried about the ring sticking in this step, I bought a one-inch HDPE rod and shaped it to fit the ring (CA glue doesn’t stick well to HDPE). In the end, though, it was pretty easy to keep the glue from dripping off of the ring, so that probably wasn’t necessary.

The technique here was to get the ring rotating on the lathe at its slowest setting (about 45 rpm), apply a drop of medium-thickness CA glue to the ring, and use a toothpick to spread it over the surface, making sure to reach the edges. Let that drop cure, and repeat until you like the look of the finish. The same approach works for the inside of the ring, but since I didn’t have an easy way to lathe-mount the ring with the inside exposed, I just rotated it in my left hand while applying and spreading glue with the right. It worked just fine.

But here’s where I really messed up. First, and I could really kick myself for this, a tiny bit of black HDPE dust was left on the mounting rod after shaping it, and a couple flakes got stuck in my finish. Tiny flakes, but they look like bits of dirt — infuriating. The bigger issue is that I added too much finish with each step (2-3 drops rather than 1), and didn’t let it cure sufficiently between coats. The end result is quite a bit of clouding and a few bubbles deep in the finish. Bummer.

Forging ahead, I re-sanded the ring through the same grits plus 2000 and 3000 at the end, then buffed it out with a plastic polish and buffing wheel. The surface is beautiful, glassy and strong, but the imperfections underneath are still quite apparent. Ah well, that’s why it’s version 0.8.

Next time will be better! I took some red alder shavings from a fallen tree at the Whidbey beach and will have another go in the next week or two (and will get some better in-progress pictures). There are some great advanced techniques to try as well — adding inlays, combining different types of veneer, and so on. There is always something new to learn and try. Hope you’ll give it a shot, and please let me know if you do.

Regulated software for software people

If you’ve built software at any scale, you know how the game works. You get requirements from somewhere — usually they’re wrong or at best incomplete. You do your best to implement and test them, and you ship. Users vote with their clicks as to what features work and which don’t — i.e., they refine the requirements for you — and then you repeat the process. Eventually you converge to a set of features that work, then you do it all over again with a new set of requirements.

If your cycle time is long, that’s called “waterfall” and folks judge you for it, which is sometimes fair but not always. If your cycle time is short, it’s called “agile.” Agile does have some advantages: user feedback gets incorporated more quickly, and doing things in smaller chunks generally results in fewer bugs. A lot of people have written a lot of boring religious articles about the differences here, but in reality most folks fall somewhere in the middle, and it’s usually fine. If you’re building the next Tinder or Candy Crush or whatever, that’s pretty much all the “process” you need to know.

But what if you’re building something for healthcare or another industry where the software is “regulated?” Oh my. “Regulation” is scary and mysterious, and people keep talking about going to jail. There’s a whole industry that as far as I can tell is built around paying protection money to consultants. So it’s not surprising that “regulated” in a software job description is a turn-off for lots of people. Still, it’s the cost of doing business for a lot of important things in the world, so let’s take a look at what’s really going on there.

An important caveat: I have never worked for the FDA or any regulatory agency. I’m not a lawyer. I’m just a guy who has written a bunch of regulated software and believes the advice I’ve received from the “regulatory industry” has been almost entirely crap (a very few people break that mold; you know who you are). My hope here is to give you a straightforward, clear-cut introduction so that you can enter into the process with enough confidence to avoid being bullied by silly hyperbole about going to jail or other craziness. The actual regulators I’ve known just expect you to do your best to build safe, reliable products, and they get that it’s a hard job. Understanding a few key things will not only keep you compliant, it will help you build better stuff. Honest.

The Big Secret

Most of my big-time regulatory work has been with FDA Class 1 and 2 medical devices. “Classification” is a risk stratification based on your “intended use” — a super-important bit of text that precisely defines what your device is supposed to do and how it’s supposed to be used. Nailing these down can be a fraught and expensive process all on its own.

But we’re getting ahead of ourselves here — a common problem in this space! In this piece I’m not going to go deep into the details of any specific regulation — because (cover your ears regulatory folks) they don’t really matter to you. OK, maybe they matter if you’re in a startup and you’re the one that has to actually do the filings … but that’s probably a horrible idea anyways. From a software perspective, pretty much all safety-focused regulation looks exactly the same and can be satisfied by adhering to a set of relatively small and, dare I say, pretty reasonable requirements.

This is a bigger secret than you might think. Thanks to the impenetrable nature of regulatory jargon and text, folks that can claim actual experience are in high demand. It’s to their advantage and job security to make it seem super complicated — nobody wants to go to jail, so we all just keep paying for vague explanations and double-talk. Of course that’s a broad brush —  but it’s more right than wrong.

Software first, not Regulation first

This is my biggest regret from my last regulated gig. I had stepped into a higher risk class of device, and we had hired a set of regulatory folks with no direct software experience. The company was very focused on agency approval, so there was a ton of pressure to get it “right.” Not good excuses, but it is what it is — I let our initial software processes be driven by regulation first. That’s not to say we didn’t build excellent software, because we did — but we did it with a very high burden on the team that honestly was mostly just wasted time and energy. I was lucky to lose only a couple of good people to the noise before we got things more-or-less straightened out.

Your job is to build great software. Finish reading this article, understand what you really need to be able to prove and document, talk to folks you trust, and then use your own software-focused best practices to meet the regs. It is the job of your regulatory team to take what you produce and “package” it into the right form for filings and auditors. This packaging takes work and a depth of understanding not many folks in the industry bring, so you’re probably going to get pushback. Stand your ground. Over time, it is for sure worth adding automation to generate different forms of documentation (this is where we finally ended up) and that’s great — but be confident that, if you execute well, you need not be bound by crazy redundant busywork.

Safety-based regulation in three bullets

Software regulation isn’t really intended to protect against bad or fraudulent actors — there are other laws for that. Instead, the point is to ensure that the specific risks and benefits associated with a product are understood and visible. From a software perspective, that means three things:

  1. You know what it’s supposed to do.
  2. It does what you expect.
  3. You’ve considered the risks.

The first two of these should look pretty familiar. #1 just says that you have a correct and detailed specification. #2 says you tested the software against that specification. These might need to be a bit more formal than you’re used to, but if you don’t have a good starting point, your product probably sucks anyways. You likely already use JIRA or some other system to track features/stories and bugs, so if you can generate the following reports you’re basically done with these first two requirements:

  • A list of features, each with sufficient detail to be implemented.
  • For each feature, a list of test cases that cover the feature.
  • For each test case, a record of each time it was executed and passed or failed.

Formal documentation of test cases and results — and especially links back to the specific features they exercise — can be pretty thin at many companies, where dedicated QA resources are hard to come by. If your tests are automated, a great start is just to log feature IDs along with each test case you run. Together with code coverage reports, that gets you a long way towards compliance. For manual test cases, you will need some way to keep track of things — I’ve used the Zephyr plugin for JIRA with good success, but there are tons of options out there.
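
As one example of what that could look like in practice, here's a hedged sketch using a made-up @Feature annotation on JUnit tests; a small reflection pass over the test classes can then generate the feature-to-test report:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import org.junit.jupiter.api.Test;

@Retention(RetentionPolicy.RUNTIME)
@interface Feature { String value(); } // invented for illustration

class AlertingTests {
    @Feature("FEAT-123") // hypothetical ID for "abnormal results alert the medical team"
    @Test
    void abnormalResultSendsAlert() {
        // ...exercise the feature and assert that the alert actually goes out...
    }
}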

Risk Assessment

Documented risk assessment (#3) is new to a lot of folks. The concept can take a bit of getting used to, because if you’re good at your job you’ve been assessing and addressing risks implicitly all along. Is there enough contrast to read this text in high-light situations? Will users understand what “accept” means in this situation? What happens if the user doesn’t scroll down to read the whole message? And so on. By the time you sit down for a “formal” risk analysis, you’ve probably taken care of most issues already.

And yet it’s a requirement to document a formal risk assessment for each feature. The best way I’ve found to manage this is to add a custom field to the requirements management system for risks, and ask folks to just make notes there along the way. Towards the end of the development phase, have the team sit down and clean them up and spend a bit of timing thinking about anything missed. That meeting tends to be pretty quick and actually serves as a nice double-check. Ultimately you’ll want to document four things for each risk identified:

  • What could happen.
  • The potential impact.
  • Some idea of how likely and how severe this would be (more on this below).
  • What you’ve done to mitigate or reduce the risk.

There are tons of rubrics for codifying “likelihood” and “severity” — red zone / green zone kind of stuff. I’m torn about these — there is definitely validity to the balance between risks that might cause actual physical harm but are so unlikely to occur that it would be silly to spend time on them, versus risks that have almost no real impact but could happen so frequently that it justifies extra work. But trying to get too precise is really quite hopeless — I’d just estimate low/medium/high on both dimensions and leave it at that.

Potential bugs are not “risks.” Of course bad code can cause all kinds of problems — but trying to capture that is a useless shell game. That “risk” applies to every feature, and the only mitigation is to develop and test better. Documenting this is useless. Software risks generally come down to user interface confusion, algorithms that break down given extreme inputs, that kind of thing. Honest, it gets easier once you do it for awhile.

Lastly, “documentation” or “user education” is a totally reasonable way to mitigate some risks. Sometimes something important is just complicated, and the user cannot be expected to understand how to use a feature without training and/or documentation. That’s OK! Just don’t use it as a crutch for bad design — your job after all is to build a helpful product, not an obtuse one. A trick that can increase the effectiveness of “mitigation by documentation” is to put the documentation directly into the user experience. For example, the first time the user clicks a particular button you might proactively pop up a dialog that can be dismissed once acknowledged.

The “Manufacturing” gotcha

Hopefully so far you’re feeling OK about all of this. A few tweaks to very standard practices and you’re pretty much capturing all the raw material you need. Woot! Ah, but wait.

Almost for sure you’re building modern software that runs as a service (in the cloud or otherwise), releasing new functionality on a regular basis — and that can make documentation a lot trickier. Maybe I should have mentioned this earlier, but I didn’t want to scare you off. Don’t worry, it’ll be ok.

Traditional medical devices are “things” — tongue depressors, MRI machines, cancer drugs, and so on. A great deal of up-front thought and effort and cost goes into figuring out what to make and how to make it. Prototypes are created. Factories and factory lines are set up. Raw materials are sourced. And then when you’re done, you flip a switch and stamp out thousands or millions of copies, exactly the same way, for years. Within that context, safety-based regulation makes a ton of sense. It expects to see “a” design record for each device. Auditors come in and ask to see “the” documents for a given product.

This worked ok when software shipped on a CD in a box. But when it runs as a service, updated and improved over many iterations in near real-time, things can get messy pretty quickly. Note this isn’t about “waterfall” vs. “agile,” it’s about frequent, incremental releases over time vs. one-and-done “manufacturing.”

My first stab at this didn’t work super-well. We basically just wrapped up each release, no matter how small, into its own set of documents — features, risks, test cases and results. When we did our first independent audit (internal, thankfully) the auditor asked for the documents for product X. I handed over dozens of these release packages and smiled confidently. We did a cool demo. They then said OK, you showed me this feature that does Y, where are the test results that prove it works? Seems like a reasonable request.

Yikes. Like almost every feature, this one had evolved over time. There were probably twenty stories related to it, scattered across dozens of releases, each one incremental, like “add option Z to the menu.” Was all the information there? Sure, you could figure it out if you really understood the product and had a couple of days to sort through it all. But answering that seemingly simple question in real time in the auditing room? No chance. And while I did say that it was the responsibility of your regulatory team to “package” your raw documents into something palatable to an auditor, this kind of synthesis is way too much to ask.

I’m sure there are many ways to address this issue, but we settled on something we called a “component document.” This was a single, authoritative, narrative document that could be used as the starting point for anybody trying to understand what the product did. It explained its purpose, the general approach to building it, and each major feature or feature area, assigning a unique identifier to each. The document was meant to be largely stable — that is, day-to-day features and bug fixes did not require changes to the text. An example might be a component-level feature that says “abnormal results will cause an alert to be sent to the medical team;” a corresponding release-focused requirement might describe specific alert conditions and channels for notification (like email). Adding SMS alerting would be a new requirement in a new release, but wouldn’t require updates to the component document.

By explicitly associating every requirement with a “component-level” feature, it became trivial to assemble coherent documentation packages. There were other benefits as well — for example, we found that if a requirement triggered a text change in the component document, it almost always warranted a full test pass rather than something more targeted. And the component document was a fantastic training vehicle for new engineers and even end users. It certainly isn’t always the case, but this time the regulatory framework really did directly help us improve our development process. Love that!

Almost there… honest!

At this point you should feel confident that you can build software in a way that satisfies the intent of safety-focused regulation. You understand and can explain what you’re building, you have tested it appropriately, you have assessed risks to health and safety — and you have the documents to prove it in an audit setting. This is really good, and frankly notably better than many self-claimed “compliant” software shops I’ve seen in real life. There is no orange jumpsuit in your future (at least for this reason).

That said, there are always more concepts in the regulatory framework you should be thinking about and evolving towards. None of these are all that challenging, and you should at least be prepared to explain to auditors how you think about them:

“V”erification vs “V”alidation

“V&V” is often used as a synonym for “testing” — which is pretty close, but obscures an important distinction between the two that you’ll need to address:

  • “Verification” ensures that features work as they are specified. It makes no judgment about whether the features do the “right” thing, just that they meet the spec.
  • “Validation” ensures that features do what users need them to do. They are really a test of the specifications themselves.

In an ideal world, verification tests are executed through automation and/or your engineering team, while validation tests are done by actual end users. In reality, most end-users aren’t qualified to do a good job, and you risk wasting time on “test theater” that doesn’t really prove anything. You’ll have to find your own way here; a reasonable approach might be to (a) make sure end-users are formally involved in the up-front process of creating specifications, and (b) label your test cases as “verification” or “validation” to show you’ve been thoughtful about both concepts.

Design Documents

Significant architectural decisions should be recorded in “design documents.” These are just engineer-focused documents in any form that help describe “how” the product is built. Think about the kind of documentation you’d like to show to a new developer on the team before they jump into code. Associating design documents with component-level features is a great way to keep a handle on how it all fits together.

Third-party software and/or “SOUP”

If your product incorporates COTS (“Commercial / Off The Shelf”) software, that also needs to be validated. Some vendors may be able to help you with this, and some may already have a base level validation that you can start from. But in most cases, you’ll want to show that the acquired software does what you need it to do. This is typically a “one-and-done” exercise where you (a) document your requirements and (b) write and execute test cases to show the product satisfies those requirements.

This applies to third-party libraries you use as well, and even to your own internal software that may have been developed “way back when” without any documentation at all (sometimes called SOUP, for “Software of Unknown Provenance”). The same process applies — write some requirements, write some tests, run the tests, and have that documentation ready for auditors.

Surveillance

“Bugs found in the wild” is a fantastic measure of software quality (hint: fewer is better). Your regulatory team should be managing formal “complaints” (escalating to you as needed), but keeping track of which bugs were found post-release is a great practice that will serve you well. A quarterly meeting to discuss trends and identify problematic features shows that you’re taking it seriously, so keep meeting notes and be prepared to show a graph of incidents and their severity over time.

Approvals and Signatures

This is an area that really bugs me. Regulatory folks can get super hung-up on ink-based signatures and extreme measures to ensure that documentation is “tamper-proof.” Full stop, I think this is a waste of time. Regulation is not intended to stop a sophisticated bad actor — it’s supposed to help folks trying to do the right thing. The burden of security theater can be stupid high. My take:

  • The software you use to manage requirements and tests should require login and keep track of who creates/updates items in the system.
  • Don’t delete stuff; instead use “inactive” or “obsolete” statuses to keep irrelevant or mistaken entries out of everyday view.
  • Make sure that the appropriate people (especially end users) mark their approval of requirements and tests in the system by clicking a button or writing a comment, and be able to show a record of that.
  • Don’t go overboard.

A final note about audits and auditors

You’re never going to be “done” tweaking and evolving this stuff. Auditors are paid to find issues, and no matter how great you are, they’re going to find some. Don’t sweat it and don’t be defensive. Listen, create a plan to address what they find, and then — this is the real key — follow through. When that auditor comes back they’re going to assess your response, and the worst thing you can do is just ignore them. If you disagree, start a dialogue and you’re sure to find a reasonable compromise.

Bottom line — bureaucracy is bureaucracy, and there is for sure burden associated with complying with regulation. Some of that burden is stupid, and some of it helps. Believe it or not, the actual regulators really do understand this, and are always working to make it simpler (even right now). Your biggest challenge will be the “industry” of high-priced consultants who are incented only to keep you worried and paying their hourly fees. Don’t freak out. Put in a little work to understand the real intent, honestly work to incorporate the key concepts — and you’ll be just fine.