It’s Always a Normalization Problem

Heads up, this is another nerdy one! ShareToRoku is available on the Google Play store. All of the client and server code is up on my github under MIT license; I hope folks find it useful and/or interesting.

Algorithms are the cool kids of software engineering. We spend whole semesters learning to sort and find stuff. Spreadsheet “recalc” engines revolutionized numeric analysis. Alignment algorithms power advances in biotechnology.  Machine learning algorithms impress and terrify us with their ability to find patterns in oceans of data. They all deserve their rep!

But as great as they are, algorithms are hapless unless they receive inputs in a format they understand — their “model” of the world. And it turns out that these models are really quite strict — data that doesn’t fit exactly can really gum up the works. As engineers we often fail to appreciate just how “unnatural” this rigidity is. If I’m emptying the dishwasher, finding a spork amongst the silverware doesn’t cause my head to explode — even if there isn’t a “spork” section in the drawer (I probably just put it in with the spoons). Discovering a flip-top on my toothpaste rather than a twist cap really isn’t a problem. I can even adapt when the postman leaves packages on top of the package bin, rather than inside of it. Any one of these could easily stop a robot cold (so lame).

It’s easy to forget, because today’s models are increasingly vast and impressive, and better every day at dealing with the unexpected. Tesla’s Autopilot can easily be mistaken for magic — but as all of us who have trusted it to exit 405 North onto NE 8th know, the same weaknesses are still hiding in there under the covers. But that’s another story.

Anyhoo, the point is that our algorithms are only useful if we can feed them data that fits their models. And the code that does that is the workhorse of the world. Maybe not the sexiest stuff out there, but almost every problem we encounter in the real world boils down to data normalization. So you’d better get good at it.

Share to Roku (Release 6)

My little Android TV-watching app is a great (in miniature) example of this dynamic at work. If you read the original post, you’ll recall that it uses the  Android “share” feature to launch TV shows and movies on a Roku device. For example, you can share from the TV Time app to watch the latest episode of a show, or launch a movie directly from its review at the New York Times. Quite handy, but it turns out to be pretty hard to translate from what apps “share” to something specific enough to target the right show. Let’s take a look.

First, the “algorithm” at play here is the code that tells the Roku to play content. We use two methods of the Roku ECP API for this:

  • Deep Linking is ideal because it lets us launch a specific video on a specific channel. Unfortunately the identifiers used aren’t standard across channels, and they aren’t published — it’s a private language between Roku and their channel providers. Sometimes we can figure it out, though — more on this later.
  • Search is a feature-rich interface for jumping into the Roku search interface. It allows the caller to “hint” the search with channel identifiers and such, and in certain cases will auto-start the content it finds. But it’s hard to make it do the right thing. And even when it’s working great it won’t jump to specific episodes, just seasons.
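For the curious, the two ECP calls sketch out roughly like this in Java. The endpoint shapes (/launch and /search/browse on port 8060) follow Roku's public ECP documentation; the device IP and the channel/content ids are invented examples, not values from the app:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class Ecp {

    // Deep link: launch specific content on a specific channel.
    static String launchUrl(String rokuIp, String channelId,
                            String contentId, String mediaType) {
        return String.format("http://%s:8060/launch/%s?contentId=%s&mediaType=%s",
                rokuIp, channelId, enc(contentId), enc(mediaType));
    }

    // Search: hand the Roku a keyword, optionally hinted with a provider id.
    static String searchUrl(String rokuIp, String keyword, String providerId) {
        String url = String.format("http://%s:8060/search/browse?keyword=%s",
                rokuIp, enc(keyword));
        if (providerId != null) url += "&provider-id=" + enc(providerId);
        return url;
    }

    static String enc(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(launchUrl("192.168.4.20", "12", "80095357", "series"));
        System.out.println(searchUrl("192.168.4.20", "Project Runway", null));
    }
}
```

Both are simple HTTP POSTs against the device on the local network; the hard part, as we'll see, is filling in those parameters.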
The model we normalize into looks like this:

public class RokuSearchInfo {

    public static class ChannelTarget {
        public String ChannelId;
        public String ContentId;
        public String MediaType;
    }

    public String Search;
    public String Season;
    public String Number;
    public List<ChannelTarget> Channels;
}

Armed with this data, it’s pretty easy to slap together the optimal API request. You can see it happening in ShareToRokuActivity.resolveAndSendSearch — in short, if we can narrow down to a known channel we try to launch the show there, otherwise we let the Roku search do its best. Getting that data in the first place is where the magic really happens.
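Condensed to its skeleton, the choice resolveAndSendSearch makes looks roughly like this (the method here is a hypothetical stand-in, though the field names mirror the model above):

```java
public class ResolveSketch {

    static class ChannelTarget {
        String ChannelId;   // Roku channel id
        String ContentId;   // channel-internal content id, if known
        String MediaType;
    }

    // Returns the ECP path we'd hit for this resolution: deep link when we
    // know both the channel and its content id, otherwise Roku search.
    static String chooseEndpoint(java.util.List<ChannelTarget> channels, String search) {
        if (channels != null) {
            for (ChannelTarget t : channels) {
                if (t.ChannelId != null && t.ContentId != null) {
                    String media = (t.MediaType != null) ? t.MediaType : "series";
                    return "/launch/" + t.ChannelId
                         + "?contentId=" + t.ContentId + "&mediaType=" + media;
                }
            }
        }
        return "/search/browse?keyword=" + search; // URL-encoding omitted
    }
}
```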

A Babel of Inputs

The Android Sharesheet is a pretty general-purpose app-to-app sharing mechanism, but in practice it’s mostly used to share web pages or social media content through text or email or whatever. So most data comes through as unstructured text, links and images. Our job is to make sense of this and turn it into the most specific show data we can. A few examples:

App / Source | Shared Data | Ideal Target
1. TV Time Episode Page | Show Me the Love on TV Time | “Trying” Season 1 Episode 6 on AppleTV+
2. Chrome Movie Review | (No text selection) | “Strange World” on Disney+
3. Chrome Wikipedia page (movie title selected) | “Joe Versus the Volcano” | “Joe Versus the Volcano” on multiple streaming services
4. YouTube Video | | “When you say nothing at all” cover by Reina del Cid on YouTube
5. Amazon Prime Movie | Hey I’m watching Black Adam. Check it out now on Prime Video! | “Black Adam” on Amazon Prime
6. Netflix Series Page | Seen “Love” on Netflix yet? | “Love” Season 1 Episode 1 on Netflix
7. Search text entered directly into ShareToRoku | Project Runway Season 5 | “Project Runway” Season 5 on multiple streaming services

Pipelines and Plugins

All but the simplest normalization code typically breaks down into a collection of rules, each targeted at a particular type of input. The rules are strung together into a pipeline, each doing its little bit to clean things up along the way. This approach makes it easy to add new rules into the mix (and retire obsolete ones) in a modular, evolutionary way.

After experimenting a bit (a lot), I settled on a two-phase approach to my pipeline:

  1. Iterate over a list of “parsers” until one reports that it understands the basic format of the input data.
  2. Iterate over a list of “refiners” that try to enhance the initial model by cleaning up text, identifying target channels, etc.

Each of these is defined by a standard Java interface and strung together in code. A fancier approach would be to instantiate and order the implementations through configuration, but that seemed like serious overkill for my little hobby app. If you’re working with a team of multiple developers, or expect to be adding and removing components regularly, that calculus probably looks a bit different.
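Stripped to its essentials, the two-phase pipeline might look like this — the interface and class names here are hypothetical stand-ins, not the ones in the repo:

```java
import java.util.List;

public class PipelineSketch {

    static class Info { String search; }                  // stand-in model

    interface Parser  { Info parse(String input); }       // null = "not mine"
    interface Refiner { void refine(Info info); }         // mutates in place

    static Info run(String input, List<Parser> parsers, List<Refiner> refiners) {
        Info info = null;

        // Phase 1: the first parser that recognizes the input wins.
        for (Parser p : parsers) {
            info = p.parse(input);
            if (info != null) break;
        }
        if (info == null) { info = new Info(); info.search = input.trim(); }

        // Phase 2: every refiner runs, every time.
        for (Refiner r : refiners) r.refine(info);
        return info;
    }
}
```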

This split between “parsers” and “refiners” wasn’t obvious at first. Whenever I face a messy normalization problem, I start by writing a ton of if/then spaghetti, usually in pseudocode. That may seem backwards, but it can be hard to create an elegant approach until I lay out all the variations on the table. Once that’s in front of me, it becomes much easier to identify commonalities and patterns that lead to an optimal pipeline.


“Parsers” in our use case recognize input from specific sources and extract key elements, such as the text most likely to represent a series name. As of today there are three in production:

TheTVDB Parser

TV Time and a few other apps are powered by TheTVDB, a community-driven database of TV and movie metadata. The folks there were nice enough to grant me access to the API, which I use to recognize and decode TV Time sharing URLs (example 1 in the table). This is a four-step process:

  1. Translate the short URL into their canonical URL. E.g., the short URL in example 1 resolves to a canonical episode page on thetvdb.com.
  2. Extract the series (375903) and/or episode (7693526) identifiers from the URL.
  3. Use the API to turn these identifiers into show metadata and translate it into a parsed result.
  4. Apply some final ad-hoc tweaks to the result before returning it.
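Step 2 of the process above might look something like this — note the URL shape is my assumption based on the identifiers mentioned, not TheTVDB’s documented format:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TvdbUrlSketch {

    // Matches ".../series/<seriesId>" with an optional "/episodes/<episodeId>".
    static final Pattern EPISODE =
        Pattern.compile("/series/(\\d+)(?:/episodes/(\\d+))?");

    // Returns { seriesId, episodeId } (episodeId may be null), or null on no match.
    static String[] extractIds(String url) {
        Matcher m = EPISODE.matcher(url);
        if (!m.find()) return null;
        return new String[] { m.group(1), m.group(2) };
    }
}
```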

All of this data is cached using a small SQLite database so that we don’t make too many calls directly to the API. I’m quite proud of the toolbox implementation I put together for this, but that’s an article for another day.

UrlParser takes advantage of the fact that many apps send a URL that includes their own internal show identifiers, and often these internal identifiers are the same ones they use for “Deep Linking” with Roku. The parser is configured with entries that include a “marker” string — a unique URL fragment that identifies a particular app or source — together with a Roku channel identifier and some extra sugar not worth worrying about. When the marker is found and an ID extracted, this parser can return enough information to jump directly into a channel. Woo hoo!
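A minimal sketch of the marker idea, with an invented entry (the fragment, channel id, and pattern here are illustrative, not the app’s real configuration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MarkerSketch {

    // One configuration entry: a unique URL fragment, the Roku channel it
    // implies, and a pattern that lifts out the app's internal content id.
    record Marker(String fragment, String rokuChannelId, Pattern idPattern) {}

    static final Marker NETFLIX_LIKE = new Marker(
        "netflix.com/title/", "12", Pattern.compile("/title/(\\d+)"));

    // Returns "channelId:contentId" on a hit, null otherwise.
    static String match(Marker m, String url) {
        if (!url.contains(m.fragment())) return null;
        Matcher mt = m.idPattern().matcher(url);
        return mt.find() ? m.rokuChannelId() + ":" + mt.group(1) : null;
    }
}
```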

This last parser is kind of a last gasp that tries to clean up share text we haven’t already figured out. For example, it extracts just the search text from a Chrome share, and identifies the common suffix “SxEy” where x is a season and y is an episode number. I expect I’ll add more in here over time but it’s a reasonable start.
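One plausible version of that “SxEy” check — the real SyntaxParser in the repo may well differ:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SeasonEpisodeSketch {

    // "Some Show S2E5" -> search text plus season/episode numbers.
    static final Pattern SXEY =
        Pattern.compile("(.*?)\\s+S(\\d{1,2})\\s*E(\\d{1,3})\\s*$",
                        Pattern.CASE_INSENSITIVE);

    // Returns { cleanSearch, season, episode }, or null if no suffix found.
    static String[] split(String text) {
        Matcher m = SXEY.matcher(text.trim());
        if (!m.matches()) return null;
        return new String[] { m.group(1), m.group(2), m.group(3) };
    }
}
```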


Once we have the basics of the input — we’ve extracted a clean search string and maybe taken a first cut at identifying the season and channels — a series of “refiners” is called in turn to improve the results. Unlike parsers, which short-circuit after a match is found, all the refiners run every time.

A ton of the content we watch these days is created by the streaming providers themselves. It turns out that there are folks who keep lists of all these shows on Wikipedia (e.g., this one for Netflix). The first refiner simply loads up a bunch of these lists and then looks at incoming search text for exact matches. If one is found, the channel is added to the model.

As a side note, the channel is actually added to the model only if the user has that channel installed on their Roku (as passed up in the “channels” query parameter). The same show is often available on a number of channels, and it doesn’t make sense to send a Roku to a channel it doesn’t know about. If the show is available on multiple installed channels, the Android UX will ask the user to pick the one they prefer.
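The exact-match-plus-installed-channels idea, condensed (the show lists and channel names here are invented for illustration; the real lists are scraped from Wikipedia):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class WikiRefinerSketch {

    // Per-channel lists of originals, keyed by lowercased title.
    static final Map<String, Set<String>> SHOWS_BY_CHANNEL = Map.of(
        "netflix", Set.of("love", "stranger things"),
        "disney",  Set.of("strange world"));

    // Returns channels that (a) carry the show and (b) the user has installed.
    static List<String> channelsFor(String search, Set<String> installed) {
        String key = search.trim().toLowerCase();
        List<String> hits = new ArrayList<>();
        for (var e : SHOWS_BY_CHANNEL.entrySet())
            if (installed.contains(e.getKey()) && e.getValue().contains(key))
                hits.add(e.getKey());
        return hits;
    }
}
```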

Figuring out the next refiner — which simply asks Roku’s own search interface what it knows — was a turning point for the app. It makes the results far more accurate, which of course makes sense since they are sourced from Roku itself. I’ve left the WikiRefiner in place for now, but suspect I can retire it with really no decrease in quality. The logs will show if that’s true or not after a few weeks.

In any case, this refiner passes the search text up to the same search interface the Roku itself uses. It is insanely annoying that this API doesn’t return deep link identifiers for any service other than the Roku Channel, but it’s still a huge improvement. By restricting results to “perfect” matches (confidence score = 1), I’m able to almost always jump directly into a channel when appropriate.

I’m not sure Roku would love me calling this interface so often — but I do cache results to keep the noise down, so hopefully they’ll just consider it a win for their platform (which it is).

At the very end of the pipeline, it’s always good to have a place for last-chance cleanup. For example, TVDB knows “The Great British Bake Off,” but Roku in the US knows it as “The Great British Baking Show.” This refiner matches the search string against a set of rules that, if found, allow the model to be altered in a manual way. These make the engineer in me feel a bit dirty, but it’s all part of the normalization game — the choice is whether to feel morally superior or return great results. Oh well, at least the rules are in their own configuration file.
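Boiled down, the fixup table is just a manual rewrite map (this entry mirrors the Bake Off example above; presumably the real rules file carries more, and richer, rules):

```java
import java.util.Map;

public class FixupSketch {

    // Manual title rewrites, keyed by lowercased search text.
    static final Map<String, String> FIXUPS = Map.of(
        "the great british bake off", "The Great British Baking Show");

    // Returns the rewritten title on a hit, the original text otherwise.
    static String apply(String search) {
        return FIXUPS.getOrDefault(search.trim().toLowerCase(), search);
    }
}
```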

Hard-Fought Data == Real Value

This project is a microcosm of most of the normalization problems I’ve experienced over the years. It’s important to try to find some consistency and modularity in the work — that’s why pipelines and plugins and models are so important. But it’s just as important to admit that the real world is a messy place, and be ready to get your hands dirty and just implement some grotty code to clean things up.

When you get that balance right, it creates enormous differentiation for your solution. Folks can likely duplicate or improve upon your algorithms — but if they don’t have the right data in the first place, they’re still out of luck. Companies with useful, normalized, proprietary data sets are just always always always more valuable. So dig in and get ‘er done.

public RokuSearchInfo parse(String input, UserChannelSet channels) {

    // 1. PARSE
    String trimmed = input.trim();
    RokuSearchInfo info = null;

    try {
        info = tvdbParser.parse(input, channels);
        if (info == null) info = urlParser.parse(input, channels);
        if (info == null) info = syntaxParser.parse(input, channels);
    }
    catch (Exception eParse) {
        log.warning(Easy.exMsg(eParse, "parsers", true));
        info = null;
    }

    if (info == null) {
        info = new RokuSearchInfo();
        info.Search = trimmed;
        log.info("Default RokuSearchInfo: " + info.toString());
    }

    // 2. REFINE
    tryRefine(info, channels, rokuRefiner, "rokuRefiner");
    tryRefine(info, channels, wikiRefiner, "wikiRefiner");
    tryRefine(info, channels, fixupRefiner, "fixupRefiner");

    // 3. RETURN
    log.info(String.format("FINAL [%s] -> %s", trimmed, info));
    return info;
}

The Elon / Twitter Bummer

Let’s get this out of the way up front: if you’re here expecting more snarky piling on about how stupid Elon Musk is, you’re going to have to get your schadenfreude fix elsewhere. Frankly, I think he’s a genius. A singular individual of our time that most fairly should be compared with Thomas Edison. But the whole Twitter thing really bums me out, because it makes obvious just how easily an unchecked strength can become a stunning downfall. It’s worth a few words; hopefully ones that will add a little bit of thoughtfulness to a mostly empty public “conversation.” We will see.

First let’s review a couple of the things Elon Musk has contributed to the world.


I’ve long been a believer in space exploration, so it’ll be no surprise that I have followed SpaceX since its earliest days. Back in 2002, Musk made a trip to Russia to acquire rockets at a commercially-reasonable price. The Russians basically told him he was an idiot and that it couldn’t be done. ON THE PLANE HOME, he put together a spreadsheet that showed that he could. His own people thought he was nuts at first, but he was right. He surrounded himself with experts ranging from amateur to professional. He seeded money to folks and watched what happened. And most importantly he read, and read, and read. To call out just a few specifics he has cited:

… and then he made it happen. Bigtime. SpaceX is still today*** the only American company that can launch people into space. He puts satellites up for about $1,200 per pound (the Shuttle was $30,000). He has used the capability to launch Starlink, bringing the Internet to people and places previously left behind. The scope of what he has done here is stunning. No gimmicky “tourism.” He has never flown himself. He is simply knocking down real problems, one after another, while most others just second-guess from the sidelines.

Think space doesn’t matter? You’re dead wrong, but OK. How about climate change?


It seems I can’t go a day without hearing Fleetwood Mac shill Chevy “EVs for Everyone.” Just like every other car company, Chevy would love for you to believe that this was all their idea, but in reality they (together with all the usual suspects) have been slow-boating electrics since the 1980s. Not so Elon, who first invested in Tesla in 2004, and launched the Roadster in 2009 as CEO — more than a decade ago. We got our Model X in 2018 and it is straight up the best car I’ve ever owned.

What made the difference with Tesla was not new science, but a willingness to buck conventional wisdom as to what was “production ready.” Rather than whine about a lack of charging infrastructure, they designed the Supercharger and deployed enough of them that I’ve comfortably road-tripped the entire west coast of the USA multiple times. For daily use we haven’t even installed a dedicated charger of our own — we do just fine with a standard wall outlet. Software completes the package: we can safely leave our dog in a climate-controlled car; get automatically-recorded video of accidents or attempted theft; watch Netflix in the ferry line; verify that the doors are locked from our phones; ask it to extract itself from a tight parking space. I’m not allowed to take my hands off the wheel quite yet, but the Tesla drives itself way more than I do — stops at lights, changes lanes, you name it.

And sure they’ve been expensive so far, but at a base price of $47k the Model 3 is within striking distance of “normal” cars. It’s not a stretch to say that the EV industry is at a tipping point today directly because of what Elon has accomplished with Tesla over the past thirteen years.

But wait, there’s more. Tesla has used learning from the cars to become an energy company. One of my son’s best friends is kept busy way more than full-time installing Tesla Solar Roofs across the western half of the country. The Powerwall uses software to optimize power management — even automatically “topping up” the charge when severe weather is in the forecast.

I could keep going like this for a long time. And it’s easy for folks to talk about how nobody should be a billionaire or whatever, but he earned his money creating and selling things people want. His start in business was a $28k loan from his dad — a nice advantage to be sure, but turning $28k into $200B (legally) is a pretty good record and doesn’t happen by accident.

So then WTF happened?

In almost every case, Elon’s success has come down to “just doing” things that conventional wisdom said couldn’t be done. But it’s not a Zuckerberg “move fast and break things” vibe. He really listens to the arguments and the experts and the ideas — he is smart enough to understand what he learns — and only then he makes his call. Educated, but not encumbered, by those that have come before him. It’s just damn impressive.

The thing is, though, a key reason he can ignore the preconceptions of others is that he doesn’t have a ton of empathy for them (clinically it seems). Honestly he said it best himself on SNL:

“To anyone who I’ve offended [with my Twitter posts], I just want to say I reinvented electric cars, and I’m sending people to Mars in a rocket ship. Did you think I was also going to be a chill, normal dude?”

It’s funny because it’s true. The same quality that helps him ignore naysayers also keeps him from understanding the positions of folks that attack him (rightly or wrongly). Especially in the anonymous public sphere. I mean, it’s hard for anybody to turn the other cheek online — throw in some mental instability and it’s just not a shocker when he reacts without thinking.

With Twitter, this all just spun wildly out of control. He’s mad that people are mean to him on Twitter, and because he’s the richest guy in the freaking world his answer is to just buy it and kick off the people he doesn’t like. Obviously he regretted this just days after he set it in motion. But he’d gone too far and was legally required to finish what he started. And the real kicker this time is that — in stark contrast to his other ventures — with Twitter he didn’t do his homework. So in practice he’s just another clueless a$$hat money exec who shows up assuming he knows better. But he doesn’t. And that is exactly what we are seeing play out as he flails and reacts, flails and reacts, flails and reacts.

It just makes me really very sad.

I’m not asking you to feel sorry for the richest man in the world. And I’m not making excuses for this complete sh**show; it could have serious negative implications for all of us. It’s just a bummer to watch it unfold. I hope that history is able to remember both sides of the Elon story, because we’ll all still be benefiting from what he’s built long after people forget what a tweet was.

*** Between the time I wrote the first draft of this piece and published it, NASA finally launched the behemoth that is Artemis on its first trip around the moon. They haven’t put humans in the capsule yet, but it does appear we’re going to have a second option. Awesome, but that program feels a bit like a relic from the 80s. I hope it goes ok!

A Driftwood and Glowforge Chess Set

My nephew (a pretty cool guy BTW) has started playing a lot of chess. I thought it’d be fun to make him a custom board using driftwood from the beach — it’s been too long since he’s been able to visit in person, but at least I can send a bit of Whidbey Island his way. The Glowforge made it easy to finish the job with an etched playing surface and pieces; I love the way it turned out! (I masked out his full name because he’s wisely not hanging out on social media like I am.)

The Driftwood Base

The base is a solid piece cut from a nice beach log — still a big fan of my electric chainsaw for this work! Rather than hoof it all the way back on foot, I rowed my inflatable the quarter mile down and back to the house. You can’t tell from these pics, but it was actually a crazy foggy day with visibility no more than maybe thirty yards. I did not stray far from shore.

First step was to flatten the slab. The router sled I created last year was perfect for the first side — just like with code, it’s awesome when you use something like this a second time! I then ran it through the planer until it was parallel on both sides. Next I dried the slab in the oven for about four hours at two hundred degrees. This worked ok, but there is just so much moisture in the beach wood that quick-drying creates a ton of stress on the fibers. I was able to work around the cracks that opened up, and the warp planed out ok, but I think sometime this month I’m going to cut a few pieces, give the ends a nice coat of Anchorseal, and then just leave them in the garage to dry for a year. That’s a long time and I’m not very patient, but it’ll be worth it to (mostly) eliminate ongoing cracks and warps.

After cutting the base square on the table saw, it was time to move on to the center recess for storing pieces. Originally I planned to just hollow this out, but I’m just not very skilled with detail routing yet. Instead I cut the slab in a tic-tac-toe fashion, planed down the middle piece until it was a good height, and glued it all back together. It’s amazing to me just how strong well-made glue joints can be; the wood around them will often tear before they give way. A tip from this very amateur woodworker: buy a ton of good-quality clamps; it’s just impossible to build well without them.

Last steps on the base were to (a) shave off the uneven edge created by the kerfs during the tic-tac-toe cutting, (b) use a roundover bit to route a nice edge along the top; (c) sand it all to about 240 grit; (d) embed and glue in some magnets in the corners (more on this later); and (e) apply a few coats of Tried and True finish. I used this finish for the first time because it’s popular for wooden toys, and I’m kind of obsessed with it now — a combination of linseed oil and beeswax that goes on easily, buffs well and looks and feels great. Woot!

Glowforging the base

A piece of 1/8″ MDF with a white oak veneer serves as both the playing surface and a lid for the recess inside. Etching 32 squares takes a long time (SVG links are at the end of the article)! The board is secured with some small but relatively mighty disc magnets I got from Amazon. Honestly this could have turned out a bit better, but it worked OK. The magnets are 3/8″ diameter, but I couldn’t squish them into a 3/8″ drilled inset — so I went up to a 1/2″ bit and that was fine after sanding down the edges a little. I secured a pair of magnets in each corner using J-B Weld epoxy. As an aside, J-B Weld is the absolute best. Years ago at Adaptive my belt buckle broke in the middle of the day and one of the folks in the lab hooked me up. That metal-to-metal repaired joint is STILL HOLDING under stress (OK eventually I did get a new belt but that one is still my backup). Amazing.

After that was dry, I put a third magnet on each stack, dabbed some epoxy on the top, and carefully placed the board so the magnets were aligned. This was the right approach, but I kind of screwed it up. The board and base are square and non-directional, so ideally you could just drop the board down in any rotation. But because the base isn’t perfectly square (remember when I trimmed off the kerf edge? That means the final piece is about 1/8″ shorter than it is long), the magnets are actually in a rectangle, not a square. Barely a rectangle, but enough that if you rotate the board 90 degrees it doesn’t sit well.

Worse, I reversed the polarity of the magnets in one corner so if you rotated the board, two corners actually repelled each other. Ugh. The rectangle I could live with but not this, so I carefully pried those magnets out and replaced them the right way. End result — a solid “B” job. I ended up burning two little dots, one on the corner of the board and one in the matching corner of the base, to make it easy to align.

Then the pieces

Now, my nephew takes this stuff very seriously, and I think he may choose to play with his own more traditional pieces. But I wanted the set to be complete, and I just wasn’t up to trying to lathe out a full set in two different woods. After thinking about it quite a bit I ended up cutting out a set of discs based on some great creative commons art (hat tip CBurnett and note my derivative SVG files linked below are freely available for use and modification as well).

The black pieces are on MDF with a mahogany veneer; the white ones are on basswood — so there’s clear contrast between the sides. I made an extra set of each in case some get lost, and they can also be used on the reverse side for checkers (although he’s pretty much too cool for that). Unfortunately I neglected to get any pictures of these before sending the final piece off — oops!

And finally, the inset

The final touch was to line the bottom of the storage recess with an engraved cork sheet. The ones I use are 2mm thick and have adhesive on one side — really nice for these insets, bowl and vase bottoms, and so on. For this project the adhesive wasn’t quite enough, so I added some wood glue and used my daughter’s pie weights (blind bake to avoid a soggy bottom!) to hold it all in place until the glue dried.

That’s a wrap

And that’s it! Lara made even cooler stuff for our niece and it all went into a box for their birthdays. I think I like these hybrid projects the best, using the Glowforge to add details and components to a piece made with more natural materials and techniques. SVG links are below; I didn’t include the cork inlay because that was just a personal note … but good settings for the cork sheets are 1000/10% for engraving and 400/100% for an easy cut.

Thanks as always for reading; it’s almost as fun running back through the projects in my mind as it is making them in the first place. Except for all the mistakes… so many mistakes.

Health IT: More I, less T

“USCDI vs. USCDI+ vs. EHI vs. HL7 FHIR US Core vs. IPA. Definitions, similarities, and differences as you understand them. Go!” —Anonymous, Twitter

I spent about a decade working in “Health Information Technology” — an industry that builds solutions for managing the flow of healthcare information. It’s a big tent that boasts one of the largest trade shows in the world and dozens of specialized venture funds. And it’s quite diverse, including electronic health records, consumer products, billing and cost management, image management, AI and analytics of every flavor you can imagine, and more. The money is huge, and the energy is huger.

Real-world progress, on the other hand, is tough to come by. I’m not talking about health care generally. The tools of actual care keep rocketing forward; the rate limiter on tests and treatments seems to be only our ability to assess efficacy and safety fast enough. But in the HIT world, it’s mostly a lot of noise. The “best” exits are mostly acquisitions by huge insurance companies willing to try anything to squeak out a bit more margin.

That’s not to say there’s zero success. Pockets of awesome actually happen quite often, they just rarely make the jump from “promising pilot” to actual daily use at scale. There are many reasons for this, but primarily it comes down to workflow and economics. In our system today, nobody is incented to keep you well or to increase true efficiency. Providers get paid when they treat you, and insurance companies don’t know you long enough to really care about your long-term health. Crappy information management in healthcare simply isn’t a technology problem. But it’s an easy and fun hammer to keep pounding the table with. So we do.

But I’m certainly not the first genius to recognize this, and the world doesn’t need another cynical naysayer, so what am I doing here? After watching another stream of HIT technobabble clog up my Twitter feed this morning, I thought it might be helpful to call out four technologies that have actually made a real difference over the last few years. Perhaps we’ll see something in there that will help others find their way to a positive outcome. Or maybe not. Let’s give it a try.

A. Patient Portals

Everyone loves to hate on patient portals. I sure did during the time I spent trying to make HealthVault go. After all, most of us interact with at least a half dozen different providers and we’re supposed to just create accounts at all of them? And figure out which one to use when? And deal with their circa 1995 interfaces? Really?

Well, yeah. That’s pretty much how business works on the web. Businesses host websites where I can view my transaction history, pay bills, and contact customer support. A few folks might use aggregation services to create a single view of their finances or whatever, but most of us just muddle through, more-or-less happily, using a gaggle of different websites that don’t much talk to each other.

There were three big problems with patient portals a few years ago:

  1. They didn’t exist. Most providers had some third-party billing site where you could pay because, money. But that was it.
  2. When they did exist, they were hard to log into. You usually had to request an “activation code” at the front desk in person, and they rarely knew what you were talking about.
  3. When they did exist and you could log in, the staff didn’t use them. So secure messaging, for example, was pretty much a black hole.

Regulation fixed #1; time fixed #2; the pandemic fixed #3. And it turns out that patient portals today are pretty handy tools for interacting with your providers. Sure, they don’t provide a universal comprehensive view of our health. And sure, the interfaces seem to belong to a long ago era. But they’re there, they work, and they have made it demonstrably easier for us to manage care.

Takeaway: Sometimes, healthcare is more like other businesses than we care to admit.

B. Epic Community Connect & Care Everywhere

Epic is a boogeyman in the industry — an EHR juggernaut. Despite a multitude of options, fully a third of hospitals use Epic, and that percentage is much larger if you look at the biggest health systems in the country. It’s kind of insane.

It can easily cost hundreds of millions of dollars to install Epic. Institutions often have Epic consultants on site full time. And nobody loves the interface. So what is going on here? Well, mostly Epic is just really good at making life bearable for CIOs and their IT departments. They take care of everything, as long as you just keep sending them checks. They are extremely paternalistic about how their software can be used, and as upside-down as that seems, healthcare loves it. Great for Epic. Less so for providers and patients, except for two things:

“Community Connect” is an Epic program that allows customers to “sublet” seats in their Epic installation to smaller providers. Since docs are basically required to have an EHR now (thanks, regulation), this ends up being a no-brainer value proposition for folks that don’t have the IT savvy (or interest) to buy and deploy something themselves. Plus it helps the original customer offset their own cost a bit.

Because providers are using the same system here, data sharing becomes the default versus the exception. It’s harder not to share! And even non-affiliated Epic users can connect by enabling “Care Everywhere,” a global network run by Epic just for Epic customers. Thanks to these two things, if you’re served by the 33%+ of the industry that uses Epic, sharing images and labs and history is just happening like magic. Today.

Takeaway: Data sharing works great in a monopoly.

C. Open Notes

OpenNotes is one of those things that gives you a bit of optimism at a time when optimism can be tough to come by. Way back in 2010, three institutions (Beth Israel in MA, Geisinger in PA, and Harborview in WA) started a long-running experiment that gave patients completely unfettered access to their medical records. All the doctor’s notes, verbatim, with virtually no exception. This was considered incredibly radical at the time: patients wouldn’t understand the notes; they’d get scared and create more work for the providers; providers fearing lawsuits would self-censor important information; you name it.

But at the end of the study, none of that bad stuff happened. Instead, patients felt more informed and greatly preferred the primary data over generic “patient education” and dumbed-down summaries. Providers reported no extra work or legal challenges. It took a long time, but this wisdom finally made it into federal regulation last year. Patients now must be granted full access to their providers’ notes in electronic form at no charge.

In the last twelve months my wife had a significant knee surgery and my mom had a major operation on her lungs. In both cases, the provider’s notes were extraordinarily useful as we worked through recovery and assessed future risk. We are so much better educated than we would otherwise have been. An order of magnitude better than ever before.

Takeaway: Information already being generated by providers can power better care.

D. Telemedicine

It’s hard to admit anything good could have come out of a global pandemic, but I’m going to try. The adoption of telemedicine as part of standard care has been simply transformational. Urgent care options like Teladoc and Doctor on Demand (I’ve used both) make simple care for infections and viruses easy and non-disruptive. For years insurance providers refused “equal pay” for this type of encounter; it seems that they’ve finally decided that it can help their own bottom line.

Just as impactful, most “regular” docs and specialists have continued to provide virtual visits as an option alongside traditional face-to-face sessions. Consistent communication between patients and providers can make all the difference, especially in chronic care management. I’ve had more and better access to my GI specialists in the last few years than ever before.

It’s only quite recently that audio and video quality have gotten good enough to make telemedicine feel like “real” medicine. Thanks for making us push the envelope, COVID.

Takeaway: Better care and efficiency don’t have to be mutually exclusive.

So there we go. There are ways to make things better with technology, but you have to work within the context of reality, and they ain’t always that sexy. We don’t need more JSON or more standards or more jargon; we need more information and thoughtful integration. Just keep swimming!

Form and Function

I love reality TV about making stuff and solving problems. My family would say “to a fault.” Just a partial list of my favs:

I could easily spin a tangent about experiential archeology and the absolutely amazing Ruth Goodman, but I’ll be restrained about that (nope): Secrets of the Castle, Tudor Monastery Farm, Tales from the Green Valley, Edwardian Farm, Victorian Farm, Wartime Farm.


Recently I discovered that old Project Runway seasons are available on the Roku Channel, so I’ve been binging through them; just finished season fourteen (Ashley and Kelly FTW). At least once per year, the designers are asked to create a look for a large ready-to-wear retailer like JCPenney or JustFab or whatever. These are my favorites because it adds a super-interesting set of constraints to the challenge — is it unique while retaining mass appeal, can it be reproduced economically, will it read well in an online catalog, etc. etc.. This ends up being new for most of the participants, who think of themselves (legitimately) as “artists” and prefer to create fashion for fashion’s sake. Many of them have never created anything other than bespoke pieces and things often go hilariously off the rails as their work is judged against real world, economic criteria in addition to innovation and aesthetics. Especially because the judges themselves often aren’t able to express their own expectations clearly up front.

This vibe brings me back to software development in an enterprise setting (totally normal, right?). So many developers struggle to understand the context in which their work is judged. After all, we learned computer science from teachers for whom computer science itself is the end goal. We read about the cool new technologies being developed by tech giants like Facebook and Google and Amazon. All of our friends seem to be building microservices in the cloud using serverless backends and nosql map/reduce data stores leveraging deep learning and … whatever. So what does it mean to build yet another integration between System A and System B? What, in the end, is the point?

It turns out to be pretty simple:

  1. Does your software enable and accelerate business goals right now, and
  2. Does it require minimal investment to do the same in the future?

Amusingly, positive answers to both of these turn out to be pretty much 100% correlated not with “the shiniest new language” or “what Facebook is doing” but instead with beautiful and elegant code. So that’s cool; just like a great dress created to sell online, successful enterprise code is exceptional in both form and function. Nice!

But as easy as these tests seem, they can be difficult to measure well. Enterprises are always awash in poorly-articulated requirements that all “need” to be ready yesterday. Becoming a slave to #1 can seem like the right thing — “we exist to serve the business” after all — but down that road lies darkness. You’ll write crappy code that doesn’t actually do what your users need anyways, breaks all the time and ultimately costs a ton in refactoring and lost credibility.

Alas, #2 alone doesn’t work either. You really have no idea what the future is going to look like, so you end up over-engineering into some super-generalized false utopian abstraction that surely costs more than it should to run and doesn’t do any one thing well. And it is true that if your business isn’t successful today it won’t exist tomorrow anyways.

It’s the combination that makes the magic. That push and pull of building something in the real world now, that can naturally evolve over time. That’s what great engineering is all about. And it primarily comes down to modularity. If you understand and independently execute the modules that make up your business, you can easily swap them in and out as needs change.

In fact, that’s why “microservices” get such play in the conversation these days — they are one way to help enforce separation of duties. But they’re just a tool, and you can create garbage in a microservice just as easily as you can in a monolith. And holy crap does that happen a lot. Technology is not the solution here … modular design can be implemented in any environment and any language.

  • Draw your business with boxes and arrows on one piece of paper.
  • Break down processes into independent components.
  • Identify the core data elements, where they are created and which processes need to know about them.
  • Describe the conversations between components.
  • Implement (build or buy) each “box” independently. In any language you want. In any environment that works for you.

Respect both form and function, knitting together a whole from independent pieces, and you are very likely to succeed. Just like the best designers on Project Runway. And the pottery show. And the baking one. And the knife-making one. And the …


OK, let’s see if I can actually get this thing written.

It’s a little hard to focus right now. We’re almost two weeks into life with Copper the shockingly cute cavapoo puppy. He’s a great little dude, and life is already better with him around. But holy crap, it’s like having a human baby again — except that I’m almost thirty years older, and Furry-Mc-Fur-Face doesn’t use diapers, so it seems like every ten minutes we’re headed outside to do his business. Apparently it’s super-important to provide positive reinforcement for this, so if you happen to see me in the back yard at 3am waxing poetic about pee and/or poop, well, yeah.

What’s interesting about my inability to focus (better explanation of this in a minute) is that it’s not like I don’t have blocks of open time in which I could get stuff done. Copper’s day is an endless cycle of (a) run around like crazy having fun; (b) fall down dead asleep; (c) go to the bathroom; (d) repeat — with a few meals jammed in between rounds. Those periods when he sleeps can be an hour or more long, so there’s certainly time in the day to be productive.

And yet, in at least one very specific way, I’m not. “Tasks” get done just fine. The dishes and clothes are clean. I take showers. I even mowed the lawn the other day. I’m caught up on most of my TV shows (BTW Gold Rush is back!). But when I sit down to do something that requires focus, it’s a lost cause. Why is that?

Things that require focus require me to hold a virtual model of the work in my head. For most of my life the primary example of this has been writing code. But it applies to anything that requires creation and creativity — woodworking, writing, art, all of that stuff. These models can be pretty complicated, with a bunch of interdependent and interrelated parts. An error in one bit of code can have non-obvious downstream effects in another; parallel operations can bump into each other and corrupt data; tiny changes in performance can easily compound into real problems.

IMNSHO, the ability to create, hold and manipulate these mental models is a non-negotiable requirement to be great at writing and debugging code. All of the noise around TDD, automated testing, scrums and pair programming, blah blah blah — that stuff might make an average coder more effective I suppose, but it can’t make them great. If you “walk” through your model step by step, playing out the results of the things that can go wrong, you just don’t write bugs. OK, that’s bullsh*t — of course you write bugs. But you write very few. Folks always give me crap for the lack of automated tests around my code, but I will go toe-to-toe with any of them on code quality — thirty years of bug databases say I’ll usually win.

The problem, of course, is that keeping a complex model alive requires an insane amount of focus. And focus (the cool kids and my friend Umesh call it flow state) requires a ton of energy. When I was just out of school I could stay in the zone for hours. No matter what was going on in the other parts of my life, I could drop into code and just write (best mental health therapy ever). But as the world kept coming at me, distractions made it increasingly difficult to get there. Kids and family of course, but work itself was much more problematic. As I advanced in my career, my day was punctuated with meetings and budgets and managing and investors and all kinds of stuff.

I loved all of that too (ok maybe not the meetings or budgets), but it meant I had smaller and smaller time windows in which to code. There is nothing more antithetical to focus than living an interrupt-driven existence. But I wasn’t going to quit coding, so it forced me to develop two behaviors — neither rocket science — that kept me writing code I was proud of:

1. Code in dedicated blocks of time.

Don’t multitask, and don’t try to squeeze in a few minutes between meetings. It takes time to establish a model, and it’s impossible to keep one alive while you’re responding to email or drive-by questions. Establish socially-acceptable cues to let people know when you need to be left alone — one thing I wish I’d done more for my teams is to set this up as an official practice. As an aside, this is why open offices are such horsesh*t for creative work — sometimes you just need a door.

2. Always finish finished.

This is actually the one bit of “agile” methodology that I don’t despise. Break up code into pieces that you can complete in one session. And I mean complete — all the error cases, all the edges, all of it. Whatever interface the code presents — an API, an object interface, a UX, whatever — should be complete when you let the model go and move on to something else. If you leave it halfway finished, the mental model you construct “next time” will be just ever so slightly different. And that’s where edges and errors get missed.

Finishing finished improves system architecture too — because it forces tight, compact modularity. For example, you can usually write a data access layer in one session, but might need to leave the cache for later. Keeping them separate means that you’ll test each more robustly and you’ll be in a position to replace one or the other as requirements morph over time. As an extra bonus, you get a bunch of extra endorphin hits, because really finishing a task just feels awesome.
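To make that data-access-versus-cache example concrete, here’s a hedged sketch in plain Java (a hypothetical Store interface, nothing from any real app): the data layer is small enough to finish in one session, and the cache arrives later as its own finished module.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical illustration of "finish finished" modularity.
// The data access layer: small enough to complete in one session.
interface Store {
    Optional<String> fetch(String key);
}

// A trivial backing store (stand-in for a database or web service).
class MapStore implements Store {
    private final Map<String, String> data = new HashMap<>();
    void put(String key, String value) { data.put(key, value); }
    @Override public Optional<String> fetch(String key) {
        return Optional.ofNullable(data.get(key));
    }
}

// The cache, written later as its own finished module. It only knows
// about the Store interface, so either side can be replaced independently.
class CachingStore implements Store {
    private final Store inner;
    private final Map<String, String> cache = new HashMap<>();
    CachingStore(Store inner) { this.inner = inner; }
    @Override public Optional<String> fetch(String key) {
        if (cache.containsKey(key)) return Optional.of(cache.get(key));
        Optional<String> value = inner.fetch(key);
        value.ifPresent(v -> cache.put(key, v));
        return value;
    }
}
```

Because CachingStore depends only on the interface, you can test each module on its own and swap the cache out entirely as requirements morph.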

OK sure, sounds great. But remember my friend Copper? In just the few sessions I’ve taken to write this little post, he’s come by dozens of times to play or make a quick trip outside. And even when I’m not paying him active attention, part of my brain always has to be on alert. Sometimes, distractions become so intense that the reality is you just won’t be successful at creative work, because your brain simply won’t focus. It hurts to say that, but better to acknowledge it than to do a crap job. The good news is that these times are usually transient — Copper will only be a brand new puppy for a few weeks, and during that time I just have to live with lower productivity. It’s not the end of the world, so long as I understand what’s going on and plan for it.

If you’re a manager, you really need to be on the lookout for folks suffering from focus-killing situations. New babies, new houses or apartments, health problems, parent or relationship challenges, even socio-political issues can catch up with people before they themselves understand what’s going on. Watch for sudden changes in performance. Ask questions. Maybe they just need some help learning to compartmentalize and optimize their time. Or maybe they need you to lighten the load for a few weeks.

Don’t begrudge them that help! Supporting each other through challenging times forges bonds that pay back many times over. And besides, you’ll almost certainly need somebody to do the same for you someday. Pay it forward.

Mastering the Art of Soviet Cooking

I’ve just started keeping a list of the (good) books I’m reading. Check out the full list!

I grew up in Lexington, Massachusetts in the late 70s and 80s. Our schools did the whole shelter under your desk thing, but consensus amongst my friends was that we were close enough to a bunch of ICBM development facilities that we’d go quickly in the first strike anyways. Kind of creepily fatalistic for a bunch of ten year olds — but honestly probably easier to deal with than the insane active shooter drills kids have to live with today. At least we knew that the Bad Guys were far away and that our leaders were actually working on the problem. OK, maybe their real motivations weren’t universally pure but, once again, I was a kid.

Anyhoo, my point is that the actual Soviet Union was way more of a concept than a reality — if you asked me what daily life was like for folks in Russia I’d probably have guessed some weird mashup of Ivan Denisovich and Rosa Klebb. Which is why I was super-psyched when I stumbled upon Mastering the Art of Soviet Cooking a few weeks ago. The author Anya Von Bremzen is just six years older than me, so we were pretty much twinsies separated by the iron curtain. Talk about a great read.

The book is loosely organized around a collection of dinners executed by Anya and her mom: one for each decade, starting with the twilight of Nicholas II and ending with the petro-oligarchs of the current millennium. But it’s really a family memoir and history of the USSR told through the lens of Anya, her mom, and her grandmother. Food plays an outsized role in the telling because, well, there never seems to have been much of it — but it’s not a sob story by any means. A few particularly interesting observations that stuck with me — hopefully they’ll draw you in enough to give it a read!

  •  Topical given the recent passing of Mikhail Gorbachev — it’s not just Putin who didn’t like the guy inside the USSR. Evaporation of the country aside, he seems to have been pretty universally viewed as an inept bungler while actually in power. He disrupted already shaky food distribution networks, his attempts to curb drinking just created another line for folks to stand in, and basically he just seems to have wandered around aimlessly watching stuff happen around him. NOT the perception we had from our vantage point in the West!
  • I 100% have to try and cannot stop thinking about Salat Olivier, which is basically a potato-vegetable-pickle salad with bologna (or chicken I guess, but I’m not turning down a salad with bologna).
  • The Soviet staple breaded meat patty kotleta? Actually an ersatz hamburger, borrowed from America after a trip in the 1930s by Anastas Mikoyan, People’s Commissar of the Soviet Food Industry. Apparently they liked Eskimo Pies too.

Cover to cover interesting, perspective-broadening, yummy stuff.

Share to Roku!

TLDR: if you watch TV on a Roku and have an Android phone, please give my new Share To Roku app a try! It’s currently in open testing; install it with this link on the web or this link on your Android device. The app is free, has no ads, saves no data and only makes network calls to Rokus on your local network. It acts as a simple remote, but much more usefully lets you “Share” show names from the web or other apps directly to the Roku search interface. I use it with TV Time and it has been working quite well so far — but I need broader real-world testing and would really appreciate your feedback.

Oh user interface development, how I hate you so. But my lack of experience with true mobile development has become increasingly annoying, and I really wanted an app to drive my Roku. So let’s jump back into the world of input events and user interface layouts and see if we can get comfy. Yeesh.

Share To Roku in a nutshell

I’ve talked about this before (here and here). My goal is to transition as smoothly as possible from finding a show on my phone (an Android, currently the Samsung Galaxy S21) to watching it on my TV. I keep my personal watchlist on an app called TV Time and that’s key, but I also want to be able to jump from a recommendation in email or a review on the web. So feature #1 is to create a “share target” that can accept messages from any app.

Armed with this inbound search text, the app will help the user select their Roku, ensure the TV power is on (if that particular Roku supports it), and forward the search. The app then will land on a page hosting controls to help navigate the last mile to the show (including a nice swipe-enabled directional pad that IMNSHO is way better than the official Roku app). This remote control functionality will also be accessible simply by running the app on its own. And that’s about it. Easy peasy!
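For the “forward the search” step, Roku exposes a simple HTTP interface called the External Control Protocol (ECP) on port 8060; commands are bare POSTs like /keypress/Home or /search/browse?keyword=…. A minimal sketch of the idea (not the app’s actual code; see Roku’s ECP docs for the full command set):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch of driving a Roku via ECP. /keypress/{key} and /search/browse
// are documented ECP endpoints on port 8060; everything else here is
// illustrative.
class RokuEcp {
    static String buildSearchUrl(String rokuIp, String query) {
        String encoded = URLEncoder.encode(query, StandardCharsets.UTF_8);
        return "http://" + rokuIp + ":8060/search/browse?keyword=" + encoded;
    }

    // ECP commands are bare POSTs with no body; the response code is all we get.
    static int post(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");
        conn.setFixedLengthStreamingMode(0);
        try {
            return conn.getResponseCode();
        } finally {
            conn.disconnect();
        }
    }
}
```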

All of the code for Share To Roku is up on github. I’ll do a final clean-up on the code once the testing period is over, but everything I’ve written here is free to use and adopt under an MIT license, so please take anything you find useful.

Android Studio and “Kotlin”

If you’ve worked with me before, you know that my favorite “IDE” is Emacs; I build stuff from the command line; and I debug using logs and jdb. But for better or worse, the only realistic way to build for Android is to at least mostly use Android Studio, a customized version of IntelliJ IDEA (you can just use IntelliJ too but that’s basically the same thing). AStudio generates a bunch of boilerplate code for even the simplest of apps, and encourages you to edit it all in this weird overlapping sometimes-textual-sometimes-graphical mode that at least in my experience generally ensures a messy final product. I’m not going to spend this whole article complaining about it, but it is pretty stifling.

Love me a million docked windows, three-deep toolbars and controls on every edge of the screen!

Google would also really prefer that you drop Java and instead use their trendy sort-of-language “Kotlin” to build Android apps. I’ve played this Java pre-compiler game before with Scala and Groovy, and all I can say is no thank you. I will never understand why people are so obsessed with turning code into a nest of side effects, just to avoid a few semicolons and brackets. At least for now they are grudgingly continuing to support Java development — so that’s where you’ll find me. On MY lawn, where I like it.

Android application basics


The most important thing to understand about Android development is that you are not in charge of your process. There is no “main” and, while you get your own JVM in which to live, that process can come and go at pretty much any time. This makes sense — at least historically mobile devices have had to manage pretty limited memory and processing power, so the OS exerts a ton of control over the use of those resources. But it can be tricky when it comes to understanding state and threading in an app, and clearly a lot of bugs in the wild boil down to a lack of awareness here.

Instead of main, an Android app is effectively a big JAR that uses a manifest file to expose Component classes. The most common of these is an Activity, which effectively represents one user interface screen within the app. Other components include various types of background process; I’m going to ignore them here. Share to Roku exposes two Activities, one for choosing a Roku and one for the search and remote interface. Each activity derives from an Android base class that defines a set of well-known entrypoints, each of which is called at different points in the process lifecycle.

Tasks and the Back Stack

But before we dig into those, two other important concepts: tasks and the back stack. This can get wildly complicated, but the super-basics are this:

  • A “task” is a thing you’re doing on the device. Most commonly tasks are born by opening an app from the home screen.
  • Each task maintains a “stack” of activities (screens). When you navigate to a new screen (e.g., open an email from a list of emails) a new activity is added to the top of the stack. When you hit the back button, the current (top) activity is closed and you return to the previous one.
  • Mostly each task corresponds to an app — but not always. For example, when you are in Chrome and you “share” a show to my app, a new Share To Roku activity is added to the Chrome task. Tasks are not the same as JVM processes!

Taken together, the general task/activity lifecycle starts to make sense:

  1. The user starts a new task by starting an app from the home screen.
  2. Android starts a JVM for that app and loads an instance of the class for the activity marked as MAIN/LAUNCHER in the manifest.
  3. The onCreate method of the activity is called.
  4. The user interacts with the ux. Maybe at some point they dip into another activity, in which case onPause/onResume and onStop/onStart are called as the new activity starts and finishes.
  5. When the activity is finished (the user hits the back button or closes the screen in some other way) the onDestroy method is called.
  6. When the system decides it’s a good time (e.g., to reduce memory usage), the JVM is shut down.
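As a toy illustration (plain Java, emphatically not the Android SDK), steps 3 through 5 boil down to a class with well-known entrypoints that the system calls in a fixed order:

```java
import java.util.List;

// Toy model of the lifecycle entrypoints, just to make the calling
// order in steps 3-5 concrete. Real activities derive from Android's
// Activity base class; this is a plain-Java stand-in.
class ToyActivity {
    private final String name;
    private final List<String> log;
    ToyActivity(String name, List<String> log) { this.name = name; this.log = log; }

    void onCreate()  { log.add(name + ".onCreate"); }   // step 3: screen created
    void onStart()   { log.add(name + ".onStart"); }
    void onResume()  { log.add(name + ".onResume"); }   // interactive (step 4)
    void onPause()   { log.add(name + ".onPause"); }    // another activity takes over
    void onStop()    { log.add(name + ".onStop"); }
    void onDestroy() { log.add(name + ".onDestroy"); }  // step 5: user backed out
}
```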

Of course, it’s not really that simple. For example, Android may just nuke your process at any time, without ever calling onDestroy — so you’ll need to put some thought into how and when to save persistent data. And depending on your configuration, existing activity instances may be “reused” (with a call to onNewIntent). But it’s a pretty good starting place.


Intents

Intents are the means by which users navigate between activities on an Android device. We’ve actually already seen an intent in action, in step #2 above — MAIN/LAUNCHER is a special intent that means “start this app from the beginning.” Intents are used for every activity-to-activity transition, whether that’s explicit (e.g., when an email app opens up a message details activity in response to a click in a message list) or implicit (e.g., when an app opens up a new, pre-populated text message without knowing which app the user has configured for SMS).

Share to Roku uses intents in both ways. Internally, after picking a Roku, ChooseRokuActivity.shareToRoku instantiates an intent to start the ShareToRokuActivity. Because that internal navigation is the only way to land on ShareToRokuActivity, its definition in the manifest sets the “exported” flag to false and doesn’t include any intent-filter elements.

Conversely, the entry for ChooseRokuActivity in the manifest sets “exported” to true and includes no less than three intent-filter elements. The first is our old friend MAIN/LAUNCHER, but the next two are more interesting. Both identify themselves as SEND/DEFAULT filters, which mark the activity as a target for the Android Sharesheet (which we all just know as “sharing” from one app to another). There are two of them because we are registering to handle both text and image content.
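For the curious, those three filters in the manifest might look roughly like this (a sketch only; the activity name comes from the app as described above, but treat the exact attribute values as illustrative):

```xml
<!-- Sketch; package-relative class name and mime types are illustrative. -->
<activity android:name=".ChooseRokuActivity" android:exported="true">
  <intent-filter>
    <action android:name="android.intent.action.MAIN" />
    <category android:name="android.intent.category.LAUNCHER" />
  </intent-filter>
  <intent-filter>
    <action android:name="android.intent.action.SEND" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:mimeType="text/*" />
  </intent-filter>
  <intent-filter>
    <action android:name="android.intent.action.SEND" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:mimeType="image/*" />
  </intent-filter>
</activity>
```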

Wait, image content? This seems a little weird; surely we can’t send an image file to the Roku search API. That’s correct, but it turns out that when the TV Time app launches a SEND/DEFAULT intent, it registers the content type as an image. There is an image, a little thumbnail of the show, but there is also text included, which we use for the search. There isn’t a lot of consistency in the way applications prepare their content for sharing; I foresee a lot of app-specific parsing in my future if Share To Roku gets any real traction with users.


ChooseRokuActivity

OK, let’s look a little more closely at the activities that make up the app. ChooseRokuActivity (code / layout) is the first screen a user sees; a simple list of Rokus found on the local network. Once the user makes a selection here, control is passed to ShareToRokuActivity which we’ll cover next.

The list is a ListView, which gives me another opportunity to complain about modern development. Literally every UX system in the world has a control for simple displays of text-based lists. Android’s ListView is just this — a nice, simple control to which you attach an Adapter that holds the data. But the Android Gods really would rather you don’t use it. Instead, you’re supposed to use RecyclerView, a fine but much more complicated view. It’s great for large, dynamic lists, but way too much for most simple text-based UX lists. This kind of judgy noise just bugs me — an SDK should make common things as easy as possible. Sorry not sorry, I’m using the ListView. Anyways, the list is wrapped in a SwipeRefreshLayout which provides the gesture and feedback to refresh the list by pulling down.

The activity populates the list of Rokus using static methods in the Roku class, a stripped down version of the discovery classes I wrote about extensively in Anyone out there? Service discovery with SSDP, WSD, other acronyms; discovery is performed by UDP broadcast. The Roku class maintains a static (threadsafe) list of the Rokus it finds, and only searches again when asked to manually refresh. This is one of those places where it’s important to be aware of the process lifecycle; the list is cached as long as our JVM remains alive and will be used in any task we end up in.

Take a look at the code in initializeRokus and findRokus (called from onCreate). If we have a cache of Rokus, we populate it directly into the list for maximum responsiveness. If we don’t, we create an ActivityWorker instance that searches using a background thread. The trick here is that each JVM process has exactly one thread dedicated to managing all user interface interactions — only code running on that thread can touch the UX. So if another thread (e.g., our Roku search worker) needs to update user interface components (i.e., update the ListView), it needs help.

There are a TON of ways that people manage this; ActivityWorker is my solution. A caller implements an interface with two methods: doBackground is run on a background thread, and when that method completes, the code in doUx runs on the UI thread (thanks to Activity.runOnUiThread).  These two methods can share member variables (e.g., the “rokus” set) without worrying about concurrency issues — a nice clean wrapper for a common-but-typically-messy situation.
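The pattern is easy to sketch in plain Java. Here a single-thread executor stands in for the real UI thread (Activity.runOnUiThread in the actual app); this is an illustration of the idea, not the app’s ActivityWorker class itself:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Plain-Java sketch of the ActivityWorker idea. doBackground runs on a
// worker thread; when it completes, doUx is handed to the "UI" executor.
// Because doUx is only submitted after doBackground returns, fields
// shared between the two methods are safe without further locking.
interface Worker {
    void doBackground();
    void doUx();
}

class WorkerRunner {
    private final ExecutorService background = Executors.newSingleThreadExecutor();
    private final ExecutorService ui; // stand-in for the real UI thread

    WorkerRunner(ExecutorService ui) { this.ui = ui; }

    void run(Worker worker) {
        background.submit(() -> {
            worker.doBackground();     // off the UI thread
            ui.submit(worker::doUx);   // back "on" the UI thread afterwards
        });
    }

    void shutdown() { background.shutdown(); }
}
```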


ShareToRokuActivity

The second activity (code / layout) has more UX going on, and I’ll admit that I appreciated the graphical layout tools in AStudio. Designing even a simple interface that squishes and stretches reasonably to fit on so many different device sizes can be a challenge. Hopefully I did an OK job, but testing with emulators only goes so far — we’ll see as I get a few more testers.

If the activity was started from a sharing operation, we pick up that inbound text as “extra” data that comes along with the Intent object (the data actually comes to us indirectly via ChooseRokuActivity, since that was the entry point). Dealing with this search text is definitely the most unpleasant part of the app, because it comes in totally random and often unhelpful forms. If Share To Roku is going to become a meaningfully useful tool I’m going to have to do some more investment here.
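As an example of the kind of cleanup involved, a hypothetical first pass (not the app’s real logic) might just strip URLs and social-media noise out of the shared text before forwarding it:

```java
import java.util.regex.Pattern;

// Hypothetical cleanup of inbound share text: drop URLs and
// hashtag/mention noise, then collapse whitespace. Real share payloads
// vary wildly by app, so this is a sketch of the approach only.
class SearchText {
    private static final Pattern URLS = Pattern.compile("https?://\\S+");
    private static final Pattern TAGS = Pattern.compile("[#@]\\S+");

    static String clean(String shared) {
        String s = URLS.matcher(shared).replaceAll(" ");
        s = TAGS.matcher(s).replaceAll(" ");
        return s.replaceAll("\\s+", " ").trim();
    }
}
```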

A rare rave from me — the Android Volley HTTP library is just fantastic. It works asynchronously, but always makes its callback on the UX thread. That is, it does automatically what I had to do manually with ActivityWorker. Since most mobile apps are really just UX sitting atop some sort of HTTP API, this makes life really really easy. Love it!

The bulk of this activity is just buttons and lists that cause fire-and-forget calls to the Roku, except for the directional pad that takes up the center of the screen. The pad is a custom control (sorry, custom “View”) that lets the user click a center button and indicate direction with either clicks in the N-S-E-W regions or (way cooler) directional swipes. A long press on the control sends a backspace, which makes entering text on the TV a bit more pleasant. Building this control felt like a throwback to Windows 3.0 development. Set a clip region, draw some lines and circles and icons. The gesture recognition is simultaneously amazingly easy (love the “fling” handler) and oddly prehistoric (check out my manual identification of a “long” press).
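That manual long-press identification amounts to little more than a timestamp comparison. A sketch, with a made-up threshold rather than Android’s official one:

```java
// Sketch of manual long-press detection: compare the time between the
// "down" and "up" touch events against a threshold. 500ms is an
// illustrative value, not Android's official long-press timeout.
class PressClassifier {
    static final long LONG_PRESS_MS = 500;

    static boolean isLongPress(long downMillis, long upMillis) {
        return (upMillis - downMillis) >= LONG_PRESS_MS;
    }
}
```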

Publishing to the Play store

Back in the mid 00’s I spent some time consulting for Microsoft on a project called Windows Marketplace (wow there is a Wikipedia article for everything). Marketplace was sponsored by the Windows marketing team as an attempt to highlight the (yes) shareware market, which had been basically decimated by cross-platform browser-based apps. It worked a lot like any other app store, with some nice features like secure backup of purchased license keys (still a thing with some software!!!). It served a useful role for a number of years — fun times with a neat set of people (looking at you Raj, Vikram, DeeDee, Paul, Matt and Susan) and excellent chaat in Emeryville.

Anyways, that experience gave me some insight into the challenges of running and monetizing a directory of apps developed by everyone from big companies to (hello) random individuals. Making sure the apps at least work some of the time and don’t contain viruses or some weirdo porn or whatever. It’s not easy — but Google and Apple have really done a shockingly great job. Setting up an account on the Play Console is pretty simple — I did have to upload an image of my official ID and pay a one-time $25 fee but that’s about it. The review process is painful because each cycle takes about three or four days and they often come back with pretty vague rejections. For example, “you have used a word you may not have the rights to use” … which word is, apparently, a secret? But I get it.

So anyways — my lovely little app is now available for testing. If you’ve got an Android device, please use the links below to give it a try. If you have an Apple device, I’m sorry for many reasons. I will definitely be doing some work to better manipulate inbound search strings to provide a better search result on the Roku. I’m a little torn as to whether I could just do that all in-app, or if I should publish an API that I can update more easily. Probably the latter, although that does create a dependency that I’m not super-crazy about. We’ll see.

Install the beta version of Share To Roku with this link on the web or this link on your Android device.

Anyone out there? Service discovery with SSDP, WSD, other acronyms.

Those few regular readers of this stuff may remember What should we watch tonight, in which I used the Roku API to build a little web app to manage my TV watchlist. Since then I’ve found TV Time, which is waaay better and even tells me how many days until the next season of Cobra Kai gets here (51 as of this writing). But what it doesn’t do is launch shows automatically on my TV, and yes I’m lazy enough to be annoyed by that. So I’ve been planning a companion app that will let me “share” shows directly to my Roku using the same API I used a few months ago.

This time, I’d like the app to auto-discover the TV, rather than asking the user to configure its IP address manually. Seems pretty basic — the Roku ECP API describes how it uses “Simple Service Discovery Protocol” to enable just that. But man, putting together a reliable implementation turned out to be a bear, and sent me tumbling down a rabbit hole of “service discovery” that was both fascinating and frankly a bit appalling.

Come with me down that rabbit hole, and let’s learn how those fancy home devices actually try to find each other. It’s nerd-tastic!

Can I get your number?

99% of what happens on networks is conversations between two devices that already know each other, either directly by address (like a phone number), or by a name that they use to look up an address (like using a phone book). And 99% of the time this works great — between search engines, QR codes and the bookmark lists we’ve all built up, there’s rarely any need to even think about addresses. But once in a while — usually when you’re trying to set up a printer or some other smarty-pants device on your home network — things get a bit more complicated.

Devices on your network are assigned a (basically) arbitrary address by your wifi router, and they don’t have a name registered anywhere, so how do other devices find them to start a conversation? It turns out that there are a pile of different ways — most of which involve either multicast or broadcast UDP messaging, two similar techniques that enable a device to initiate a conversation without knowing exactly who it’s talking to. Vox Clamantis in Deserto as it were.

Side note: for this post I’m going to limit examples to IPv4 addressing, because it makes my job a little easier. The same concepts generally apply to IPv6, except that there is no true “broadcast” with v6 because they figured out that multicast could do all the same things more efficiently, but close enough.

An IP broadcast message is received by all devices on the local network that are listening on a given port. Typically these use the special “limited broadcast” address 255.255.255.255 (there’s also a “directed” broadcast address for each subnet which could theoretically be routed to other networks, but that’s more detail than matters for us). An IP multicast message is similar, but is received only by devices that have subscribed to (or joined) the multicast’s special “group” address. Multicast addresses all have 1110 as their most significant bits, which translates to addresses from 224.0.0.0 to 239.255.255.255.
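To make that bit math concrete, here's a tiny check of the “1110 high bits” rule — the helper method is my own, but Java's InetAddress knows the same rule natively:

```java
import java.net.InetAddress;

// Multicast addresses are exactly those whose first octet starts
// with the bits 1110 — i.e., first octet in 224 (0xE0) .. 239 (0xEF).
public class MulticastCheck {

    public static boolean isMulticast(byte[] ipv4) {
        // mask off the top four bits and compare against 1110 0000
        return (ipv4[0] & 0xF0) == 0xE0;
    }

    public static void main(String[] args) throws Exception {
        // the standard library agrees with the manual bit test
        System.out.println(
            InetAddress.getByName("239.255.255.250").isMulticastAddress());
        System.out.println(
            isMulticast(new byte[] { (byte) 192, (byte) 168, 1, 1 }));
    }
}
```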

Generally, these messages are restricted to your local network — that is, routers don’t send them out onto the wider Internet (there are exceptions for complex corporate-style networks, but whatever). This is a good thing, because the cacophony of the whole world getting all of these messages would most definitely take down the Internet. It’s also safer, as we’ll see a bit later.

Roku and SSDP

OK, back to the main thread here. Per the ECP documentation, Roku devices use Simple Service Discovery Protocol for discovery. SSDP defines a multicast address (239.255.255.250) and port (1900), a set of messages using what old folks like me still call RFC 822 format, and two interaction patterns for discovery:

  1. A client looking for devices sends a multicast M-SEARCH message with the type ssdp:discover, setting the ST header to either ssdp:all (everybody respond!) or a specific service type string (the primary Roku type is roku:ecp). AFAIK there is no authoritative list of ST values, you just kind of have to know what they are.  Devices listening for these requests respond directly to the client with a unicast HTTP OK response that includes (thank you) addressing information.
  2. Clients can also listen on the same multicast address for NOTIFY messages of type ssdp:alive or ssdp:byebye; devices send these out when they are turned on and off. It’s a good way to keep a list of devices accurate, but implementations are spotty so it really needs to be used in combination with #1.
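Pattern #1 boils down to a small RFC-822-style text message. A minimal builder might look like this — my own sketch, not the toolbox code:

```java
// Builds an SSDP M-SEARCH request. HOST, MAN and ST are the
// standard SSDP headers; MX is the max seconds a device may wait
// before responding (3 is a common choice, assumed here).
public class MSearch {

    public static String build(String searchTarget) {
        return "M-SEARCH * HTTP/1.1\r\n" +
               "HOST: 239.255.255.250:1900\r\n" +
               "MAN: \"ssdp:discover\"\r\n" +
               "MX: 3\r\n" +
               "ST: " + searchTarget + "\r\n" +
               "\r\n"; // RFC-822 style: a blank line ends the headers
    }
}
```

You'd send the bytes of build("roku:ecp") — or build("ssdp:all") to hear from everybody — as a UDP datagram to the multicast group, then listen for the unicast HTTP OK replies.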

An SSDP client in Java

Seems simple, right? I mean, OK, the basics really are simple. But a robust implementation runs into a ton of nit-picky little gotchas that, all together, took me days to sort out. The end result is on github and can be built/tested on any machine with java, git and maven installed (be sure to fix up slashes on Windows):

git clone
cd shutdownhook/toolbox
mvn clean package
java -cp target/toolbox-1.0-SNAPSHOT.jar \
    com.shutdownhook.toolbox.discovery.ServiceDiscovery \

This command line entrypoint just sends out an ssdp:discover message, displays information on all the devices that respond, and loops listening for additional notifications. Somebody on your network is almost sure to respond; in particular for Roku you can look for a line something like this:

+++ ALIVE: uuid:roku:ecp:2N006D062746 | roku:ecp | | (/

Super cool! If you open up that URL in a browser you’ll see a bunch more detail about your Roku and the interfaces it supports.

Discovery Protocol Abuse

If you don’t see any responses at all, it’s likely that your firewall is blocking UDP messages either to or from port 1900 — my Linux Mint distribution does both by default. Mint uses UncomplicatedFirewall which means you can see blocking activity (as root) in /var/log/ufw.log and open up UDP port 1900 with commands like:

sudo ufw allow from any port 1900 to any proto udp
sudo ufw allow from any to any port 1900 proto udp

Before you do this, you should be aware that there is some potential for bad guys to do bad stuff — pretty unlikely, but still. Any protocol that can “amplify” one message into many (because multiple devices can respond) carries some risk of a denial-of-service attack. That can be very simple: a bad guy on your network just fires off a ton of M-SEARCH requests, prompting a flood of responses that overwhelm the network as a whole. Or it can be nastier: combined with “ip spoofing,” a bad guy can redirect amplified responses to an unsuspecting victim.

Really though, it’s pretty theoretical for a home network — routers don’t generally route these messages, so it’d have to be an inside job anyways. And once you’ve got a bad actor inside your network, they can probably do a lot more damage than just slowing it down. YMMV, but I’m not personally super-worried about this particular attack. Just please don’t confuse my blasé assessment here with the risks of the related Universal Plug-and-Play (UPnP) protocol, which are quite real.

Under the Covers

There is quite a bit to talk about in the code here. Most of the hard work is in UdpServiceDiscovery (I’ll explain this abstraction later), which uses two sockets and two worker threads:

Socket/Thread #1 (DISCOVERY) sends M-SEARCH requests and receives back HTTP OK responses. A request is sent when the thread first starts up and can be repeated either on demand or on a timer (by default every twenty minutes).

It’s key to understand that while the request here is a multicast message, the responses are sent back directly as a unicast. I didn’t implement the server side specifically, but you can see how this works in the automated tests — the responding device extracts the source address and port from the multicast and just replies with a standard UDP unicast message. This is important for us because only one process on a computer can actively listen for unicast messages on a port. And on many systems, somebody is probably already doing that on port 1900 (for example, the Windows service “SSDP Discovery”). So if we want to reliably hear HTTP OK responses, we need to be using an unused, automatically-assigned port for this socket.

Socket/Thread #2 (NOTIFICATION) is for receiving unsolicited NOTIFY multicasts. This socket uses joinGroup to register interest and must be opened on port 1900 to work correctly.
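Here's a stripped-down sketch of that two-socket arrangement — my own simplification of the idea, not the actual UdpServiceDiscovery code:

```java
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Sketch of the two sockets described above. Names are mine.
public class DiscoverySockets {

    public static final String SSDP_ADDR = "239.255.255.250";
    public static final int SSDP_PORT = 1900;

    // Socket #1 (DISCOVERY): bound to an OS-assigned ephemeral port,
    // so unicast HTTP OK responses come back to us even when some
    // other process (e.g. Windows "SSDP Discovery") owns port 1900.
    public static DatagramSocket openDiscoverySocket() throws Exception {
        return new DatagramSocket(); // no port argument = ephemeral
    }

    // Socket #2 (NOTIFICATION): must be on 1900 and must join the
    // multicast group, or unsolicited NOTIFYs never arrive.
    public static MulticastSocket openNotificationSocket() throws Exception {
        MulticastSocket sock = new MulticastSocket(SSDP_PORT);
        sock.joinGroup(InetAddress.getByName(SSDP_ADDR));
        return sock;
    }
}
```

The asymmetry is the whole point: requests go out to the well-known group/port, but replies come back to whatever ephemeral port we happened to bind.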

forEachUsefulInterface is an interesting little bit of code. It’s used both for sending requests and joining multicast groups, ensuring that the code works in a system that is connected to multiple network interfaces (typically not the case at home, but better safe than sorry). Remember that multicasts are restricted to a local network — so if you’re attached to multiple networks, you’ll need to send out one message on each of them. The realities of coordinating interfaces with addresses can get pretty complicated, but I think this gets it right. Let me know if you think I’ve missed something!
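I won't reproduce the real forEachUsefulInterface filter here, but a plausible version of “useful,” using the standard java.net.NetworkInterface API, might be:

```java
import java.net.NetworkInterface;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// One plausible "useful interface" filter: up, not loopback, and
// multicast-capable. The criteria are my guess at what matters;
// the method names are real java.net.NetworkInterface APIs.
public class UsefulInterfaces {

    public static List<NetworkInterface> list() throws Exception {
        List<NetworkInterface> useful = new ArrayList<>();
        for (NetworkInterface nic :
                 Collections.list(NetworkInterface.getNetworkInterfaces())) {
            if (nic.isUp() && !nic.isLoopback() && nic.supportsMulticast()) {
                useful.add(nic);
            }
        }
        return useful;
    }
}
```

On a typical home machine this returns exactly one interface; on a laptop with wifi plus ethernet plus a VPN it's the difference between finding your Roku and shouting into the wrong network.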

The class also tries to identify and ignore duplicate UDP messages. Dealing with dups just comes with the territory when working with UDP — and while the nature of the SSDP protocol means it generally doesn’t hurt anything to re-process them, it’s just icky. UdpServiceDiscovery tries to filter them out using message hashing and a FIFO queue of recently-received messages. You can tune this behavior (or turn it off) through config; default is a two-second lookback.  
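A minimal version of that hash-plus-lookback idea looks something like this — names and the default window are mine, not the real config:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Sketch of duplicate-UDP filtering: hash each message and keep a
// FIFO of {hash, timestamp} pairs, expiring entries older than the
// lookback window before checking for a match.
public class DupFilter {

    private final long lookbackMillis;
    private final Deque<long[]> recent = new ArrayDeque<>(); // {hash, millis}

    public DupFilter(long lookbackMillis) {
        this.lookbackMillis = lookbackMillis;
    }

    // returns true if this message was already seen within the window
    public boolean isDuplicate(byte[] message, long nowMillis) {
        // expire entries that have aged out of the lookback window
        while (!recent.isEmpty() &&
               nowMillis - recent.peekFirst()[1] > lookbackMillis) {
            recent.removeFirst();
        }
        long hash = Arrays.hashCode(message);
        for (long[] entry : recent) {
            if (entry[0] == hash) return true;
        }
        recent.addLast(new long[] { hash, nowMillis });
        return false;
    }
}
```

Note the tradeoff baked into the window size: too short and you re-process chatty devices; too long and you might ignore a legitimately repeated NOTIFY.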

Wait, is that Everyone? Enter WS-Discovery.

If you look closely you’ll see that UdpServiceDiscovery really isn’t specific to SSDP at all — all of the protocol-specific stuff lives in its own classes and transits through yet another layer of abstraction. What the heck is going on here? The short story is that SSDP doesn’t return most printers, and Microsoft always needs to be special. The longer story requires a quick aside into the insanity that was “WS*”.

Back in 1999 and 2000, folks realized that HTTP would be great for APIs as well as web pages — and two very different approaches emerged. First by a few months was SOAP (and its fast-follower WSDL), which tried to be transport-independent (although 99.9% of traffic was over HTTP) and was all about defined, strongly typed interfaces. The foil to SOAP was REST — a much lighter and Internettish way to think about machine-to-machine interaction.

SOAP was big company, REST was scrappy startup. And nobody was more SOAPy than Microsoft. They had a whole group (I’m looking at you Bill, and your buddy John too) that did nothing but make up abstract, overly-complicated, insane SOAP-based “standards” informally known as “WS*” that nobody understood or needed. Seriously, just check out this poster (really, click that link, zoom in and scroll for awhile, it’s shocking). Spoiler alert: REST crushed it.

Anyways — one of these beasts was WS-Discovery, a protocol for finding devices on a network that does exactly the same thing as SSDP. Not “generally the same thing,” but exactly the same thing. The code that works for SSDP works for WSD too, just swap out the HTTP-style metadata for XML. Talk about reinventing the wheel, yeesh. But at least this explains the weird object hierarchy in my discovery classes.

Since these all use callback interfaces and sometimes you just want an answer, I added OneShotServiceDiscovery that wraps up Ssdp and Wsd like this (where “4” below is the number of seconds to wait for UDP responses to come in):

Set<ServiceInfo> infosSSD = OneShotServiceDiscovery.ssdp(4);
Set<ServiceInfo> infosWSD = OneShotServiceDiscovery.wsd(4);

There’s an entrypoint for this too, so to get a WSD device list you can use (the example is my Epson ET-3760):

java -cp target/toolbox-1.0-SNAPSHOT.jar \
    com.shutdownhook.toolbox.discovery.OneShotServiceDiscovery \
urn:uuid:cfe92100-67c4-11d4-a45f-e0bb9e278967 | wsdp:Device wscn:ScanDeviceType wprt:PrintDeviceType | | (/

Actually, a full WSD implementation is more complicated than this. The protocol defines a “discovery proxy” — a device on the network that can cache device information and reduce network traffic. A proxy advertises itself by sending out HELLO messages with type d:DiscoveryProxy; clients are supposed to switch over to use this service when it’s present. So so much complexity for so so little benefit. No thanks.

Don’t forget the broadcast bunch

And we’re still not done. SSDP and WSD cover a bunch of devices and services, but they still miss a lot. Most of these use some sort of custom broadcast approach. If you poke around in UdpServiceDiscovery you’ll find a few special case bits to handle the broadcast case — we disable the NOTIFICATION thread altogether, and just use the DISCOVERY thread/socket to send out pings and listen for responses. The Misc class provides an entrypoint for this; you can find my Roomba using broadcast port 5678 like this:

java -cp target/toolbox-1.0-SNAPSHOT.jar \
    com.shutdownhook.toolbox.discovery.Misc \ 5678 irobotmcs
============ /
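The broadcast case itself is simple enough to sketch. This hypothetical helper just builds a UDP packet aimed at the limited broadcast address — the payload and port are placeholders, since each device family (Roomba, Tuya, etc.) expects its own probe:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical broadcast "ping" helper. Payload and port are
// placeholders; real devices each expect their own probe format.
public class BroadcastPing {

    public static DatagramPacket buildPing(String payload, int port)
            throws Exception {
        byte[] data = payload.getBytes(StandardCharsets.UTF_8);
        return new DatagramPacket(data, data.length,
            InetAddress.getByName("255.255.255.255"), port);
    }

    public static void send(DatagramPacket ping) throws Exception {
        try (DatagramSocket sock = new DatagramSocket()) {
            sock.setBroadcast(true); // required to send to 255.255.255.255
            sock.send(ping);
            // responses (if any) arrive back on this socket's port
        }
    }
}
```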

Sometimes you don’t even need a ping. For example, devices that use the Tuya platform just sit there and constantly broadcast their presence on port 6666 or 6667:

java -cp target/toolbox-1.0-SNAPSHOT.jar \
    com.shutdownhook.toolbox.discovery.Misc \ 6667 tuya
============ /
????1r???8?W??⌂???▲????H??? _???r?9???3?o*?jwz#?$?Z?H?¶?Q??9??r~  ?U

OK, that’s not super useful — it’s my smart ceiling fan, but apparently not all of their devices broadcast on 6666, and the messages on port 6667 are encrypted (using a global key, duh — this code shows how to decrypt them). This kind of thing annoys me because it doesn’t really secure anything and just makes life harder for everyone. I’m going to register my protest by not writing that code myself; that’ll show them.

In any case, you get the point — there are a lot of ways that devices try to make themselves discoverable. I’ve even seen code that just fully scans the network — mini wardialers that check every possible address for specific open ports (an approach that won’t survive the eventual v4-v6 address transition!). It’d be nice if this was more standardized, but I’m happy to live with a little chaos in return for the innovations that pop up every day. It’ll settle out eventually. Maybe.

Now back to that Roku app, which will use something like 5% of the code I wrote for this post. Just one of the reasons I’m a fan of retirement — I can burn cycles on any rabbit hole I damn well please. Getting the code right can be tricky, so perhaps it’ll prove useful to some other nerd out there. And as always, please let me know if you find a bug!

Bentwood Ring v0.8

A few weeks ago “bentwood rings” started showing up on my Pinterest feed amongst the usual fare of woodturning and off-grid power systems (that recommendation engine knows its business). I’d been thinking vaguely about wooden rings for awhile, so went down the Internet rabbit hole to see what was up. There’s a ton of good stuff on YouTube, but this step-by-step guide was my favorite.

Bonus shot of the silicon ring Lara got me!

Bentwood rings are made by wrapping a very thin bit of wood (think veneer or shavings) around a form, gluing the overlapping layers together with CA (aka “Krazy”) glue. This basically creates plywood, with the grain running in circles around the ring. A bit complicated, but much stronger than a ring cut from solid wood, which inevitably has narrow bits with perpendicular grain that snap easily. Of course, all that glue doesn’t hurt the stability either.  

Most online tutorials use purchased wood veneers, and you can get some great looking stuff that way. But one of the main reasons I love to build with wood is that I can know exactly where it comes from — material I’ve harvested myself, from a place that has meaning to me. So I chose the shavings method, starting with a branch from one of the beautiful maple trees in our Bellevue yard.

The end result isn’t an unqualified success — there are clear flaws in the finish and it’s a bit bulkier than I’d have liked. But it was a ton of fun to make, and with some practice I think I’ll be able to get it right. So I’m calling this first attempt “version 0.8” which leaves me room to take another shot. Lots of lessons to talk about!

Sourcing and forming the shavings

The first step was to secure the maple branch (green, about a foot and a half long and one inch in diameter) in my bench vise and use my new spokeshave to first flatten it out and then pull off some long curls. It’s important to keep these as even as possible so that they’ll wrap neatly — shaving is easy and fun, so take a bunch and then pick out the best. They should be thin but not translucent, and wider than the final ring will be, to account for splitting at the edges.

If your spokeshave starts to “bounce” or catch as you cut, try reversing direction — you always want to be cutting “downhill” so that you’re not bumping the blade into end grain. This is a really neat tool and just super-satisfying to use when it’s cutting cleanly.

Use medium-grit sandpaper (like 240) to smooth out any bumps or nicks in the shavings. Then sand a shallow bevel into each end so that they taper off to almost nothing. Both of these are to ensure that the layers wrap together as closely as possible to prevent crappy-looking glue pockets.

You’ll want to wrap the shavings against their natural curl — honestly I’m not sure that this matters, but it makes the smoother side of the cut face out and that’s how “they” say to do it. Your shavings may be flexible enough to just wrap them as-is, but I chose to soak mine for about fifteen seconds in hot water, loosely roll them in a loop against the curl, tape them in that position and leave them overnight. This reversed the curl so that the actual glue-up took slightly less manual dexterity.

Making the blank

Next up was wrapping and gluing the shavings into the ring blank. Find something round that is just slightly smaller than the diameter of the ring you want to make (sanding to fit is easier than trying to build up layers of finish inside) — I used a socket from my toolbox wrapped with Teflon plumbing tape. The Teflon worked OK, but next time I’m going to try plain old Scotch tape which seems like it’ll absorb a bit less of the glue. I’m also going to try a stepped ring mandrel instead of the socket, just to take some guesswork out of the sizing.

I used Starbond thin CA glue for this step, which seems to be pretty standard. First, wrap the shaving once as tightly as possible around the form. Add plenty of glue to about a quarter inch bit of shaving and then press and hold until it sets, about ten seconds or so. Soak the next quarter inch with glue, press that bit down, and keep repeating this until you’ve wrapped around the form three or four times (or a little bigger, whatever you like). Don’t be stingy with the glue, you really want it to get into the wood fibers.

Getting the wrap finished takes a bit of finger gymnastics and despite using vinyl gloves I ended up with glue all over my hands — ah the price of art. Do your best to keep the wrap even and flat so that there are no visible pockets of glue. Secure the wrap with some masking tape, drip in a little more glue from each edge, and let it cure fully — overnight should be more than enough.

Shaping and Sanding

sanded but unfinished

This part was the most fun for me. After twisting the ring off of the form, I used a thin pull saw to cut off the rough edges and my spindle/belt sander to clean up the inside and outside of the ring. I found a dowel to hold the ring securely on the lathe and then shaped the ring to what I thought was a good profile (turns out it was a little too bulky, oh well).

From there it was all about sanding through the grits — 120, 240, 400, 600, 800, 1000. I did the same on the inside of the ring by hand, which wasn’t nearly as mind-numbing as you’d expect. The surface area is small so you only need a couple of minutes with each one; I use “one pop song per grit” as my rule of thumb.

I was really pleased with the ring at this point. A little large, but the grain was quite pretty and the ring felt strong and smooth.

Finishing it Up

The last step was to put a glossy protective finish on the ring. Turns out that the most common finish is … wait for it … even more CA glue. Because I was worried about the ring sticking in this step, I bought a one-inch HDPE rod and shaped it to fit the ring (CA glue doesn’t stick well to HDPE). In the end, though, it was pretty easy to keep the glue from dripping off of the ring, so that probably wasn’t necessary.

The technique here was to get the ring rotating on the lathe at its slowest setting (about 45 rpm), apply a drop of medium-thickness CA glue to the ring, and use a toothpick to spread it over the surface, making sure to reach the edges. Let that drop cure, and repeat until you like the look of the finish. The same approach works for the inside of the ring, but since I didn’t have an easy way to lathe-mount the ring with the inside exposed, I just rotated it in my left hand while applying and spreading glue with the right. It worked just fine.

But here’s where I really messed up. First, and I could really kick myself for this, a tiny bit of black HDPE dust was left on the mounting rod after shaping it, and a couple flakes got stuck in my finish. Tiny flakes, but they look like bits of dirt — infuriating. The bigger issue is that I added too much finish with each step (2-3 drops rather than 1), and didn’t let it cure sufficiently between coats. The end result is quite a bit of clouding and a few bubbles deep in the finish. Bummer.

Forging ahead, I re-sanded the ring through the same grits plus 2000 and 3000 at the end, then buffed it out with a plastic polish and buffing wheel. The surface is beautiful, glassy and strong, but the imperfections underneath are still quite apparent. Ah well, that’s why it’s version 0.8.

Next time will be better! I took some red alder shavings from a fallen tree at the Whidbey beach and will have another go in the next week or two (and will get some better in-progress pictures). There are some great advanced techniques to try as well — adding inlays, combining different types of veneer, and so on. There is always something new to learn and try. Hope you’ll give it a shot, and please let me know if you do.