Over the last couple of years I’ve gone from being an amateur woodworker to still an amateur woodworker. Mostly I’ve learned first-hand how much I respect the craftspeople who have invested a lifetime really understanding the materials, tools and techniques involved. I’ve watched my wife develop that mastery in textiles over decades, but nothing hammers it home like trying to create something yourself (please appreciate the hammer joke).

Anyways, along this journey I’ve noticed that there is a striking similarity between woodworkers and software developers — both groups spend at least as much time making tools-to-make-stuff as they do actually making stuff.

Routers are Magic

My latest experience with this in the woodshop started with a burl from a huge first-growth fir stump a little way down the beach. After using it to create some fun pieces on the lathe, I was left with a nice slab that had qualities of both the burl and the healthy tree — perfect for a coffee table centerpiece.

I cut the slab with my chainsaw, so it was pretty uneven — task #1 was to flatten it up. Typically I’d do this with my planer, but read this gem in the manual: “DO NOT PLANE END GRAIN, AS THE WOOD COULD SPLINTER OR POSSIBLY EXPLODE.” Explode? While I’d like to see that, I would prefer not to experience it. Instead, enter the router sled. A sled holds a router at a fixed height over the work table, allowing the user to pass a flat cutting bit in “scan lines” over the piece until it’s even. Of course, I didn’t have such a device.

router sled

Time for a tool project! Figuring out how to build a sled that would work in my shop took a little research, but a day later I had a workable setup. A little fine-tuning and a lot of sawdust later — success!

Next up, trimming the edges, which were pretty damaged and brittle from years on the beach. The trick here was to follow the natural contours of the piece — I didn’t want to lose the live edge, just make sure it wasn’t going to be too sharp or break off. A straight-bit router would do the job, but it would be really hard to guide the tool smoothly around the slanted and uneven surface.

router table

Another tool project! This time to create a router table, which allows the bit to project up from underneath. In this configuration, I could slide the piece around the bit rather than the other way around. Took another day to build the table, but it worked great, and I even added a little mount for my dust collector, so a little less sawdust this time.

More work to do before this one is finished: adding some etching to the top of the slab, pouring the epoxy coating and cutting a cork pad for the bottom. But the centerpiece is well on its way after only two days' worth of toolmaking side trips, and of course those tools are ready at hand the next time I need them. Woo hoo!

Back to Software

Back in front of the computer, I thought it’d be useful to write some code demonstrating SMART on FHIR, a pair of technologies that make it easier to add new functionality to the Electronic Medical Record systems in use at most hospitals and healthcare practices these days. I’m actually optimistic about this approach and it’s worthy of a little evangelism.

But even for learning purposes, I don’t like to write code that isn’t at least a little useful, so first I had to figure out what my SMART on FHIR app would actually do. It turns out that ClinicalTrials.gov (the US government’s central database of human trials) has a nice search API, so I thought a little plugin that would match a patient’s record with potential trials would be a good place to start.

Sitting down to write the class encapsulating this search, of course I needed a way to make HTTP calls from Java. There are a million libraries for this, but dependencies suck. JDK11 has a nice HttpClient class, but with so many folks still on JDK8 I didn’t want to limit the audience. That leaves us with the old workhorse, HttpURLConnection. It’s a fine class and I’ve used it in a few articles already, but it requires a bunch of scaffolding and has some quirks that can quickly cause issues in production use.
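To give a sense of that scaffolding, here is a minimal sketch (not the article’s actual code) of what even a bare-bones GET looks like with HttpURLConnection: timeouts you must remember to set, a different stream for error responses, and manual reading of the body.

```java
import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hedged sketch: the boilerplate a raw HttpURLConnection GET requires.
public class RawGet {

    public static String get(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(5_000);   // without these, a dead host hangs forever
        conn.setReadTimeout(30_000);
        try {
            int status = conn.getResponseCode();
            // error responses arrive on a different stream than successes
            InputStream in = (status < 400) ? conn.getInputStream()
                                            : conn.getErrorStream();
            return readAll(in);
        }
        finally {
            conn.disconnect();
        }
    }

    static String readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
        return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }
}
```

And that still ignores redirects, retries and character-set negotiation, which is exactly why a wrapper earns its keep.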

Tool time! Turns out I won’t be writing about EMRs until next article, because right now we’re going to take a little side trip into the seductive-but-oft-abused world of building software tools and utilities.

First, is it worth it?

Building tools is just fun. There is something incredibly satisfying about creating a tidy and effective interface that does something useful and self-contained. They are usually easy to test well with automation. And you can’t beat that little dopamine hit every time they prove useful.

But bad tools are the worst. The wooorst. They look reusable, but are really tied to whatever unique use case the creator was worried about. Or they work great in a prototype but ignore important production stuff like logging or timeouts. Maybe they have hidden restrictions that aren’t enforced or documented. If you’ve written production code for any length of time, you’ve been there.

So the first questions to ask are — (1) is this tool really something that will be reused; (2) do you really understand those use cases; and (3) are you really committed to make it something usable? Until all of these are obvious yesses, Just Say No. Do some basic modularization within your own feature space and file it away until the next time something similar comes up. In my experience, the third time you see a problem is usually about the right time to go full tool on it.

HTTP Requests

It is a pretty rare system these days that doesn’t make remote calls to some other service using HTTP. The vast majority of the time, consumers of these services basically just want network calls to act like local ones:

  • They return synchronously.
  • They time out reasonably.
  • They are secure, or at least not gratuitously insecure.
  • They accept and return Java objects, not wire types.

Truth be told, most consumers write their features as if these things are true regardless of what their HTTP library actually does. And because usually remote calls return quickly without errors, those assumptions easily make it through development and basic testing. It’s only in production (where everything happens) that things go south.

This reality is why you need an easy-to-use library built for production. It will dramatically reduce the number of times you have to re-visit features to add timeouts or error handling or other crap that should have been there to begin with. In short — it’s worth it.

Side note — “it’s worth it” even if you’re using a robust third-party HTTP library as your underlying implementation. These libraries are generic by design. You’ll reap all the same benefits by pre-wiring them to your specific configuration and logging services, and by reducing optionality using your own façade.
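As an illustration of that side note, here is a hypothetical façade (names are mine, and it happens to wrap JDK11’s HttpClient just for the example): one pre-wired client, one narrow method, and none of the underlying library’s option surface leaking to call sites.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Hypothetical façade: callers get exactly one way to do a GET,
// with timeouts and redirect policy decided once, here.
public class SimpleHttp {
    private static final Duration TIMEOUT = Duration.ofSeconds(10);

    private final HttpClient client = HttpClient.newBuilder()
        .connectTimeout(TIMEOUT)
        .followRedirects(HttpClient.Redirect.NORMAL)
        .build();

    public static long timeoutSeconds() { return TIMEOUT.getSeconds(); }

    public String get(String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
            .timeout(TIMEOUT)   // per-request read timeout
            .GET()
            .build();
        HttpResponse<String> resp =
            client.send(req, HttpResponse.BodyHandlers.ofString());
        if (resp.statusCode() >= 400) {
            throw new RuntimeException("HTTP " + resp.statusCode() + " from " + url);
        }
        return resp.body();
    }
}
```

The point isn’t the particular client underneath; it’s that the façade is where your configuration and logging decisions live, once.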

The first version of my tool for web requests is the WebRequests class. Basic use of the library is simple by design:

WebRequests requests = new WebRequests(new WebRequests.Config());
WebRequests.Response response = requests.get("");
System.out.println(response.successful() ? response.Body 
                   : String.format("Call failed: %d", response.Status));

Good tool design starts with the first line in this snippet. Default constructors hide settings that a user really ought to think about, but forcing a bunch of explicit/arbitrary choices up front makes the tool hard to adopt. As with so many other things, we try to find balance. Users must be aware that the Config class exists, and the options are crystal clear (see lines 29-36), but they can inherit a set of defaults that were thought through with production environments in mind.
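That pattern can be sketched in a few lines (a hypothetical simplification, with made-up field names and default values, not the real WebRequests source): the constructor requires a Config, so nobody can miss that it exists, but new Config() is always a safe, production-minded choice.

```java
// Hedged sketch of the "visible but defaulted" configuration pattern.
public class WebRequestsSketch {

    public static class Config {
        public int connectTimeoutMillis = 5_000;   // fail fast on dead hosts
        public int readTimeoutMillis   = 30_000;   // bound slow responses
        public int threadCount         = 8;        // worker pool size
        public boolean followRedirects = true;
        public String userAgent        = "webrequests-sketch/1.0";
    }

    private final Config cfg;

    // Requiring the Config parameter makes the knobs discoverable;
    // the defaults make them optional.
    public WebRequestsSketch(Config cfg) { this.cfg = cfg; }

    public int readTimeoutMillis() { return cfg.readTimeoutMillis; }
}
```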

The Async Morass

Network IO really needs to be done asynchronously — the amount of time spent waiting on network activity is huge and can cause havoc in your main threads. But async coding is absolutely crap to get right, and the callback constructs built into modern environments

Create {
	The worst {
		Nesting {
			Disasters {
				Ever {
					No Seriously {
						I’m going to lose it. {

Blocks are for variable and condition scoping. I’ve learned to live with them for Exception handling. Using them to keep track of threading, especially beyond a single callback, is nuts. And don’t start yammering about “comprehensions” because you can’t debug those either.

Look, ok — complex things are complex (lines 116-181), and with real care you can manage it. But in most real-world situations, quality code should read as, and be organized around, logical behavior, not threading implementations.

We deal with this in WebRequests by hiding an ExecutorService inside the class. “get” methods appear synchronous to consumers, but coordinate behind the scenes, and are guaranteed to return control to the calling thread within their timeouts. This isn’t particularly easy to do with HttpURLConnection (note the syncFutures stuff starting at line 205), and there are some edge cases that will leave worker threads in a temporarily zombie-ish state, but they won’t gum up the primary threads of the application.
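The core of that trick, stripped of all the HTTP details, looks roughly like this (a minimal sketch, not WebRequests itself): submit the blocking work to a hidden pool, and use Future.get with a timeout so the caller’s thread is released on schedule no matter what the worker is doing.

```java
import java.util.concurrent.*;

// Hedged sketch: synchronous-looking API, bounded wait, hidden workers.
public class BoundedCall {
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    public String get(Callable<String> slowCall, long timeoutMillis) throws Exception {
        Future<String> future = workers.submit(slowCall);
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        }
        catch (TimeoutException e) {
            // The worker may linger ("zombie") until its IO gives up,
            // but the calling thread gets control back immediately.
            future.cancel(true);
            throw e;
        }
    }

    public void close() { workers.shutdownNow(); }
}
```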

For complex use cases, the class offers up the CompletableFuture directly through getAsync … but I doubt that ever sees much use.

The biggest downside to this approach is that the consumer needs to close() the tool at shutdown time, or whenever they want to reclaim resources. This is an annoyance for sure, but on balance I like the tradeoff. If the user doesn’t eventually call close, process shutdown will hang. We could “fix” this by setting our worker threads as daemons. But I like it as-is … the shutdown behavior is obvious and a good reminder to clean things up, making it easy to do the right thing.
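One way to soften that annoyance (assuming the tool implements AutoCloseable, which is my illustration here, not necessarily what WebRequests does) is try-with-resources, which turns the close() obligation into something the compiler helps you remember:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hedged sketch: a tool owning non-daemon workers must be closed,
// or JVM shutdown will hang waiting on the pool.
public class ClosableTool implements AutoCloseable {
    private final ExecutorService workers = Executors.newFixedThreadPool(2);

    public String echo(String s) throws Exception {
        return workers.submit(() -> s).get();
    }

    @Override
    public void close() {
        workers.shutdown();  // let workers finish so the JVM can exit cleanly
    }
}
```

Usage then becomes try (ClosableTool t = new ClosableTool()) { ... }, and the resource is reclaimed even if the body throws.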

Bounded Functionality, not Partial Implementation

WebRequests v1 is a solid and pretty much complete solution for common GET requests. It helps manage query string parameters, deals with IO and error cases, and returns data as a convenient String that can be parsed into JSON or whatever if necessary. What it does, it does well.

On the other hand, there are a few things it doesn’t pretend to support: POST, authentication, cookies, other custom headers (in or out), all kinds of things. “Not pretending” is the key to a good tool, which needs to be both great at and clear about what it does.

As a simple example, because we accept a Map of query string parameters, we escape them correctly and merge them into the base URL if it already includes parameters. Our public interface implies that we’ll do that, so we’d better deliver on the promise. But there is no place to specify a method … so clearly POST isn’t implemented.
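That promise, escape each parameter and merge correctly whether or not the base URL already has a query string, can be sketched like this (my own simplified version, not the WebRequests source):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.Map;

// Hedged sketch of query-string handling: escape values and pick
// '?' or '&' depending on whether the base URL already has parameters.
public class QueryParams {

    public static String addParams(String baseUrl, Map<String, String> params)
            throws UnsupportedEncodingException {
        StringBuilder sb = new StringBuilder(baseUrl);
        char sep = baseUrl.contains("?") ? '&' : '?';
        for (Map.Entry<String, String> e : params.entrySet()) {
            sb.append(sep)
              .append(URLEncoder.encode(e.getKey(), "UTF-8"))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), "UTF-8"));
            sep = '&';
        }
        return sb.toString();
    }
}
```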

The v1 functionality was a nice bite. It supports use cases beyond our own as-is, but also provides a “home” for new functionality as we need it — inheriting all the goodness we worked hard to get right in the first revision.

ClinicalTrialsSearch and What’s Next

With WebRequests in hand, ClinicalTrialsSearch just wraps up the interface to ClinicalTrials.gov with a simple object. There are some smarts there to try to make it easy to normalize data into the proper formats, but nothing earth-shattering, just handy. (I do like the way the helper classes do this normalization themselves.)

Finally ready to pop up the stack and get back into SMART on FHIR — stay tuned for that. But first a little more time in the woodshop to finish up my centerpiece and a couple of other projects. Until next time!