TLDR, drag this link to your bookmarks bar: explain. If you select medical-related text on any page and then click the link, it will open up an “explain” window with an AI-driven translation. Alternatively, you can just visit the site at https://explainmynotes.azurewebsites.net/.
Every day I seem to get just a little bit older. My folks too. Not a bad thing, but it does inevitably mean more time spent trying to navigate the clown show that is American Healthcare. Sometimes things are simple, and sometimes they’re little mystery dramas with House or Doc Martin trying to figure out what’s going on.
By now, most of us have gotten used to using patient portals like MyChart to keep track of our care at various providers. Lab results and clinical notes show up in near real time and — thanks to years of policy pressure — are quite comprehensive. (to wit: when I had appendicitis earlier this year, my wife at home texted me the diagnosis before anybody in the ER came to let me know what was up.)
Access to these primary sources is invaluable. But clinical notes are also full of jargon, shorthand, codes and concepts that very few of us understand. Take for example this short snippet from the surgery notes of my appendicitis visit:
I attempted to bring omentum to sit over the anastomosis, but the omentum was fairly short and there was no easy reach.
I read this shortly after coming out of anesthesia — something she tried to do didn’t work. Is that bad? Should I be concerned? Luckily, Dr. M is pretty awesome; she explained that the omentum is a layer of fat that protects organs in the abdomen, and they like to “drape” it over surgical sites to aid healing by enhancing local blood flow. Somehow despite my notable beer belly I didn’t have enough fat to make this work, but it’s not a big deal. Case closed.
Unfortunately, not every provider is a great communicator like Dr. M. And even when they are, appointments are so short and far-between that there’s rarely a good opportunity for questions like this. That’s why I wrote explain my notes.

Clinical Notes, explained by AI
explain my notes takes advantage of two pretty neat technologies: SMART on FHIR for data access and ChatGPT for helping to interpret the notes. Currently it’s set up to connect to providers using Epic MyChart. In a nutshell, it works like this:
- Visit the site, read the terms of use and pick your provider.
- Log in at the provider’s patient portal and approve the connection.
- Pick an encounter to see a list of associated documents.
- Pick a document to view its contents.
- Select any text in the document and choose “Explain Selection” to pop up a window that shows the original and “explained” text side by side:
And that’s it! You’ll need to authorize the app each time you use it, because Epic doesn’t permit long-lived tokens for “automatic download” patient applications. Ah well.
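Under the covers, “Explain Selection” boils down to reading the browser’s current selection and tidying it up before it’s sent off. A minimal sketch (the helper name and the length cap are mine, not necessarily what the app does):

```javascript
// Hypothetical helper: normalize the user's selected text before sending
// it off for explanation. Not the app's actual code.
function normalizeSelection(raw, maxChars = 4000) {
  // Collapse the stray whitespace that PDF and CCDA renderings scatter around
  const text = raw.replace(/\s+/g, " ").trim();
  // Keep requests (and API token costs) bounded
  return text.slice(0, maxChars);
}

// In the browser, the raw text itself would come from:
//   const raw = window.getSelection().toString();
```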
Caveat 1: The ChatGPT API isn’t free — if the app surprises me and gets a lot of direct use, I may have to figure out how to offset those costs. For now I just hope folks try it, find it helpful, and are inspired to build on the idea.
Caveat 2: As I mentioned, right now this is only hooked up to Epic MyChart sites. I’m happy to add Cerner or any other EHR that folks might be interested in; it just may take a minute. Let me know if there’s a particular provider you’d like to connect with.
Accessing the data: SMART on FHIR
From here on out it’s just nerd stuff; feel free to exit if that’s not your vibe! All of the code for explain my notes is on GitHub.
I’ve already written a bunch about SMART and why I think it’s so valuable, so I won’t repeat myself here. But this is the first time I’ve written a SMART app for patients, and there were a few interesting nuggets worth a mention:
Standalone Launch
explain my notes uses the “standalone launch” model. With a provider app, a huge part of the benefit comes from living within the context of the EHR — it gets you single sign-on and provider/patient context and feels seamless in an environment where providers are already spending much of their day. It’s not the same for patients; a dedicated site that can explain its function and then “connect to” the portal makes good sense.
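In a standalone launch, the app discovers the portal’s OAuth endpoints (via the server’s `.well-known/smart-configuration` document) and then redirects the patient to authorize. A hedged sketch of building that authorization request; the scopes, endpoint values, and function name here are illustrative, not necessarily the app’s:

```javascript
// Sketch of a SMART "standalone launch" authorization redirect URL.
// Values are illustrative; a real app discovers the authorize endpoint
// from the FHIR server's .well-known/smart-configuration document.
function buildAuthorizeUrl(authorizeEndpoint, fhirBase, clientId, redirectUri, state) {
  const url = new URL(authorizeEndpoint);
  url.search = new URLSearchParams({
    response_type: "code",
    client_id: clientId,
    redirect_uri: redirectUri,
    // Patient-facing, read-only scopes; notably no offline_access, since
    // refresh tokens aren't allowed for automatic-download apps.
    scope: "openid fhirUser patient/Encounter.read patient/DocumentReference.read",
    aud: fhirBase, // the FHIR server this token should be good for
    state,         // anti-CSRF value, checked on the redirect back
  }).toString();
  return url.toString();
}
```

In the browser you’d then do `window.location = buildAuthorizeUrl(...)`, and the portal sends the patient back to `redirect_uri` with an authorization code to exchange for an access token.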
Epic Automatic Download
The super-cool thing about patient-facing apps is that you don’t need to “register” them with each individual EHR. Instead, the EHR vendors maintain provider lists and automatically enable connections when authorized by the patient. It’s hard to overstate just how great this is — back in the day, we had to arrange to connect HealthVault to each and every provider that wanted to work with us.
Careful, though! Automatic download comes with conditions, and they are not immediately obvious (Epic’s conditions are documented behind a free login). “Refresh” tokens aren’t allowed; only certain data types can be accessed; no “write” operations are permitted, etc. My first cut at the app didn’t meet the criteria exactly, and it took me a while to figure out what was going on.
PDF and CCDA Content
Many notes are stored as HTML or text. Encounter summaries, though, are often stored in “CCDA” format — an old-school XML standard. XML needs to be translated into HTML for display in a browser, and while there is some solid open source code for doing that, the generated HTML doesn’t always display nicely within a larger web page. I was able to tweak it for my purposes; the altered stylesheet is available per the original’s open-source license terms.
PDF content was also a challenge to display so that it both (a) looks correct and (b) makes the selection available for sending to ChatGPT. I ended up doing a server-side translation using pdftohtml, an old standby that still works surprisingly well.
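Putting the last two sections together, display comes down to dispatching on the attachment’s MIME type. A sketch of that routing decision (the function and return labels are mine, not the app’s actual code):

```javascript
// Illustrative sketch: decide how to render a FHIR DocumentReference
// attachment based on its MIME type. HTML and plain text go straight into
// the page; CCDA XML gets run through the XSLT stylesheet; PDFs get
// converted server-side (e.g. with pdftohtml) before display.
function pickRenderer(contentType) {
  const mime = (contentType || "").split(";")[0].trim().toLowerCase();
  switch (mime) {
    case "text/html":
    case "text/plain":
      return "inline";     // show as-is
    case "application/xml":
    case "text/xml":
      return "ccda-xslt";  // transform XML -> HTML with the CCDA stylesheet
    case "application/pdf":
      return "pdftohtml";  // convert on the server, then display the HTML
    default:
      return "download";   // fall back to a raw download link
  }
}
```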
Explaining notes: ChatGPT
I think it’s clear that generative AI is going to be a seriously Big Deal — combustion engine and Internet big. But it’s still very early days, and it’s hard not to be annoyed by the seemingly endless garbage “applications” being churned out by hype-riding VC-funded bros. I get that — but bear with me.
Generative AI (specifically ChatGPT for us) is pretty amazing if you think about it as your well-read, smart, eager-to-please friend without any formal training and a fear of being wrong. People like this are super-useful, because they’ve probably come across information that you haven’t, and can be great “translators” of jargon and other specialty content. You just have to take what they say with a grain of salt — a little fact-checking goes a long way.
The ChatGPT “chat completions” API is pretty simple — it takes an array of messages (a “system” setup plus the user’s input) and returns a response formatted as Markdown. There are a few knobs you can turn, but that’s basically it. “Prompt engineering” is a weird concept, much closer to social engineering than code. The current “setup” prompt for explain my notes is this:
You are a medical professional that explains clinical notes and other medical text using terms and language that an average American adult without medical training will understand. Minimize the use of jargon. Your responses should not be notably longer than the original text. Also please include up to three Google search links targeting the key topics you find.
The “Google search links” part here is the most interesting. I initially asked the system to return “up to five links that would be helpful for further research,” but it turns out that ChatGPT is terrible at this, and is actually known for simply making up gibberish URLs. I’m not sure why this is the case; apologists claim they’re just stale links from old training data, but it’s way more than that. Restricting the links to Google searches seems to work pretty well.
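Wiring the setup prompt together with a user selection looks roughly like this; the system prompt is the one quoted above, while the function name and shape of the call are my sketch rather than the app’s verbatim code:

```javascript
// Sketch of the messages array sent to the chat completions API. The
// system prompt is the one quoted above; "selection" is whatever text
// the user highlighted in the note.
const SYSTEM_PROMPT =
  "You are a medical professional that explains clinical notes and other " +
  "medical text using terms and language that an average American adult " +
  "without medical training will understand. Minimize the use of jargon. " +
  "Your responses should not be notably longer than the original text. " +
  "Also please include up to three Google search links targeting the key " +
  "topics you find.";

function buildMessages(selection) {
  return [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: selection },
  ];
}

// These messages would then be POSTed to the chat completions endpoint
// (via the openai client library or a plain fetch), and the Markdown
// response rendered in the "explain" window.
```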
And I guess that’s it for now! Please give the app a try — good test data is hard to come by and so I’d appreciate any and all feedback or bug reports. Until next time…

