My Most Embarrassing Bug ...

6 min read

My day started like many others at Epic, except that it was a support week for me. I’d grown more capable and could deftly handle most customer support calls without assistance, and I no longer suffered insta-panic when the front desk notified me that I had a parked call from a customer.

Take a deep breath, grab a pad and pen, and steady myself for maximum attention mode. "Hi, this is Aaron."

"Hey Aaron, this is Dave.” (It really was Dave, one of the primary contacts from one of our customers. He passed away 11 years ago unfortunately 😢). He continued, “I just finished with the install and everything went well except I found something unusual that may be a problem.”

My stomach tightened and my pulse quickened. Weeks earlier, I had done the Cohort release packing and testing, so I was disappointed that I missed something.

DevOps and Stuff

During the period I was on Cohort, teams were individually responsible for distributing their software to customers. There wasn’t a “Release Team.” My built-in predictive algorithm suggests you’re wondering whether this was just early “DevOps and Stuff!” 😃

Releasing Wasn’t Easy or Simple

There were some core utilities supplied by the Foundations team to export and import various MUMPS things, including routines, global data, and Chronicles-related data/metadata. It was functional. It wasn’t easy or simple. There wasn’t any scripting or automation. There was no CI/CD. (Well, there was Carl Dvorak, but he didn’t pack releases then.) Continuous and automated deployments during the development cycle remained an unfulfilled dream for many years.

We tested the software on all of the system types used by customers, including 2 flavors of Unix, VMS, and DataTree. Fun times! (With only 5 Cohort customers, it was bad luck that there was that much variety in MUMPS/OS combinations.)

Each release step required detailed documentation so that the correct processes were performed exactly as specified. Pages and pages of hand-typed instructions in an editor with less-than-markdown formatting capabilities.

As part of a release, we’d MANUALLY analyze each completed project (one that had successfully moved to a “build” environment). Anything new was packed and double-checked (insiders: a shout-out to my old friends ETAN, ^%ZeRSAVE, ^%ZeRLOAD, ^%ZeGSAVE, and ^%ZeGLOAD!!).

Any single release could include hundreds of ingredients. Leaving one item out could, at worst, break a production environment with a failed release, or at best just cause a bug in the application. (Yes, customers would ideally do a backup FIRST …) Thankfully, there was no rambling preamble in the documentation like websites with recipes these days.

Epic staff who started after the mid-1990s enjoyed the new “integrated” release cycle.

Multiple Releases To Rule Them All

Before the single integrated Epic release became common, every Epic team released its software when it was ready. It wasn’t quite controlled chaos, but it was often frustrating (RPGers: maybe chaotic neutral?). For applications that were integrated in some way (like Resolute Billing with Cadence and Cohort), the lack of an integrated release was annoying and confusing for customers and for Epic internally. While not a constant source of consternation, the challenges of non-synchronized release schedules included two common issues:

  • Foundations/Chronicles version dependencies (App A was developed on Foundations 1.5, yet App B needed 1.6 — App A hadn’t been thoroughly tested with 1.6).
  • App A needed App B integration and was ready, but App B wouldn’t be installed for an unknown number of months.

The multiple Epic releases for different apps also impacted customers as they had to manage and schedule multiple releases. It wasn’t great.

What Had I Done????

In any case, I’d done the release packing, so if the issue was related to my packing, whatever had happened was on me.

Dave seemed like he was in a good mood, so whatever he’d discovered wasn’t an emergency. The conversation went something like this:

“So, I discovered something very odd when I was looking at the new SQL support, something I wasn’t expecting and that also wasn’t documented. I was looking through the SQL queries that were included, and we will definitely use them. Except …”

Here it is. I tentatively acknowledged, “yes?"

"I found a few queries named something like this:“

MUMPS
X_ATC_BLAH_BLAH_BLAH
X_ATC_BLAH_BLAH_BLAH_2
X_ATC_BLAH_BLAH_BLAH_STUFF

I was aghast.

Internal Only

Items prefixed with X_ had been a convention for years for internal-only stuff that would not be released to customers. I’d mistakenly packed up the X_* stuff from development and shipped it as part of the release. X_ATC_ was the prefix I used for my items (another convention at Epic was to use your initials as an alias rather than your name).

Yay me.

Me: “Oh! That’s a mistake! Those are my test SQL queries I use when I’m exploring ideas. I rename them when I’m done. But, I definitely didn’t mean to include them.” I likely apologized way way too much.

He went on to list a few more that I’d included. But, the fact that I’d managed to include my test queries … slap forehead. I knew there wasn’t anything embarrassing in any of them — they were just my playground queries.

”You can delete them or enjoy them.” I had already opened the editor to see what was in them. I may never have typed as fast as I did at that moment: what in Judy’s name had I shipped? (It was just as I said — test queries that worked or were commented out.)

We both had a good laugh about the mistake. OK, I laughed with him but was not feeling the humor. It took me quite a while to get over the blunder. I hadn’t realized at the time that he was trying to be as serious as he could at the start of the call.

I got only a minor scolding from my TL, as it was a non-serious oversight and my own embarrassment served as a sufficient life lesson.

I suspect I inadvertently overlooked the test queries during release packing because I was so accustomed to seeing them everywhere — they had become part of the working system. I had queries everywhere when I was testing the new Epic SQL functionality. For the next release, I did write a release tool to delete the X_* queries if they were still present.

Thankfully, I did not repeat this mistake. What’s your most do-no-harm embarrassing software bug?


I Wanted a New Job After This Project

9 min read

Have you worked on a project that you really never completely understood? A project where you worked on it and completed it but couldn’t have answered many specifics about what it really did or was? Often, these projects take you out of your comfort zone, but eventually the core begins to click and things start to make sense. That wasn’t the case in this instance.

Since I started doing professional software development, I don’t remember any other multi-week project assigned to me where I had a nearly complete lack of understanding from start to finish (I’ve inflicted this type of pain on myself hundreds of times!).

Good news!

I think this was one of my final large projects when working on Cohort Lab. It started out poorly:

TL: “The other developer needs to work on an important data interface for a customer and he’s in the best position to work on that. We’ve also committed to finishing this other project that he had started working on and I need you to complete it instead. He’s made some progress though and it shouldn’t take too long to complete. He’ll be available to answer questions but the other project has a hard deadline.”

She went on to explain her general understanding of the project and the requirements. I met with the other developer so he could dump on me the work he’d completed and the documentation that was associated with the project. I can’t even sugar coat the wording there — dump best describes the transfer. PLOP. PLOP. The existing work was definitely incomplete and he hadn’t started what in many ways was the most important part. My sense is that he may have thought 70-80% of the project was complete. In reality, many parts had been started and some were more complete than others. It wasn’t 70-80% complete.

I stumbled into Levey-Jennings

The project was to implement a laboratory quality control charting system for Cohort Lab. In particular, the customer request was for a Levey-Jennings chart. Our documentation included one long academic paper from the late 1980s about how to compute the statistics and how the graph could be constructed. In addition to that paper, we had some examples.

That was it.

The other developer had spent most of his project time building data structures and doing some of the calculations. I thought I’d never actually need that math I’d learned in high school and college. But, here it was…smacking me in the face and mocking me for selling my college textbooks.

No Rosetta Stone Available

I spent many hours trying to make sense of the code, the paper, and what was left to do. I know there are people who enjoy the writing style of an academic paper, but I am not in that group. The documentation and examples we had were clearly intended to be read by experts in the field. I wasn’t an expert. I didn’t even know an expert. Oddly, neither were our customers — they wanted the software to do this, but could provide no helpful guidance.

If you’ve been reading along with my Epic journey, you may remember that Cohort Lab was an application that ran entirely within a terminal. There was not a reasonable way to do “charting” on screen. Even with my skills related to manipulating the screen and drawing and animating various ASCII characters, creating an on-screen representation of this data wasn’t practical (80x25, or 120x25 in wide mode, doesn’t provide the detail needed).

Customers wanted to print the Levey-Jennings graph.

The printing that was common at the time used dot matrix printers. Fast and Loud (no, not the TV show, although there were many dramatic moments with the printer spewing and spewing followed by the panic of trying to halt its enthusiasm). And while The Print Shop could do some amazing things with dot matrix printers, the printer technology wasn’t up to the task of creating a professional graph.

Epic, I think, had three laser printers at the time, and one of them was in the printer closet down the hall from my shared office. Laser printers weren’t very common. They weren’t very reliable at printing. Unjamming became my jam during this project.

So, I needed to figure out how to wrangle a laser printer into making graphics from MUMPS. The Epic printer drivers/engines were text focused. Actually, they couldn’t do anything but print text. I found some documentation suggesting that various proprietary options existed for communicating with the printers and printing something other than text. PCL came up a few times, but customers weren’t consistently using printers that supported PCL then. To add to the complexity, connecting to the laser printer from MUMPS running on a Unix (or OpenVMS) system, and then getting the printer to switch to PCL, was very poorly documented (remember, no Internet!). I tried. I consulted internal experts. Epic was too tiny to have meaningful technical contacts at hardware manufacturers. That approach was abandoned as it seemed like it would become a project much larger than the one I was working on (and there weren’t staff resources for that).

If you’re keeping score, … I didn’t understand the charts, the lab tests, the math, the terminology, much of the code that had been dumped on me, or how to get charts to appear on a laser printer.

We did have documentation available (man pages and a basic user manual) for a graphing system called gnuplot. (Sorry, the official website isn’t HTTPS!!, so here’s a Wikipedia article if you’re interested.) From their home page, it looks like version 2.0 or 3.5 would have been available at the time.

Fade to a cool movie-like montage of a software developer looking at manuals and trying to understand how to piece these parts together into a releasable software package …

It took a while to get a basic set of moving parts together and operating. During the montage, my struggles included:

  • how to install gnuplot
  • how to shell out of MUMPS and execute a random application that could be installed in a variety of locations (the MUMPS answer to shell out is to use the BANG! syntax: !gnuplot)
  • how to detect if it was available and the right version
  • how to generate a data/command file that could be read by gnuplot (a rough sketch follows this list)
  • how to trigger a print to the laser printer after executing gnuplot
  • how to delete the file after the laser printer had completed the print
  • how to return to MUMPS after all of this, otherwise leaving a stuck MUMPS process that an IT admin would need to clobber
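
To make the montage a bit more concrete, here’s roughly what one of those gnuplot command files looked like in spirit. This is a hedged reconstruction rather than the actual Cohort output: the file names, the numbers, and the control-limit lines are illustrative (a Levey-Jennings chart plots QC results over time against the mean and ± SD limits).

gnuplot
# hypothetical command file that the MUMPS code would write out before shelling out
set terminal postscript          # render to PostScript for the laser printer
set output "lj.ps"
set title "QC: Control Level 1"
# lj.dat holds "run-number value" pairs; the constants draw the mean and +/- 2 SD lines
plot "lj.dat" using 1:2 with linespoints title "results", \
     4.10 title "mean", 4.30 title "+2 SD", 3.90 title "-2 SD"

The MUMPS side then shelled out (the ! escape mentioned above) to run gnuplot against that file, queued the resulting output to the laser printer, and cleaned up the temporary files on the way back.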

But, the real challenge — how to take all of this data that made very little sense to me and get it to print something that represented the graph that customers were looking for. I only had a terminal. In all but a few cases there was not a way for me to check the results without physically printing the page (I learned how to better read the gnuplot instructions so I could spot obvious issues).

Over and over and over. And over. I began to show progress to the team by taping each print to the walls of my shared office as high as I could reach. I filled 3 walls of the office. My step count went through the roof and there was likely a path worn from my desk to the printer closet from the trips back and forth. It took me a long while to get charts to print. I tried to make it enjoyable by hanging both successes and failures. Some prints would be empty. Some would have lines that went off the page, or were too small, too big, wrong, flattened, squished, stretched, … I had a fixed set of data that I was using so that I could know when I’d unlocked the secrets to Levey-Jennings Charts. I fixed some issues (math and data) with the dumped code as I homed in on getting a final result.

The project went long — I don’t know that I could have done anything differently though to expedite the work given the unknowns. For some reason, this project really didn’t click with me. I enjoy finding and fixing a bug way more than a project dump, especially of this complexity and with this many unknowns.

I know this was a project that customers selected and was important to them. But, it was miserable for me. It deflated me in a way that I rarely have felt over the decades. I knew the “parts”, but the “whole” remained elusive. I couldn’t find the passion I normally apply to my work. The end result was “clunky” at best as it required so many things to come together to get a successful print.

I was deflated. The result worked. But I could find no satisfaction, as the project felt thoroughly OK for what it was. I couldn’t make it better in a reasonable amount of time with the information and resources that we had available. Rarely do I accept “OK” as it relates to my output. I expect myself to add a bit of bedazzling to projects I’m involved with, but this project had nothing. No shine or glitter. No hidden gems.

It wasn’t even glossy: it was flat-matte.

It just existed as a hot-glued graphing contraption in the remote areas of Cohort. I didn’t know where a hint of sparkle could have been added.

Ugh.

My TL wasn’t overly surprised that I requested something “new” at Epic after completing this project. It was one of a few moments at Epic over the years where my conversation with a TL or Carl veered into the territory of me looking for other jobs. I had much, much longer periods of my Epic career where I wasn’t doing something I enjoyed, but this project had really gotten under my skin. Too many unknowns, along with the dump of code and not even having the satisfaction of knowing that customers would regularly use the end result, made it feel like a total waste of time to that young programmer.

I just asked my Cohort TL if she remembered any customer deploying any of this work…:

No.

🤨

Have you had any projects that just never clicked for you from start to finish? Tell me!

Not a Level 5

6 min read

It’s not uncommon to see someone build a small working demonstration or design a user interface for an aspect of managing healthcare. From managing your own personal healthcare data to a “complete” electronic health record, you’ll see it all.

Having worked at Epic for 25 years, I have had some experience with healthcare software from patient/consumer access (MyChart) to the core databases running the Epic system (and so many moving parts between). This morning while doing my routine of watching a few YouTube videos while exercising, I started watching a video entitled, “5 levels of UI skill. Only 4+ gets you hired.”

OK: pure and utter clickbait. I tapped it and started watching, drawn in to find out more, but I couldn’t stomach the full content after the creator made proclamations built on extraordinarily subjective measures of people and their designs: five levels?

The aspect of the video that caught my attention and raised my alarm for “this is utter nonsense” was the self-proclaimed level 5 application. Here’s a snapshot:

Level 5 Amazing App

If you’ve worked at Epic, or in healthcare in nearly any capacity, I suspect you may see why this “Level 5” application design is so poor.

I’m confident the author thinks this is an award-winning application. If you ignore the giant roundness of everything, let me describe the screen by way of a list:

  • A menu button (probably)
  • An alarm icon with a green dot
  • Good morning Angelina and a photo of Angelina
  • November 2022 Healthcare Report > Read Report
  • Your teammates (4 teammate images) and an add/plus button
  • Upcoming Appointments:
    • Now, Today, Thomas Lawrence, Briefing > Join Meeting
    • 12:20 Today, Melissa McMillan, Appointment > Awaiting
  • A row of unlabeled icons:
    • Home, Graphs?, Calendar, Folder?, Person

That’s it. A full application screen.

Comments

  • An entire mobile screen with very limited information. There’s no concern of having too much to do on this screen.
  • The application places no importance on data density or relevance to important tasks.
  • How long has “Healthcare report” been available? Is it new? Why is it appearing there? Why does Angelina care?
  • Can a user dismiss the report if they aren’t interested or have already read it?
  • Decide on a meaningful report name and use that in an example rather than “healthcare report.”
  • Is the “healthcare report” more important than the briefing that Angelina is apparently due for right now?
  • Only 2 appointments fit on the screen, and they still provide little to no helpful information about each appointment.
  • The second appointment listed is an “Appointment”? That information was obvious from its being in a list of appointments. “Virtual exam.” “Wellness Visit” … nearly anything would have been a better choice than “Appointment” in a list of appointments.
  • I wonder how long an appointment is.
  • It appears that there’s no way to scroll back to see the appointments Angelina has had (maybe there is, but I doubt it, as there’s no visual hint that it would work). Have they missed an appointment, or do they want to review past ones using the same user experience they use to see upcoming appointments?
  • There’s a button saying “awaiting” on the second appointment. Is that an action: “awaiting” someone? Or is it a status? The giant appointment slot to the left suggests the rectangular shape is an actionable button, but with the second one saying “awaiting” it’s not clear.
  • Does the user need a giant “Good morning Angelina?” In a business setting, it’s wasted space for someone who is likely rushing from task to task. After seeing that on the 2nd day, I’d want to disable that feature FOREVER.
  • Even if this app were being used on a shared device with authentication for a current user, the space taken by the avatar and the greeting is very much wasted.
  • Does Angelina need a large daily reminder of what they looked like when their staff photo was taken 5 years ago?
  • What unusual workplace would Angelina be in where adding a teammate would be important enough to warrant reserving space for that action on this screen?
  • Who are these “teammates” anyway? That might align with some clinical situations, but I doubt it. I would expect the list to change based on scheduling, care teams, etc. Having only photos for staff is frustrating as it’s very common that staff photos are poor and out of date. Hair, beards, age, glasses …, and at a small size, they tend to look too similar. In most work environments (including healthcare), you don’t choose your teammates.
  • I presume that the teammate images, with what is probably a status indicator, are actionable buttons. It’s a mystery.
  • Does the likely status circle mean they’re not busy if it’s green? Or they’re in an appointment? Or at lunch? Or out of the office? Or in a briefing … it better not just be color alone that indicates the status.
  • There’s an avatar image on appointments; is that also an actionable button?
  • What happens when the list of teammates is longer than 4? Does the list scroll? Is that useful?
  • The affirmation that “Now” is also today is nice …, but what would it say if Angelina were 5 minutes late?
  • The blue background color seems to be used for important actions, yet the “Add to Teammates” button is also blue.
  • I would have expected to see some way to see messages from other teammates and coworkers on this screen, in addition to an “inbox” of clinical messages that aren’t chat-like.
  • This application would be difficult to localize as is; I would expect text-wrapping issues in many languages.

I couldn’t think of a single application I use routinely in any capacity that fails as much as this mock/application does.

On Levels

To suggest that there are “levels” and the only real way to get hired is to X, Y, & Z is a load of 💩. The author creates courses that they want you to buy: Achieve Level 5!

I’d instead say an app that looks good but doesn’t provide the functionality the user needs for their job is a failure. Since it’s all subjective anyway, I’d give this app design a 1.

Instead, my basic advice:

Practice your skills. Get feedback. Keep at it. Practice. Listen. Watch videos, but watch/listen with a critical eye. Even taking the time to mentally make a list of what you’d change about a design you see can help you grow and learn.

Summary

Don’t watch the video.

Overall, this is a sloppy attempt at clickbait and at application design for a well-known industry. A moderate amount of research into the responsibilities of Angelina’s role would have provided a much better guide for a purpose-built application/design.

If you’d like your application reviewed, I have a service where I provide that. Save yourself a lot of time and frustration by having your app design reviewed BEFORE your engineers spend weeks or months implementing it. I can go a lot deeper and broader than I’ve done here. I can talk pixels and typefaces and colors and …😀.

Four Characters Saved the Project!

6 min read

Have you ever tried to type with gloves on? Not on your mobile phone (which can be considered barely passable with the right gloves), but on a full size keyboard…?

During the frigidly cold ☃️ Winter of 1993-1994, the Epic building was not a warm cozy place in every office. The office I was sharing at the time could have doubled as a refrigerator. I didn’t need an ice-pack for my lunch as the room was cold enough. Space heaters were prohibited for two reasons: the Madison Fire Marshal did not approve and the building’s electrical was temperamental on many circuits. The building was likely wired during an era when the occupants had lamps, electric pencil sharpeners, and the Space Race hadn’t even been a dream. My teeth chattered along with the clacky keys of my keyboard. That is, they clacked when I could get my cold, stiff finger joints to perform the basic operations. Desperate to warm my fingers, I’d wear my winter gloves, but that just resulted in even longer MUMPS routines that contained more gibberish than normal.

With an endless stream of glamorous possibilities for the Cohort Public Lab product, I was assigned an important project:

PRJ 249567: DELETE ALL BATCH TESTS RESULTS AFTER FAILURE

(I have no idea what the actual project number was. But did you know that Cohort and Foundations used the PRJ database before it was “cool” for other teams at Epic? And a big hat tip to any reader who knows where that number is from — hint: it’s a Hollywood reference.)

The Project

Occasionally, the lab would need to throw out a large batch of test results. Accidents happen. Machines fail. Zombies attack. Apparently, these incidents happened frequently enough that manually deleting results one-by-one-by-one was a terrible experience. It could be a dozen to hundreds of tests that needed to be voided/deleted from the system. The existing user interface was a Cohort Lab screen built with Chronicles Screen Paint (a neat way to draw a screen, show data, and accept input). You’d use the arrow keys to patiently navigate to the result to delete, press the appropriate function key, wait for it, and repeat. There’s no way to sugar coat how slow that process was. Screen refreshes were like using a 2G mobile/cell signal to watch videos on TikTok.

My task was to make this a noticeably faster operation. Naive me got to coding. As this logic was embedded inside of Cohort and in a particular screen as part of Chronicles, there were more than a handful of “rules” that had to be followed. Some rules were there to prevent screen issues and others to prevent data corruption. I followed the rules. Screen glitches were annoying for users, and data corruption would lead to an unhappy end and trouble down the road.

The first results SUCKED.

While the data was removed faster than a human could have performed the same operation, it was akin to upgrading from that 2G to a weak 3G mobile signal. There was a lot of buffering, screen painting, and a lot of frustrating waiting. I talked to my TL, who suggested looking for other options but had no specific advice. Following the rules and common patterns seemed to be the problem.

Undeterred by the rules of Chronicles and MUMPS, I sought a creative workaround. Interestingly, because of the way Screen Paint worked, the prescriptive process was to remove rows from the results one by one, which caused a repaint. The code removed row 1, then row 1, then row 1 … For common interactive workflows, the order in which this was done didn’t matter. Fast enough. In this case, the sheer volume of results kept the screen-painting algorithm constantly busy, and the terminal would only occasionally refresh meaningfully.

Eureka!

During a moment of non-caffeinated free-soda-fueled inspiration I realized that disabling updates to the screen and deleting rows from the END of the list would significantly improve the performance of this workflow. Cohort’s lists were often uncommonly large compared to other Epic applications, so this wasn’t a pattern that was routinely necessary. Almost instantly, it was fast!
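
As a rough illustration of why the order mattered, here’s a minimal sketch, assuming a hypothetical DELROW(row) helper that removes a single screen row; this is not the actual Screen Paint API:

MUMPS
 ; deleting from the top forces every remaining row to shift up and repaint each pass
 F I=1:1:COUNT D DELROW(1)
 ; deleting from the END leaves the rows above it untouched, so with screen updates
 ; disabled there is essentially nothing to repaint until the work is done
 F I=COUNT:-1:1 D DELROW(I)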

I immediately went to see the TL and mentioned that an interesting problem had now cropped up — nothing happened visually on the screen until the list was nearly empty. The code was busy deleting and the screen wasn’t refreshing. There was nothing to watch for the user. It was just doing its $job.

Like a modern application where there’s no beach-ball or spinning hourglass, Cohort just seemed busted.

IT’S DEAD JUDY

I tried adding some warning text using standard mechanisms before the process started, but that wasn’t very effective:

DON’T PANIC. TRUST ME, I AM BUSY RIGHT NOW.

That may not have been the exact phrase I tried, but the user experience was confusingly great and awful at the same time. We could have shipped it that way with a slight tweak to wording. I wanted a great user experience that didn’t leave the user in a state of elated befuddlement. It was fast! Hold on — when I say fast in this context … it was fast for 1993-1994. Even after this vast improvement, the operation was in the 30-60 SECOND range to remove many hundreds of voided test results. Yes, you read that right. 30-60 seconds! Compare this to the 15+ minutes that a customer would spend manually doing the operation and you can see why this would have been a phenomenal workflow improvement, especially as the task was tedious and the result of an unintended incident in the lab.

As you may recall from the creation of EAVIEWID, I had learned the hidden secrets of the terminal and how to bend it to my will through the power of terminal (control) codes. An idea formed … what if …

These four characters changed the project: \ | / -

Please Wait

The Spinner is Revealed

At a few key workflow points during the long operation, my new code replaced the text at a specific location on the lower left of the screen with one of those characters. I know I didn’t use complicated logic to make sure that the pacing was even … it performed much like the file copy dialog from Windows 95 through current Windows … spurts of rapid progress and then sudden slowdowns. Wasting routine characters and CPU cycles on animation easing with 4 terminal characters was out of scope, no matter how much I would have wanted to add that even back then (there was no sub-second precision timer available in MUMPS then, so…).
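
For flavor, here’s a minimal sketch of the idea. The routine name, the row/column, and the VT100-style escape sequence are assumptions for illustration; the actual Cohort code went through Epic’s own terminal-handling conventions.

MUMPS
SPIN(STEP) ; hypothetical sketch: draw one spinner frame in the lower-left corner
 ; cycle through the four characters based on how many rows have been processed
 N CH S CH=$E("\|/-",(STEP#4)+1)
 ; ESC [24;1H positions the cursor at row 24, column 1 on a VT100-style terminal
 W $C(27),"[24;1H",CH
 Q

The deletion loop would then call something like D SPIN(I) every so many rows, which is all it takes to turn “looks busted” into “looks busy.”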

But, in the end, the new functionality and simple animation worked well, and customers rejoiced after receiving and using it.

I don’t remember if the Cohort team and other coworkers gave me high-fives for my creative solution, but I don’t remember them NOT doing that either. 😁

Brr ❄️🧤 Brr

Thankfully, I was able to complete the project without my winter gloves. While I have a few fond memories of the Epic Medical Circle building experience, I am glad I only spent one cold Winter season at that location.

SQL Enters the Epic Chat

10 min read

Happy 30th birthday to SQL at Epic (1994-2024)!

Did you know that SQL wasn’t always available at Epic? And that before Clarity … there was a way to use SQL against Chronicles databases? I know! It’s wild! In the time before SQL there was only Search and Report Generator (I shudder remembering how often I needed to use those).

Made up chat conversation about needing SQL

The specific start date of the project is fuzzy to me, but I was asked to assist with the effort of embedding a SQL solution while still working on the Cohort Lab product team. Even back in late 1993 and 1994, Epic was hearing from potential new customers that the lack of an industry-standard query language for accessing Epic data could be a sales challenge. They wanted a better way to get access to their data; their requests were very reasonable.

The Epic built-in tools were a hard sell. With the right Epic staff and wizardry, a lot could be done with them, but wielding these tools required great patience and knowledge. My TL could easily shame me with her knowledge and skills at the time. She could unlock the doors and breeze through the data. I, on the other hand, would routinely walk straight into the closed doors, stumbling around in the darkness. The experience of editing and creating reports was also …, well, cumbersome. The workflows were all prompt- and menu-driven and it was very easy to rapidly become lost in the experience. It never clicked for me.

The Epic Foundations team (maintainers of Chronicles and other lower-level utilities at the time) was tasked with enabling SQL for accessing Chronicles data (I think it was just one person working on this task full time). If you were to sit down and design a proprietary database that was generally optimized for healthcare datasets and then try to layer a standards-based structured query language on top of it, you’d likely decide that the two cannot be combined effectively and that other access mechanisms would be reasonable (create an API, I hear you say!). But, in the 1990s, that wasn’t a thing; it just meant more programming and required customers to have skills they shouldn’t have needed. Epic software was being sold into organizations where Epic was just a piece of the software IT puzzle, not the massive “Solution” with so many systems that it is today. It wasn’t The Enterprise, just Enterprise-ready.

Epic’s software needed to integrate. Data needed to be combined with other data. IT staff didn’t have time to learn another reporting tool, extraction layer, etc. Further, there were admittedly quite a few limits with data reporting back then that made gathering data from multiple Epic databases (AKA table groups) perplexing and complex. A SQL solution would enable a whole new world of capabilities for customers and Epic developers.

Here’s the rub though: Chronicles doesn’t map well to “tables” like you’d find in a traditional relational database. In fact, it’s not a natural fit at all. If you’re an (ex)Epic employee reading this — yes yes yes. It isn’t too terrible the way it’s all mapped now. But, getting there wasn’t so straightforward.

One of the early decisions was to buy a license for a MUMPS-based software package from a very tiny software company (I think it was just one full-time guy, Dave Middleton, the owner, and maybe a part-timer?). The product was called KB-SQL. It seems that in 2022 the company was acquired, and the product still exists at Knowledge Based Systems.

I know the initial price of the Epic part of the license was more than a typical software developer’s yearly salary at Epic. That was HUGE, especially since it was such a small part of the overall Epic system. And, Epic is very OMG frugal when it comes to software spending. Each customer then had to pay for “per-user” licenses to run it on their systems.

KB-SQL was a very interesting solution and setup. It had two things that made it very “modern” at the time — a full screen text editor (EZQ) for writing and running queries and an extension mechanism for connecting the editor and query builder/runtime to an arbitrary data source. Even with that extensibility we still needed to actually map the Chronicles data to tables/schemas. We had a LOT of meetings diving into the way Chronicles worked and the way KB-SQL worked. The combination forced some interesting limitations that we had to design around. Dave made changes to accommodate Epic’s requirements when practical. We wrote a lot of experimental queries to test the design.

I remember table and column names had to be far fewer characters than we wanted (12 I think?). I kept writing and adjusting tools to run through all of the Epic-built Chronicles databases at the time to test our naming conventions (and to provide a way for us to apply the names automatically as much as possible). I’d print out the results and we’d often go, “bleh, that’s awful.” It took some time and many tables needed some adjusting. The final tool I’d written for this project had a list of common abbreviations we needed to apply and Epic naming conventions so that it could shorten names as much as possible while feeling like the names came from the same company rather than teams deciding on their own patterns.

We created new Chronicles items to store the metadata needed by the KB-SQL engine. Many lines of code were written to convert the queries into compiled MUMPS code (all behind the scenes). The compiler, along with KB-SQL tooling, had deep knowledge of the Chronicles database and could produce a best-case MUMPS routine for a single query. The temporary routines were not meant for human consumption. They stored temporary results in intentionally obscure/unused globals and did direct global access when possible (although the push to using APIs, as mentioned previously, made this more challenging).
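
To give a feel for what “compiled to MUMPS” means here, below is an invented sketch. The routine name, global names, index layout, and query are illustrative assumptions, not the actual Chronicles structures or KB-SQL output.

MUMPS
Q00001 ; hypothetical generated routine for one simple query, e.g.
 ; SELECT PATIENT_ID FROM LAB_RESULTS WHERE TEST_TYPE_ID=42 (illustrative only)
 N TYPE,ID S TYPE=42,ID=""
 ; walk a made-up index global directly instead of going through higher-level APIs
 F  S ID=$O(^LABX("TYPE",TYPE,ID)) Q:ID=""  D
 . ; stash each hit in an obscure scratch global, keyed by this process's $JOB
 . S ^XTMP0("Q00001",$J,ID)=""
 Q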

Executing queries this way provided the best runtime experience for the query author. Generating the code wasn’t slow, so the step seemed very reasonable. That choice did mean that moving a query to another Epic environment, as part of an installation for example, would trigger a build at that time. There was no Epic branding or wrapper placed around the KB-SQL editor experience. For 1994, it was slick.

We decided on naming conventions so that the types of tables and data were more obvious from the name. For example, because some data is time oriented, those tables needed the concept of a timestamp. If you were using a timestamped table row, you absolutely could not forget to use it in the query or the results could be wrong and LONG! Some tables were no more than a category (pick) list (TEST_TYPES_CAT). We added short postfix notation to most tables, which was annoying, but without it, it was very, very difficult to understand what was what (there was no EZQ-intellisense in the code editor!). Common columns in a database were named based on the database when possible, like PATIENT_ID. Each database mapped to potentially dozens and dozens of tables, so having a consistent convention helped tremendously when building queries. Following a long-standing tradition at Epic, temporary queries were prefixed with X_{Initials}_{Name}, with each Epic product team having a prefix letter reserved for standard queries that shipped with Epic software.
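
As a made-up illustration of those conventions (none of these table or column names are the real Epic/KB-SQL schema; only the naming shapes matter):

SQL
-- X_ prefix marks a temporary/personal query, named X_{Initials}_{Name}
-- e.g. X_ATC_RESULT_COUNTS
SELECT R.PATIENT_ID, C.TEST_NAME, COUNT(*)
FROM LAB_RESULTS R, TEST_TYPES_CAT C   -- _CAT postfix: a category (pick) list table
WHERE R.TEST_TYPE_ID = C.TEST_TYPE_ID
  AND R.RESULT_TS >= '1994-06-01'      -- forget the timestamp filter on a
                                       -- timestamped table and the results
                                       -- could be wrong and LONG
GROUP BY R.PATIENT_ID, C.TEST_NAME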

Locating an item wasn’t as easy at the time as we would have liked. If you had a specific Chronicles item you wanted, you needed to either remember where it was or consult the Chronicles Dictionary. It wasn’t hard to do, but it wasn’t the ideal user experience. We produced documentation with the details for external and internal use, although it wasn’t satisfying.

We automated as much as we could for application teams. Honestly, I don’t recall anyone being particularly enthused about the new SQL support directly. Unfortunately, it was a “one-more-thing” to worry about, learn, test, etc. Maybe because I was too close to the project, I was the opposite. I wanted to push this thing to its limits and beyond. I frequently did (and broke things in spectacular ways). In making and testing lots of reports for Cohort Lab though, it became very evident that SQL alone wouldn’t be enough to produce the best reports. KB-SQL had what I’m going to call “user-defined functions.” These were MUMPS-code based, but wrapped up into KB-SQL in such a way that a developer could use them to enhance both the query and the column output. I made miracles happen with the data (miracles may be a stretch, but they were super useful and really tricked out the system — some were moved into standard code and shipped with all Epic products). Whereas Chronicles Report Generator and the ad-hoc search capabilities built into Chronicles always left me wanting, the SQL support gave me reporting super-powers. Writing queries that used multiple databases was no longer a technical hurdle I needed to jump and stumble over; it was a few queries away using a standard language tool. When it fell short, UDFs to the rescue!

Because of the way Chronicles structures data, building code that traversed it most efficiently required some adjustments and new size calculations to be stored (like how many keys were in an index, for example). Selecting the best index needed to be scored against other potential options. I haven’t added this to my LinkedIn profile, but I know I wrote my fair share of Cartesian product joins too, especially at the beginning. My skills at quickly killing the right process on the shared server grew day by day.

We also added enhancements so that doing reports to screen or a file used Epic’s standard systems (which in turn unlocked some improvements to the way device output selection was done for the better).

For me, the most amazing feature that all of this work eventually unlocked was that there were database connectors available for Windows! Using a then modern tool or programming language that supported the KB-SQL driver, I could access Chronicles data directly without a terminal or Epic application! It seems so ho-hum these days, but in 1994, that was big. It provided a level of data access that Epic couldn’t do otherwise without custom coding.

It was a fun and important project for Epic to work on and I’m glad I was along for the ride. I don’t know the specific date when a customer had production access to KB-SQL and Epic, but it was sometime in 1994 (I’ve got a post in mind to talk about the release schedules back then).

Dave probably grew weary of all of our requests and dreams and wants back then, but he was great to work with all along the way. I see that he retired in August 2023 — so congratulations to him!
