Not a Level 5

6 min read

It’s not uncommon to see someone build a small working demonstration or design a user interface for an aspect of managing healthcare. From managing your own personal healthcare data to a “complete” electronic health record, you’ll see it all.

Having worked at Epic for 25 years, I have had some experience with healthcare software, from patient/consumer access (MyChart) to the core databases running the Epic system (and so many moving parts in between). This morning, during my exercise routine of watching a few YouTube videos, I started watching one entitled, “5 levels of UI skill. Only 4+ gets you hired.”

Ok: Pure and utter clickbait. I tapped it and started watching, drawn in to find out more …, but I couldn’t stomach the full content after the creator made proclamations of extraordinarily subjective measures of people and their designs: five levels?

The aspect of the video that caught my attention and raised my “this is utter nonsense” alarm was the self-proclaimed Level 5 application. Here’s a snapshot:

Level 5 Amazing App

If you’ve worked at Epic, or in healthcare in nearly any capacity, I suspect you may see why this “Level 5” application design is so poor.

I’m confident the author thinks this is an award-winning application. If you ignore the giant roundness of everything, let me describe the screen by way of a list:

  • A menu button, probably
  • An alarm with a green dot
  • Good morning Angelina and photo of Angelina
  • November 2022 Healthcare Report > Read Report
  • Your teammates (4 teammate images), and an add/plus button
  • Upcoming Appointments:
    • Now, Today, Thomas Lawrence, Briefing > Join Meeting
    • 12:20 Today, Melissa McMillan, Appointment > Awaiting
  • A row of unlabeled icons:
    • Home, Graphs?, Calendar, Folder?, Person

That’s it. A full application screen.

Comments

  • An entire mobile screen with very limited information. There’s no concern of having too much to do on this screen.
  • The application places no importance on data density or relevance to important tasks.
  • How long has “Healthcare report” been available? Is it new? Why is it appearing there? Why does Angelina care?
  • Can a user dismiss the report if they aren’t interested or have already read it?
  • Decide on a meaningful report name and use that in an example rather than “healthcare report.”
  • Is the “healthcare report” more important than the briefing that Angelina is apparently due for right now?
  • Only 2 appointments fit on the screen, and they still provide little to no helpful information about each appointment.
  • The second appointment listed is labeled “Appointment”? That was obvious from its being in a list of appointments. “Virtual exam,” “Wellness Visit” … nearly anything would have been a better choice than “Appointment” in a list of appointments.
  • I wonder how long an appointment is.
  • It appears that there’s no way to scroll back to see the appointments Angelina already had (maybe there is, but I doubt it, as there’s no visual hint that it would work). What if they missed an appointment or want to review past ones using the same user experience they use to see upcoming appointments?
  • There’s a button saying “awaiting” on the second appointment. Is that an action: “awaiting” someone? Or is it a status? The giant appointment slot to the left suggests the rectangular shape is an actionable button, but with the second one saying “awaiting” it’s not clear.
  • Does the user need a giant “Good morning Angelina?” In a business setting, it’s wasted space for someone who is likely rushing from task to task. After seeing that on the 2nd day, I’d want to disable that feature FOREVER.
  • Even if this app were being used on a shared device with authentication for a current user, the space taken by the avatar and the greeting is very much wasted.
  • Does Angelina need a large daily reminder of what they looked like when their staff photo was taken 5 years ago?
  • What unusual workplace would Angelina be in where adding a teammate would be important enough to warrant reserving space for that action on this screen?
  • Who are these “teammates” anyway? That might align with some clinical situations, but I doubt it. I would expect the list to change based on scheduling, care teams, etc. Having only photos for staff is frustrating as it’s very common that staff photos are poor and out of date. Hair, beards, age, glasses …, and at a small size, they tend to look too similar. In most work environments (including healthcare), you don’t choose your teammates.
  • I presume that the teammate images, with what is probably a status indicator, are actionable buttons. It’s a mystery.
  • Does the likely status circle mean they’re not busy if it’s green? Or that they’re in an appointment? Or at lunch? Or out of the office? Or in a briefing … it had better not be color alone that indicates the status.
  • There’s an avatar image on each appointment; is that also an actionable button?
  • What happens when the list of teammates is longer than 4? Does the list scroll? Is that useful?
  • The first appointment affirms that “Now” is also today …, but what would it say if Angelina were 5 minutes late?
  • The blue background color seems to be used for important actions, yet the “Add to Teammates” button is also blue.
  • I would have expected some way to view messages from other teammates and coworkers on this screen, in addition to their inbox of clinical messages that aren’t chat-like.
  • This application would be difficult to localize as is; I would expect text wrapping issues in many languages.

I couldn’t think of a single application I use routinely in any capacity that fails as much as this mock application does.

On Levels

To suggest that there are “levels” and the only real way to get hired is to X, Y, & Z is a load of 💩. The author creates courses that they want you to buy: Achieve Level 5!

I’d instead say an app that looks good but doesn’t provide the functionality the user needs for their job is a failure. Since it’s all subjective anyway, I’d give this app design a 1.

Instead, my basic advice:

Practice your skills. Get feedback. Keep at it. Practice. Listen. Watch videos, but watch/listen with a critical eye. Even taking the time to mentally make a list of what you’d change about a design you see can help you grow and learn.

Summary

Don’t watch the video.

Overall, this is a sloppy attempt at clickbait and at application design for a well-known industry. A moderate amount of research into the responsibilities of Angelina’s role would have provided a much better guide for a purpose-built application/design.

If you’d like your application reviewed, I have a service where I provide that. Save yourself a lot of time and frustration by having your app design reviewed BEFORE your engineers spend weeks or months implementing it. I can go a lot deeper and broader than I’ve done here. I can talk pixels and typefaces and colors and …😀.

Four Characters Saved the Project!

6 min read

Have you ever tried to type with gloves on? Not on your mobile phone (which can be considered barely passable with the right gloves), but on a full size keyboard…?

During the frigidly cold ☃️ Winter of 1993-1994, the Epic building was not a warm cozy place in every office. The office I was sharing at the time could have doubled as a refrigerator. I didn’t need an ice-pack for my lunch as the room was cold enough. Space heaters were prohibited for two reasons: the Madison Fire Marshal did not approve, and the building’s electrical system was temperamental on many circuits. The building was likely wired during an era when the occupants had lamps, electric pencil sharpeners, and the Space Race hadn’t even been a dream. My teeth chattered along with the clacky keys of my keyboard. That is, they clacked when I could get my cold stiff finger joints to perform the basic operations. Desperate to warm my fingers, I’d wear my winter gloves, but that just resulted in even longer MUMPS routines that contained more gibberish than normal.

With an endless stream of glamorous possibilities for the Cohort Public Lab product, I was assigned an important project:

PRJ 249567: DELETE ALL BATCH TEST RESULTS AFTER FAILURE

(I have no idea what the actual project number was. But did you know that Cohort and Foundations used the PRJ database before it was “cool” for other teams at Epic? And a big hat tip to any reader who knows where that number is from — hint: it’s a Hollywood reference.)

The Project

Occasionally, the lab would need to throw out a large batch of test results. Accidents happen. Machines fail. Zombies attack. Apparently, these incidents happened frequently enough that deleting results one-by-one-by-one was a terrible experience. It could be a dozen to hundreds of tests that needed to be voided/deleted from the system. The existing user interface was a Cohort Lab screen built with Chronicles Screen Paint (a neat way to draw a screen, show data, and take input). Using the arrow keys, you’d patiently navigate to the result to delete, press the appropriate function key, wait for it, and repeat. There’s no way to sugar coat how slow that process was. Screen refreshes were like using a 2G mobile/cell signal to watch videos on TikTok.

My task was to make this a noticeably faster operation. Naive me got to coding. As this logic was embedded inside of Cohort and in a particular screen as part of Chronicles, there were more than a handful of “rules” that had to be followed. Some rules were there to prevent screen issues and others to prevent data corruption. I followed the rules. Screen glitches were annoying for users, and data corruption would lead to an unhappy end and trouble down the road.

The first results SUCKED.

While the data was removed faster than a human could have performed the same operation, it was akin to upgrading from that 2G to a weak 3G mobile signal. There was a lot of buffering, screen painting, and a lot of frustrating waiting. I talked to my TL, who suggested looking for other options, but she had no specific advice. Following the rules and common patterns seemed to be the problem.

Undeterred by the rules of Chronicles and MUMPS, I sought a creative workaround. Interestingly, because of the way Screen Paint worked, the prescriptive process was to remove rows from the results one by one, which caused a repaint. The code removed row 1, then row 1, then row 1 … For common interactive workflows, the order in which this was done didn’t matter; it was fast enough. In this case, the sheer volume of results kept the screen painting algorithm constantly busy, and the terminal only occasionally refreshed in a meaningful way.

Eureka!

During a moment of non-caffeinated free-soda-fueled inspiration I realized that disabling updates to the screen and deleting rows from the END of the list would significantly improve the performance of this workflow. Cohort’s lists were often uncommonly large compared to other Epic applications, so this wasn’t a pattern that was routinely necessary. Almost instantly, it was fast!

I immediately went to see the TL and mentioned that an interesting problem had cropped up — nothing happened visually on the screen until the list was nearly empty. The code was busy deleting and the screen wasn’t refreshing. There was nothing to watch for the user. It was just doing its $job.

Like a modern application that shows no beach-ball or spinning hourglass while it works, Cohort just seemed busted.

IT’S DEAD JUDY

I tried adding some warning text using standard mechanisms before the process started, but that wasn’t very effective:

DON’T PANIC. TRUST ME, I AM BUSY RIGHT NOW.

That may not have been the exact phrase I tried, but the user experience was confusingly great and awful at the same time. We could have shipped it that way with a slight tweak to wording. I wanted a great user experience that didn’t leave the user in a state of elated befuddlement. It was fast! Hold on — when I say fast in this context … it was fast for 1993-1994. The operation even after this vast improvement was in the 30-60 SECOND range to remove many hundreds of voided test results. Yes, you read that right. 30-60 seconds! Compare this to the 15+ minutes that a customer would spend manually doing the operation and you can see why this would have been a phenomenal workflow improvement, especially as the task was tedious and the result of an unintended incident in the lab.

As you may recall from the creation of EAVIEWID, I had learned the hidden secrets of the terminal and how to bend it to my will through the power of terminal (control) codes. An idea formed … what if …

These four characters changed the world (well, the project): \ | / -

Please Wait

The Spinner is Revealed

At a few key workflow points during the long operation, my new code replaced the text at a specific location on the lower left of the screen with one of those characters. I know I didn’t use complicated logic to make sure that the pacing was even … it performed much like the file copy dialog from Windows 95 through current Windows: spurts of rapid progress and then sudden slowdowns. Wasting routine characters and CPU cycles on animation easing with 4 terminal characters was out of scope, no matter how much I would have wanted to add it even back then (there was no sub-second precision timer available in MUMPS then, so …).
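For flavor, here’s a minimal sketch of the idea in MUMPS. The routine name, the exact screen position, and the call shape are my assumptions for illustration; the real code was woven into the deletion loop rather than living in a tidy helper like this.

MUMPS
SPIN(n) ; write the next spinner frame (\ | / -) at a fixed spot near the lower left
        N f
        S f=$E("\|/-",n#4+1)      ; pick frame 1-4 from the iteration count
        W $C(27)_"[24;1H"_f       ; VT100 cursor-position escape, then the character
        Q

Each pass through the deletion work would bump the counter and write the next frame, which was enough to give the user a visible sign of life.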

But, in the end, the new functionality and simple animation worked well, and customers rejoiced (maybe even partied) after receiving and using it.

I don’t remember if the Cohort team and other coworkers gave me high-fives for my creative solution, but I don’t remember them NOT doing that either. 😁

Brr ❄️🧤 Brr

Thankfully, I was able to complete the project without my winter gloves. While I have a few fond memories of the Epic Medical Circle building experience, I am glad I only spent one cold Winter season at that location.

SQL Enters the Epic Chat

10 min read

Happy 30th birthday to SQL at Epic (1994-2024)!

Did you know that SQL wasn’t always available at Epic? And that before Clarity … there was a way to use SQL against Chronicles databases? I know! It’s wild! In the time before SQL there was only Search and Report Generator (I shudder remembering how often I needed to use those).

Made up chat conversation about needing SQL

The specific start date of the project is fuzzy to me, but I was asked to assist with the effort of embedding a SQL solution while still working on the Cohort Lab product team. Even back in late 1993 and 1994 Epic was hearing from potential new customers that the lack of an industry standard query language for accessing Epic data could be a sales challenge. They wanted a better way to get access to their data; their requests were very reasonable.

The Epic built-in tools were a hard sell. With the right Epic staff and wizardry, a lot could be done with the tools, but wielding them required great patience and knowledge. My TL could easily shame me with her knowledge and skills at the time. She could unlock the doors and breeze through the data. I, on the other hand, would routinely walk straight into the closed doors, stumbling around in the darkness. The experience of editing and creating reports was also …, well, cumbersome. The workflows were all prompt- and menu-driven, and it was very easy to rapidly become lost in the experience. It never clicked for me.

The Epic Foundations team (maintainers of Chronicles and other lower level utilities at the time) was tasked to enable SQL for accessing Chronicles data (I think it was just one person working on this task full time). If you were to sit down and design a proprietary database that was generally optimized for healthcare datasets and then try to layer on a standards-based structured query language on top of that proprietary database, you’d likely decide that the two cannot be combined effectively and other access mechanisms would be reasonable (create an API I hear you say!). But, in the 1990s, that wasn’t a thing and just meant more programming and required customers to have skills they shouldn’t have needed to have. Epic software was being sold into organizations where Epic was just a piece of the software IT puzzle and was not the massive “Solution” with so many systems as it has today. It wasn’t The Enterprise, just Enterprise-ready.

Epic’s software needed to integrate. Data needed to be combined with other data. IT staff didn’t have time to learn another reporting tool, extraction layer, etc. Further, there were admittedly quite a few limits with data reporting back then that made gathering data from multiple Epic databases (AKA table groups) perplexing and complex. A SQL solution would enable a whole new world of capabilities for customers and Epic developers.

Here’s the rub though: Chronicles doesn’t map well to “tables” like you’d find in a traditional relational database. In fact, it’s not a natural fit at all. If you’re an (ex)Epic employee reading this — yes yes yes. It isn’t too terrible the way it’s all mapped now. But, getting there wasn’t so straightforward.

One of the early decisions was to buy a license for a MUMPS-based software package from a very tiny software company (I think it was just one full-time guy, Dave Middleton, the owner, and maybe a part-timer?). The product was called KB-SQL. It seems that in 2022 the company was acquired, and the product still exists (Knowledge Based Systems).

I know the initial price of the Epic part of the license was more than a typical software developer’s yearly salary at Epic. That was HUGE, especially since it was such a small part of the overall Epic system. And, Epic is very OMG frugal when it comes to software spending. Each customer then had to pay for “per-user” licenses to run it on their systems.

KB-SQL was a very interesting solution and setup. It had two things that made it very “modern” at the time — a full screen text editor (EZQ) for writing and running queries and an extension mechanism for connecting the editor and query builder/runtime to an arbitrary data source. Even with that extensibility we still needed to actually map the Chronicles data to tables/schemas. We had a LOT of meetings diving into the way Chronicles worked and the way KB-SQL worked. The combination forced some interesting limitations that we had to design around. Dave made changes to accommodate Epic’s requirements when practical. We wrote a lot of experimental queries to test the design.

I remember table and column names had to be far fewer characters than we wanted (12 I think?). I kept writing and adjusting tools to run through all of the Epic-built Chronicles databases at the time to test our naming conventions (and to provide a way for us to apply the names automatically as much as possible). I’d print out the results and we’d often go, “bleh, that’s awful.” It took some time and many tables needed some adjusting. The final tool I’d written for this project had a list of common abbreviations we needed to apply and Epic naming conventions so that it could shorten names as much as possible while feeling like the names came from the same company rather than teams deciding on their own patterns.
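To illustrate the flavor of that helper (a sketch under my own assumptions, not the actual tool; the ^XABBR global holding the abbreviation list is hypothetical), something along these lines can shorten an underscore-delimited name piece by piece:

MUMPS
SHORTEN(name,max)       ; shorten a proposed table/column name using an abbreviation list
        ; assumes hypothetical entries like S ^XABBR("LABORATORY")="LAB"
        N i,w,out
        S out=name
        F i=1:1:$L(out,"_") D
        . S w=$P(out,"_",i)
        . S:$D(^XABBR(w))#2 $P(out,"_",i)=^XABBR(w)  ; swap in the abbreviation if one exists
        Q $E(out,1,max)

Called as $$SHORTEN("LABORATORY_RESULTS",12), it would return something like "LAB_RESULTS"; the real tool layered Epic’s naming conventions on top of this kind of substitution.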

We created new Chronicles items to store the metadata needed by the KB-SQL engine. Many lines of code were written to convert the queries into compiled MUMPS code (all behind the scenes). The compiler along with KB-SQL tooling had deep knowledge of the Chronicles database and could produce a best-case MUMPS routine for a single query. The temporary routines were not meant for human consumption. They stored temporary results in intentionally obscure/unused globals, did direct global access when possible (although the push to using APIs as mentioned previously made this more challenging).

Executing queries this way provided the best runtime experience for the query author. Generating the code wasn’t slow, so the step seemed very reasonable. That choice did mean that moving a query to another Epic environment (as part of an installation, for example) would trigger a build at that time. There was no Epic branding or wrapper placed around the KB-SQL editor experience. For 1994, it was slick.

We decided on naming conventions so that the types of tables and data were more obvious from the name. For example, because some data is time oriented, those tables needed the concept of a timestamp. If you were using a timestamped table row, you absolutely could not forget to use the timestamp in the query or the results could be wrong and LONG! Some tables were no more than a category (pick) list (TEST_TYPES_CAT). We added short postfix notation to most tables, which was annoying, but without these it was very, very difficult to understand what was what (there was no EZQ-intellisense in the code editor!). Common columns in a database were named based on the database when possible, like PATIENT_ID. Each database mapped to potentially dozens and dozens of tables, so having a consistent convention helped tremendously when building queries. Following a long-standing tradition at Epic, temporary queries were prefixed with X_{Initials}_{Name}, with each Epic product team having a prefix letter reserved for standard queries that were shipped with Epic software.

Locating an item wasn’t as easy at the time as we would have liked. If you had a specific Chronicles item you wanted, you needed to either remember where it was or consult the Chronicles Dictionary. It wasn’t hard to do, but it wasn’t the ideal user experience. We produced documentation with the details for external and internal use, although it wasn’t satisfying.

We automated as much as we could for application teams. Honestly, I don’t recall anyone being particularly enthused about the new SQL support directly. Unfortunately, it was a “one-more-thing” to worry about, learn, test, etc. Maybe because I was too close to the project, I was the opposite. I wanted to push this thing to its limits and beyond. I frequently did (and broke things in spectacular ways). In making and testing lots of reports for Cohort Lab though, it became very evident that SQL alone wouldn’t be enough to produce the best reports. KB-SQL had what I’m going to call “user-defined functions.” These were MUMPS-code based, but wrapped up into KB-SQL in such a way that a developer could use them to enhance both the query and the column output. I made miracles happen with the data (miracles may be a stretch, but they were super useful and really tricked out the system — some were moved into standard code and shipped with all Epic products). Whereas the Chronicles Report Generator and the ad-hoc search capabilities built into Chronicles always left me wanting, the SQL support gave me reporting super-powers. Writing queries that used multiple databases was no longer a technical hurdle I needed to jump and stumble over; it was a few queries away using a standard language tool. When it fell short, UDFs to the rescue!

Because of the way Chronicles structures data, building code that traversed the data most efficiently required some adjustments and new size calculations to be stored (like how many keys were in an index, for example). Selecting the best index needed to be scored against other potential options. I haven’t added this to my LinkedIn profile, but I know I wrote my fair share of cartesian product joins too, especially at the beginning. My skills at quickly killing the right process on the shared server grew day by day.

We also added enhancements so that doing reports to screen or a file used Epic’s standard systems (which in turn unlocked some improvements to the way device output selection was done for the better).

For me, the most amazing feature that all of this work eventually unlocked was that there were database connectors available for Windows! Using a then modern tool or programming language that supported the KB-SQL driver, I could access Chronicles data directly without a terminal or Epic application! It seems so ho-hum these days, but in 1994, that was big. It provided a level of data access that Epic couldn’t do otherwise without custom coding.

It was a fun and important project for Epic to work on and I’m glad I was along for the ride. I don’t know the specific date when a customer had production access to KB-SQL and Epic, but it was sometime in 1994 (I’ve got a post in mind to talk about the release schedules back then).

Dave probably grew weary of all of our requests and dreams and wants back then, but he was great to work with all along the way. I see that he retired in August 2023 — so congratulations to him!

Finding a Faster Way, and not Doing Boring Work

10 min read

One of my first really big projects was truly Epic in scope … convert all of Cohort’s direct Chronicles global references to API calls, everywhere.

I definitely don’t remember how many lines of code there were across the Epic Cohort code base back then, but it wasn’t small. When I worked on this, EpicCare Ambulatory / Legacy hadn’t been released yet and every other application was fully text/terminal based. It wasn’t a two-week project. Worse, it really didn’t have a clear end-date, because no one knew how much work it would be, other than substantial. I’d proven I wasn’t terrible at my job and could be trusted to change a LOT of code.

MUMPS, which Epic used at the time (and still does, just under a different name), organizes source code into files called routines. Routines back in the early-to-mid 1990s were size limited. I don’t remember if it was 2K or 4K of source code at the time — it really wasn’t much. One of the astonishingly wild things about MUMPS the programming language is that it allows keyword abbreviations. And back in the days when the file sizes were capped at such small sizes, the abbreviations were used exclusively.

Until you see old MUMPS code, you really don’t understand how unusual it is to read and understand. This is just a goofy little sample that doesn’t do anything important other than exercise a number of features of MUMPS:

MUMPS
MYEXAM ; goofy sample; assumes LID and DATE are set by the caller
        N c,i,d,t S %2="",i=""
        W !,"I<3MUMPS: "
        S c=^G("LAB",LID,DATE),%2="",t=$$tm()
        F  D  Q:i=""
        . S i=$O(^G("LAB",LID,DATE,i))
        . Q:i="" 
        . S %1=^G("LAB",LID,DATE,i)
        . S:i>100 %2="!"
        . S ^G1("S",$I,t,c-i)=%1_%2        
        . W:i#5=0 "." W:i#100=0 !,i
        Q
tm()    Q $P($H,",",2)

Post-conditionals (S:i>100 %2="!") make for some fun code to read: perform the operation if the condition is true.

In addition to the limited file/routine sizes, MUMPS also was not fast. That meant that code needed to take a number of liberties if performance was desirable. For example, not calling functions with arguments. The stack wasn’t particularly efficient, so code would routinely document its requirements and expect variable values to simply be available in the current process without being passed. Calling a function didn’t prevent the called code from accessing the variables that had been declared or set in other code.

Aside: When a user would connect to an Epic system, they’d connect the terminal to a new captive session, which was a new MUMPS process (also known as a $JOB). Epic code would be called immediately. That process would be dedicated to that user until they exited the Epic application. As code executed, all variables declared on all prior execution stack levels were available. A new stack level could redeclare a variable, and then that new variable’s value would be available to further code until the stack was popped. It’s ingeniously simple and dastardly at the same time. So if a variable X was declared at stack level one, set to 1, and a function was called, the function could read the value of X without it being passed! If X were declared (via NEW) and set to a new value (or not), further code would reference the newly declared X rather than the X that was lower on the stack. As soon as the stack level was exited, the prior X was again in scope along with its value.
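Here’s a tiny illustration of that stack behavior (the labels and values are just for demonstration):

MUMPS
OUTER   ; X set here is visible to anything OUTER calls
        N X S X=1
        D INNER
        W !,X                ; writes 1 again; INNER's X disappeared when its stack level popped
        Q
INNER   ;
        W !,X                ; writes 1, even though X was never passed in
        N X S X=2            ; redeclare via NEW: this X shadows the caller's X
        W !,X                ; writes 2
        Q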

If you’re thinking that must have been a super fun way to code, you’d absolutely be right! It was not uncommon at the time for MUMPS code to rely on assumed scratch variables that you did not need to declare (Epic’s convention was that variables %0-%9 were scratch), but you used them at your own risk, as a function call/goto anywhere else might also use the same scratch variables.

I won’t lie to you. Scratch variables would often make you scratch your head. Repeatedly. Great for performance (via well tested performance metrics to confirm that their use was to the benefit of the end-user), but lousy for developers. Lousy. Additionally, there were a handful of variable values that were well-known across Epic code bases and much of the core Foundations code expected them, so those were generally reasonable to intuit without concern. But, occasionally, they’d leak and cause unexpected conflicts during development.

Back to the project. Cohort, due to its size and the way public health labs operated, often was executed in what was considered to be multiple “directories.” It was one MUMPS system, but Cohort had multiple directories in which it would run. As Cohort systems grew larger, the need for a different way to address not only the directories but also the core MUMPS globals became very important. (I don’t remember what the sizes of these Cohort systems were anymore, but at the time they seemed GIGANTIC—probably on the order of tens of GBs or less honestly).

For these Very Large Systems, the MUMPS globals needed to be stored in separate disk files, which meant every global reference needed to point at the proper global file. References went from ^Global() to ^[loc]Global(). And while the Cohort team could have adapted on its own I suppose, there was enough configuration and complexity that switching to the Foundations APIs designed to handle this new requirement was a much better long-term strategy.
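Conceptually, the change looked something like this sketch (the subscript names are made up for illustration, and in practice the extended reference was handled inside the Foundations APIs rather than hard-coded at every call site):

MUMPS
        ; before: an implicit reference into the current directory's global file
        S VAL=^OVR(ID,DTE,LINE)
        ; after: an extended reference that names the proper location/global file
        S VAL=^[LOC]OVR(ID,DTE,LINE)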

Every global reference needed to be reviewed and modified. Honestly, I wanted none of that. It was a mind-numbingly fraught-with-error-potential project that was as horrible as it sounds. Touch EVERYTHING. Test EVERYTHING.

As you can guess, MUMPS wasn’t designed around any modern design principles of effective coding. There were no pure functions (and as I mentioned, often no functions at all). There were these horrible horrible things called naked global references:

MUMPS
S ^GLOB("DATA",456,"ABC")=1
S ^("DEF")=2

That was the equivalent of:

MUMPS
S ^GLOB("DATA",456,"ABC")=1
S ^GLOB("DATA",456,"DEF")=2

MUMPS would reuse the most recently specified global name and all but the last subscript (the keys are called subscripts), and then build a new global reference from that. Any global reference in the process counted, so using a naked global reference was super risky if you didn’t control every line of code between the most recent full reference and the line that used the naked syntax.

Sure, it was a shortcut, and so frustratingly difficult to find and understand. Thankfully, it was mostly in older Epic MUMPS code and was a practice that was actively discouraged in any new code.

I started to look at the Cohort code. The project was complex both from a technical perspective and from a timing/management perspective. I couldn’t just start on line one and work my way through the code. The other Cohort developers would also be touching the same code as they worked on other projects. We had source control — it was: “Here’s the source: One copy. Don’t mess up something another dev is working on.”

After a few days of getting a better understanding of the Cohort source code (HEY TL, I know what you did (and I know she’s reading this)—it was a great way for me to deeply learn the Cohort code), I came up with a different proposal for my TL from what she had originally suggested.

$TEXT is a function built into MUMPS that will read the source code of a routine. My proposal: write a parser that would make the changes to as much code as it could and flag changes that needed to be made manually. She gave me about 2 weeks to work on the parser as it seemed to address a lot of the technical and timing concerns. I jumped in. I’ve worked on a lot of parsers since and find them very intellectually stimulating. Writing a parser in MUMPS was all sorts of ridiculous at the time. It had to adhere to the same core requirements as all MUMPS code. Limited stack, limited memory, slow CPU, no parsing libraries, no objects, simple string manipulations, … it was very basic.
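To give a sense of the mechanics, here’s a minimal sketch of using $TEXT to walk a routine’s source, assuming the earlier sample is saved as a routine named MYEXAM; the real tool’s tokenizing and rewriting logic is only hinted at in the comments:

MUMPS
SCAN    ; read routine MYEXAM line by line via $TEXT
        N n,line
        F n=1:1 S line=$T(+n^MYEXAM) Q:line=""  D
        . ; the real tool tokenized each line, rewrote the ^Global( references
        . ; it could convert safely, and logged anything it couldn't
        . W !,n,": ",line
        Q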

If you step back and look at MUMPS from a high level, at the language and its behavior, you’ll see machine code. It has so much of the same essential look and functionality (and limitations). The original designs and implementation of MUMPS had it running as its own operating system (yeah, it’s THAT OLD). The connection to machine code patterns makes sense, even though MUMPS is interpreted. There weren’t lots of programming languages to inspire language design back in the 1960s. Yep: 1960s.

My approach was to build a global structure mirroring the code base as much as was needed. Things that didn’t matter were ignored and not recorded (as it was a slow process already and recording source code noise wasn’t worth the time/disk). Using dozens of test routines with the most complex Cohort code I encountered, my utility improved over the next two weeks to where it was processing and converting the majority of code successfully and logging code that couldn’t be safely converted automatically.

I know I went a little long and the TL was fine with that as she could see the end was in sight and the time spent was still less than if I had done the same work manually. I’d guess it took me about 3 weeks with normal interruptions. I specifically recall it was about 70 hours total.

It was sincerely cool to see it churn through the code and make the necessary modifications. The parser was actually pretty sophisticated and had to handle the nuances and complexity of the Cohort code and MUMPS. It recorded routine jumps, functions, gotos, variables, … so much.

At the end of my development, we ran the tool on the Cohort code base successfully. The dozens of logged issues were fixed by hand by the team collectively. It worked. I think there were a few issues, but it had made many, many thousands of changes to the Cohort code base, so no one was surprised that it encountered a few bumps. I’d reviewed the changes manually, but the sheer number of them meant it was nearly impossible to spot the issues the tool had inadvertently caused.

The project was a success. My TL appreciated my out-of-the-box thinking about how to do the project successfully, without wasting Epic time or resources, finishing faster than had been expected if I’d done it manually, and with many fewer interruptions to the other Cohort developers (🤣, there were only 2 others at the time).

The code was eventually included in Cohort’s official code base for a variety of reasons; I believe it was called CLPARSE (or CLXPARSE?).

One of the other Cohort developers later took the code and modified and extended it to become Epic’s first linter for MUMPS code. My recollection is fuzzy, but I think the name became HXPARSE which is likely far more familiar to Epic devs than Cohort. 😁

An Upgrade to Viewing Chronicles Data

5 min read

One of the strengths of a good R&D leader is that they’ll let you explore (and build) tools that should save company time and resources. Viewing Chronicles database records in a developer-friendly format was a usability challenge. Chronicles isn’t a relational DB. Nor is it, as some might think, a NoSQL database. It’s …, well, an amalgamation of a variety of systems, in many ways taking the best of each and mashing them together into a very adaptable solution. (In fact, it did things in 1993 that many modern database systems still can’t do efficiently, features that would benefit many developers.)

A primary reason there wasn’t a tool that made viewing a full record straightforward is the way Chronicles records were stored in the MUMPS global (data) system at the time (the storage has not changed much since my early days at Epic; primarily it’s only been extended, and the classic MUMPS global still remains the only storage). Essentially, there are different storage layouts for different types of Chronicles items in a database. Chronicles uses a sparse storage layout, which significantly improves performance when compared to most RDBMSs and (generally) reduces disk storage requirements as well.

Back up for a second though. Epic has used uncommon terms for various systems for decades. It’s been a sore spot in discussions with people accustomed to more common database systems.

Here’s the basic breakdown:

Chronicles Term | Industry Term | Notes
Database | Collection of related tables | Too bad Epic didn’t change this term decades ago
Dictionary | Table structure, schema | I don’t remember why “dictionary” was used
Item | Column |
Master File | Static table | Build data, static things (ex., medications)
Category List | Pick list | Table as a list, usually static, but with only an identifier, a title, and limited other meta-data; ex.: Country
Multiple Response Item | One-to-many table | Feature of Chronicles to not require a secondary table to store multiple values
Related Group | One-to-many table | A collection of multiple response items
Over-time | DB row storing value and timestamp | Definitely an Epic differentiator for reporting and storage efficiency
The .1 | Primary key | Probably the table’s primary key, but sometimes like a ROWID in some DB systems
Database Initials | Schema name | Due to the way Chronicles stores data in MUMPS, databases have 3-letter codes, like EMP for Employee
Networked Item | Foreign key | Item that points at the ID of another database
Global | Storage | See this article for now

Those are some of the highlights. Chronicles has support for a wide variety of index types as well. There’s a lot more about Chronicles that isn’t relevant here. I wish we’d had this cheat-sheet guide to hand out to non-Epic staff back then.

As Chronicles is very much an internal Epic-built creation that evolved over the decades, it did not integrate with “off-the-shelf” tools without Epic dedicating resources to provide APIs enabling tools to access an Epic system. (Not too surprising, as Epic is a commercial proprietary system.)

One of the challenges I kept running into during my tenure on Cohort was that I often wanted a holistic view of a Chronicles record (and in particular there was a recurring need for a specific project I’ll talk about later). Chronicles has built-in reporting tools (Report Generator), but they weren’t targeted at developers and they weren’t great for ad-hoc developer needs either. So, one afternoon I built a little experiment that allowed me to see all of the data for a Cohort database (I’m pretty sure it was OVR, which was Cohort’s storage for lab results). There were lots of items in the database and finding the data quickly was painful. Decoding categories and networked items …, it was cumbersome.

I showed the horribly basic results to my TL and she did not discourage me from continuing. 😁 Over the next few weeks, working extra hours on and off, I built a nice little viewer. But I hadn’t stopped at the basics. I reverse-engineered the ANSI escape codes for VT100+, scoured what few technical books I could find (I found ONE BOOK!), and built a reusable scrollable viewer for this new tool. REMEMBER: No Internet for research.

The basic steps to use the tool were super developer-friendly (I was doing DX before DX became a loved and unloved term!):

  • Pick the database by typing in the 3-character code.
  • The tool would respond with the friendly DB name.
  • Select the records using existing code that allowed record ranges, etc.
  • Finally, select the Chronicles items to view.

The tool then would brute-force its way through the various Chronicles items and display results in a 128-column viewport (if the terminal supported it), allowing scrolling up and down. Now, in 1993/1994, this wasn’t a common user experience, especially for a developer tool. The tool wasn’t efficient because of how Chronicles was structured. It would look at the database dictionary, and then go on a hunt for the items that might be available.

My TL and others on my team started using the tool, and I was encouraged to contribute not only the code for the tool but also some of the screen code to the Foundations team so that everyone could use it (and start to build more DX-friendly Chronicles/Foundations tools).

After a few more weeks of work (getting things moved over, writing documentation, and waiting for release cycles to line up), the tool was born:

EAVIEWID

I didn’t use a lot of my normal 40 hours working on this initially, but the little bit of encouragement from my TL and team inspired me to create not just a one-off tool, but an actual utility that was used for decades (and then thankfully got rewritten after discovering that it was being used at customers — it was not intended for production use given how it worked).

No SQL

(And if you’re wondering, … there was no SQL available at the time for Chronicles data.)

Opportunities

Have you built something like this for your employer? Have your managers been encouraging? Why?