Shorts

Finding a Faster Way, and not Doing Boring Work

10 min read

One of my first really big projects was truly Epic in scope … convert all of Cohort’s direct Chronicles global references to use APIs, everywhere.

I definitely don’t remember how many lines of code there were across the Epic Cohort code base back then, but it wasn’t small. When I worked on this, EpicCare Ambulatory / Legacy hadn’t been released yet and every other application was fully text/terminal based. It wasn’t a two-week project. Worse, it really didn’t have a clear end-date, because no one knew how much work it would be, other than substantial. I’d proven I wasn’t terrible at my job and could be trusted to change a LOT of code.

MUMPS, which Epic used at the time (and still uses, just under a different name), organizes source code into files called routines. Back in the early-to-mid 1990s, routines were size limited. I don’t remember if the cap was 2K or 4K of source code at the time — it really wasn’t much. One of the astonishingly wild things about MUMPS the programming language is that it allows keyword abbreviations. And back in the days when the file sizes were capped that small, the abbreviations were used exclusively.

Until you’ve seen old MUMPS code, you really can’t appreciate how unusual it was to read and understand. This is just a goofy little sample that doesn’t do anything important other than exercise a number of MUMPS features:

MUMPS
MYEXAM  ; assumes LID and DATE are already set elsewhere on the stack
        N c,i,d,t S i="",%2=""
        W !,"I<3MUMPS: "
        S c=^G("LAB",LID,DATE),%2="",t=$$tm()
        F  D  Q:i=""
        . S i=$O(^G("LAB",LID,DATE,i))
        . Q:i="" 
        . S %1=^G("LAB",LID,DATE,i)
        . S:i>100 %2="!"
        . S ^G1("S",$I,t,c-i)=%1_%2        
        . W:i#5=0 "." W:i#100=0 !,i
        Q
tm()    Q $P($H,",",2)

Post-conditionals (S:i>100 %2="!") make for some fun code to read: perform the operation if the condition is true.

In addition to the limited file/routine sizes, MUMPS also was not fast. That meant code needed to take a number of liberties if performance was desirable. For example, not calling functions with arguments. The stack wasn’t particularly efficient, so code would routinely document its requirements and simply expect variable values to be available in the current process/execution without them being passed. Calling a function didn’t prevent the called code from accessing the variables that had been declared or set in other code.

Aside: When a user connected to an Epic system, their terminal was attached to a new captive session, which was a new MUMPS process (also known as a $JOB). Epic code would be called immediately, and that process was dedicated to that user until they exited the Epic application. As the MUMPS code executed, all variables declared at prior execution stack levels remained available. A new stack level could redeclare a variable, and that new variable’s value would be available to further code until the stack level was popped. It’s ingeniously simple and dastardly at the same time. So if a variable X was declared at stack level one, set to 1, and a function was called, the function could read the value of X without it being passed! If the function declared X (via NEW) and set it to a new value (or not), later code would reference the newly declared X rather than the X lower on the stack. As soon as that stack level was exited, the prior X was again in scope along with its value.
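
A minimal sketch of that behavior (illustrative only, not actual Epic code):

MUMPS
SCOPE   ; demonstrate stacked variable scoping across calls
        S X=1
        D SUB           ; SUB can read X without it being passed
        W !,X           ; writes 1 -- SUB's NEWed X was popped with its stack level
        Q
SUB     W !,X           ; writes 1 (the caller's X is visible here)
        N X S X=2       ; NEW stacks X; this X now shadows the caller's
        W !,X           ; writes 2
        Q               ; stack level pops; the caller's X=1 is back in scope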

If you’re thinking that must have been a super fun way to code, you’d absolutely be right! It was not uncommon at the time for MUMPS code to rely on assumed scratch variables that you did not need to declare (Epic’s convention was that %0-%9 were scratch), but you used them at your own risk, as a function call/goto anywhere else might also use the same scratch variables.

I won’t lie to you. Scratch variables would often make you scratch your head. Repeatedly. Great for performance (via well tested performance metrics to confirm that their use was to the benefit of the end-user), but lousy for developers. Lousy. Additionally, there were a handful of variable values that were well-known across Epic code bases and much of the core Foundations code expected them, so those were generally reasonable to intuit without concern. But, occasionally, they’d leak and cause unexpected conflicts during development.

Back to the project. Cohort, due to its size and the way public health labs operated, often was executed in what were considered to be multiple “directories.” It was one MUMPS system, but Cohort had multiple directories in which it would run. As Cohort systems grew larger, the need for a different way to address not only the directories but also the core MUMPS globals became very important. (I don’t remember what the sizes of these Cohort systems were anymore, but at the time they seemed GIGANTIC—probably on the order of tens of GBs or less, honestly.)

For these Very Large Systems, the MUMPS globals needed to be stored in separate disk files, which meant that every global reference had to name the proper global file. References went from ^Global() to ^[loc]Global(). And while the Cohort team could have adapted, I suppose, there was enough configuration and complexity that switching to Foundations APIs designed to handle this new requirement was a much better long-term strategy.
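
In code, the before-and-after looked roughly like this (the global name, subscripts, and the LOC/RID/VAL variables are made up for illustration):

MUMPS
; direct reference -- assumes the global lives in the current directory
S ^OVR(RID,1,35)=VAL
; extended reference -- explicitly names the directory/file that holds the global
S ^[LOC]OVR(RID,1,35)=VAL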

Every global reference needed to be reviewed and modified. Honestly, I wanted none of that. It was a mind-numbingly fraught-with-error-potential project that was as horrible as it sounds. Touch EVERYTHING. Test EVERYTHING.

As you can guess, MUMPS wasn’t designed around any modern principles of effective coding. There were no pure functions (and, as I mentioned, often no functions at all). There were these horrible, horrible things called naked global references:

MUMPS
S ^GLOB("DATA",456,"ABC")=1
S ^("DEF")=2

That was the equivalent of:

MUMPS
S ^GLOB("DATA",456,"ABC")=1
S ^GLOB("DATA",456,"DEF")=2

MUMPS would reuse the most recently referenced global name and all but the last subscript (the keys are called subscripts), and then build a new global reference from that. The “most recent reference” was tracked process-wide, no matter which routine had made it. Using a naked global reference was super risky if you didn’t control every line of code between the most recent global reference and the line that used the naked syntax.
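
A hypothetical sketch of how that could go wrong (LOG is a made-up label):

MUMPS
S ^GLOB("DATA",456,"ABC")=1
D LOG           ; if LOG references any other global, the naked indicator changes
S ^("DEF")=2    ; this may now land in a completely different global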

Sure, it was a shortcut, and so frustratingly difficult to find and understand. Thankfully, it was mostly in older Epic MUMPS code and was a practice that was actively discouraged in any new code.

I started to look at the Cohort code. The project was complex both from a technical perspective and from a timing/coordination perspective. I couldn’t just start on line one and work my way through the code. The other Cohort developers would also be touching the same code as they worked on other projects. We had source control — it was: “Here’s the source: One copy. Don’t mess up something another dev is working on.”

After a few days of getting a better understanding of the Cohort source code (HEY TL, I know what you did (and I know she’s reading this)—it was a great way for me to deeply learn the Cohort code), I came up with a different proposal for my TL from what she had originally suggested.

$TEXT is a function built into MUMPS that reads the source code of a routine. My proposal: write a parser that would make the changes to as much code as it could and flag the changes that needed to be made manually. She gave me about 2 weeks to work on the parser, as it seemed to address a lot of the technical and timing concerns. I jumped in. I’ve worked on a lot of parsers since and find them very intellectually stimulating. Writing a parser in MUMPS was all sorts of ridiculous at the time. It had to adhere to the same core requirements as all MUMPS code. Limited stack, limited memory, slow CPU, no parsing libraries, no objects, simple string manipulation, … it was very basic.
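
Still, the reading side was surprisingly small. A minimal sketch using $TEXT, assuming a hard-coded routine name and a scratch global (the real tool did far more than this, of course):

MUMPS
RDSRC   ; pull a routine's source into a scratch global, one line per node
        N i,line
        S i=0
        F  S i=i+1,line=$T(+i^MYEXAM) Q:line=""  S ^TMP($J,"SRC","MYEXAM",i)=line
        Q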

If you step back and look at MUMPS from a high level, at the language and its behavior, you’ll see machine code. It has so much of the same essential look and functionality (and limitations). The original designs and implementations of MUMPS had it running as its own operating system (yeah, it’s THAT OLD). The connection to machine-code patterns makes sense, even though MUMPS is interpreted. There weren’t lots of programming languages to inspire language design back in the 1960s. Yep: 1960s.

My approach was to build a global structure mirroring the code base as much as was needed. Things that didn’t matter were ignored and not recorded (as it was a slow process already and recording source code noise wasn’t worth the time/disk). Using dozens of test routines with the most complex Cohort code I encountered, my utility improved over the next two weeks to where it was processing and converting the majority of code successfully and logging code that couldn’t be safely converted automatically.

I know I went a little long and the TL was fine with that as she could see the end was in sight and the time spent was still less than if I had done the same work manually. I’d guess it took me about 3 weeks with normal interruptions. I specifically recall it was about 70 hours total.

It was sincerely cool to see it churn through the code and make the necessary modifications. The parser was actually pretty sophisticated and had to handle the nuances and complexity of the Cohort code and MUMPS. It recorded routine jumps, functions, gotos, variables, … so much.

At the end of my development, we ran the tool on the Cohort code base successfully. The dozens of logged issues were fixed by hand by the team collectively. It worked. I think there were a few issues, but it had made many, many thousands of changes to the Cohort code base, so no one was surprised that it encountered a few bumps. I had spot-checked its output by hand, but the sheer number of changes meant it was nearly impossible to catch every issue it inadvertently caused.

The project was a success. My TL appreciated my out-of-the-box thinking about how to get the project done successfully without wasting Epic time or resources; in fact, it was faster than the manual approach had been expected to take, and with many fewer interruptions to the other Cohort developers (🤣, there were only 2 others at the time).

The code was eventually included in Cohort’s official code base for a variety of reasons; I believe it was called CLPARSE (or CLXPARSE?).

One of the other Cohort developers later took the code and modified and extended it to become Epic’s first linter for MUMPS code. My recollection is fuzzy, but I think the name became HXPARSE which is likely far more familiar to Epic devs than Cohort. 😁

An Upgrade to Viewing Chronicles Data

5 min read

One of the strengths of a good R&D leader is that they’ll let you explore (and build) tools that should save company time and resources. Viewing Chronicles database records in a developer-friendly format was a usability challenge. Chronicles isn’t a relational DB. Nor is it, as some might think, a NoSQL database. It’s …, well, an amalgamation of a variety of systems, in many ways taking the best of each and mashing them together into a very adaptable solution. (In fact, it did things in 1993 that many modern database systems still can’t do efficiently, features that would benefit many developers.)

A primary reason there wasn’t a tool that made viewing a full record straightforward is the way Chronicles records were stored in the MUMPS globals (the data layer) at the time (the storage has not changed much since my early days at Epic; it has primarily just been extended, and the classic MUMPS global still remains the only storage). Essentially, there are different storage layouts for different types of Chronicles items in a database. Chronicles uses a sparse storage layout, which significantly improves performance compared to most RDBMSs and (generally) reduces disk storage requirements as well.
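
To give a feel for what “sparse” means here, a purely illustrative sketch (this is NOT the actual Chronicles layout; the global, subscripts, and item numbers are invented):

MUMPS
; only items that have values occupy storage; unset items cost nothing
S ^EMP(1,100)="SMITH,PAT"        ; record 1, item 100 (a name)
S ^EMP(1,450)=62985              ; record 1, item 450 (a date, stored as a number)
S ^EMP(2,100)="DOE,ALEX"         ; record 2 never set item 450, so no node exists for it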

Back up for a second though. Epic has used uncommon terms for various systems for decades. It’s been a sore spot in discussions with people who are used to more common database systems.

Here’s the basic breakdown:

Each entry is a Chronicles term → the industry term, followed by notes.

  • Database → Collection of related Tables. Too bad Epic didn’t change this term decades ago.
  • Dictionary → Table Structure, Schema. I don’t remember why “dictionary” was used.
  • Item → Column.
  • Master File → Static Table. Build data, static things (ex., medications).
  • Category List → Pick List. Table as a list, usually static, but with only an identifier and a title and limited other meta-data; ex.: Country.
  • Multiple Response Item → One-to-Many table. Feature of Chronicles to not require a secondary table to store multiple values.
  • Related Group → One-to-Many table. A collection of multiple response items.
  • Over-time → DB Row storing value and timestamp. Definitely an Epic differentiator for reporting and storage efficiency.
  • The .1 → Primary Key. Probably the table’s primary key, but sometimes like a ROWID in some DB systems.
  • Database Initials → Schema Name. Due to the way Chronicles stores data in MUMPS, Databases have 3-letter codes, like EMP for Employee.
  • Networked Item → Foreign Key. Item that points at the ID of another Database.
  • Global → Storage. See this article for now.

Those are some of the highlights. Chronicles has support for a wide variety of index types as well. There’s a lot more about Chronicles that isn’t relevant here. I wish we’d had this cheat-sheet guide to hand out to non-Epic staff back then.

As Chronicles is very much an internal Epic-built creation that evolved over the decades, it did not integrate with “off-the-shelf” tools without Epic putting resources into providing APIs that let those tools access an Epic system. (Not too surprising, as Epic is a commercial proprietary system.)

One of the challenges I kept running into during my tenure on Cohort was that I often wanted a holistic view of a Chronicles record (and in particular there was a recurring need for it on a specific project I’ll talk about later). Chronicles has built-in reporting tools (Report Generator), but they weren’t targeted at developers and they weren’t great for ad-hoc developer needs either. So, one afternoon I built a little experiment that allowed me to see all of the data for a Cohort database (I’m pretty sure it was OVR, which was Cohort’s storage for lab results). There were lots of items in the database and finding the data quickly was painful. Decoding categories and networked items …, it was cumbersome.

I showed the horribly basic results to my TL and she did not discourage me from continuing. 😁 Over the next few weeks, working extra hours on and off, I built a nice little viewer. But I hadn’t stopped at the basics. I reverse-engineered the ANSI escape codes for VT100+ terminals, scoured what few technical books I could find (I found ONE BOOK!), and built a reusable scrollable viewer for this new tool I’d built. REMEMBER: No Internet for research.
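
For a sense of what that escape-code work looked like, here’s a tiny sketch of VT100 sequences driven from MUMPS (just the idea, not the viewer’s actual code):

MUMPS
CLS     ; a few of the VT100/ANSI sequences a viewer like this relies on
        W $C(27)_"[2J"                          ; clear the screen
        W $C(27)_"[1;1H"                        ; move the cursor to row 1, column 1
        W $C(27)_"[7m","Results",$C(27)_"[0m"   ; reverse video on, label text, attributes off
        Q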

The basic steps to use the tool were super developer friendly (I was doing DX before DX became a loved and unloved term!):

  • Pick the database by typing in the 3 character code.
  • Tool would respond with the friendly DB name.
  • Select the records using some existing code that allowed record ranges, etc.
  • Finally, select the Chronicles items to view.

The tool would then brute-force its way through the various Chronicles items and display the results in a 128-column viewport (if the terminal supported it), allowing scrolling up and down. In 1993/1994, this wasn’t a common user experience, especially for a developer tool. The tool wasn’t efficient because of how Chronicles was structured: it would look at the database dictionary, and then go on a hunt for the items that might be available.
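
That hunt was essentially $ORDER loops over whatever nodes happened to exist. A hedged sketch of the idea (the global name and subscript layout are placeholders, not the real Chronicles structure; RID is assumed to hold a record ID):

MUMPS
HUNT    ; walk whatever item nodes exist for one record and print them
        N item S item=""
        F  S item=$O(^OVR(RID,item)) Q:item=""  W !,item,": ",$G(^OVR(RID,item))
        Q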

My TL and others on my team started using the tool and I was encouraged to contribute not only the code for the tool but some of the screen code to the Foundations team so that everyone could use it (and start to build more DX friendly Chronicles/Foundation tools).

After a few more weeks of work getting things moved over, documented, and lined up with release cycles, the tool was born:

EAVIEWID

I didn’t use a lot of my normal 40 hours working on this initially, but the little bit of encouragement from my TL and team inspired me to create not just a one-off tool, but an actual utility that was used for decades (and then thankfully got rewritten after it was discovered that it was being used at customers — it was not intended for production use given how it worked).

No SQL

(And if you’re wondering, … there was no SQL available at the time for Chronicles data.)

Opportunities

Have you built something like this for your employer? Have your managers been encouraging? Why?

Subscriptions on the Way!

1 min read

My blog post planned for today was to announce that I had a subscription email set up so you could have my blog posts delivered directly to your email inbox.

Instead of spending the time writing the post, I worked on setup instead. And then … it went longer and longer, and it seemed like the settings and choices never ended and … it’s not going to happen today.

So, this post is about a thing that is coming soon.

I think I got caught up a bit in the “look how shiny and easy this is” marketing fluff and enthusiastic user testimonials, and didn’t see that getting things off the ground wasn’t so straightforward.

I’m very likely to use the services from buttondown, but the onboarding has been a little clunky, and I’ve lost some data due to some frustrating user interface choices. It’s probably super easy to get something ugly and basic together using buttondown, but I’m not happy with that.

Till next time …, when I plan to be actually done with the setup.

So many hats, so many responsibilities

6 min read

I mentioned in a previous post that early Epic employees, like in many startups, had many responsibilities.

While making coffee and cleaning coffee pots didn’t specifically improve a skill I’ve needed since then (I still don’t drink coffee), we did much more day to day.

So many Hats

One thing really missing from the modern Epic employee experience, and frankly from most software development careers, is taking on work that would normally be done by other roles/staff.

The primary differentiator is that the team I was on was self-supportive. The four of us had to handle everything that was happening as it related to our software application.

Everything.

Of course, this included software development. It included all quality assurance, from code review to end-user quality assurance. There were dependencies on other teams’ products and their lifecycles and release schedules (back in the 90s, Epic releases were done product by product on their own schedules and weren’t always coordinated cross-team). We had to package our own releases AND either deliver them to customers and walk them through the upgrade, or perform the software upgrades ourselves.

Sure, I can hear you saying, “just like a startup.” Hold on.

In addition to creating the software, we wrote about the software as well, both from a technical perspective and an end-user perspective. While our writing skills may not have been worthy of an International Award in Technical Writing, our output was as good as other software companies’ at the time. Just for clarification, and to put this in perspective for my readers: there were no “screenshots,” as this was all terminal development. If we wanted a screenshot of something included in documentation, we had to draw it by hand using ASCII art (our documentation used a fixed-width typeface at the time in a very rudimentary Epic-built text editor—not something like Microsoft Word). Think Linux man page in terms of what was possible.

With the exception of being able to use modern tools to write documentation, you’re still thinking, “sounds like a lot of small startups.”

The Phone

When I look back at those years, though, the most interesting and, I think, most useful role was doing hands-on customer support on the telephone. There wasn’t a layer of phone support (tier 1, 2, 3): it was just us. If a customer called with an issue, they’d speak directly to one of us (after going through the main Epic number to get routed to us). We had a rotating duty of covering Monday-through-Friday support, but we also carried a pager for after-hours emergencies (thankfully, those were rare at that time given the nature of the usage of the software we were selling).

I’m an introvert on most days, so getting me to take unexpected calls and have a meaningful interaction took some practice. I wasn’t thrown to the wolves though as the TL made sure that I had back-up available so that A) customers would get the highest quality service and B) so that I wouldn’t have a support-breakdown. After a few calls though, and getting to know the folks on the other end of the phone, it became second nature and a bit of challenging fun to work through the issues they were having.

As expected, I don’t remember most of the calls as they all blur together. I know sometimes they’d call very unhappy about something that had happened. In fact, there were unfortunately more than a few times when there was yelling, so being able to listen and respond calmly regardless of the target of their frustrations proved to be invaluable. I recall one time when a customer called very angry about something that was happening with their system. He was yelling and extraordinarily unhappy. He was loud enough that I remember someone else from the team coming over to see if they could help (and this wasn’t on a speaker phone!). After 10-15 minutes of yelling, walking through the problem, and addressing the concerns, he was laughing and joking and we were talking about what plans we had for the weekend (and he did apologize for his demeanor earlier in the call).

I learned a sense of empathy for what it’s like to be a customer of the products we were building, the kind you can’t get through surveys or through an immersion trip (I definitely have a post planned about Epic’s immersion trips). When a customer was calling, it was very unlikely they were calling to see how we were doing (on rare occasions, it did seem like they were mostly bored and just wanted to chat). Far more likely was that they were calling because they needed a helping hand. It could have been something they caused, or something the software was doing (or hardware or … who knows!). I know they appreciated being able to talk with the developers who made the product rather than going through a support-tree.

The direct customer interaction when things weren’t going well — it’s had a lifelong impact on the way I’ve built and thought about software. It helped me develop troubleshooting techniques that a lot of people could use.

A few years after I started, it was common that teams had at least one person in a full-time support role, so software developers stopped taking direct calls. It’s a shame too, as a lot of developers at Epic would have benefited from the experience, even if it was only for a year or two.

If you’re saying to yourself, “but I do support!” Great! Is it direct from the first interaction? That’s what can really help you learn about your own software and the perception others have. If it’s being filtered by other support staff, you’re not seeing the whole picture. And, if you’re a manager who thinks reading summaries or even details about support issues is a substitute for actually doing the work: LOL.

Try it yourself!

And while a world of software that sells internationally makes a direct phone call to an engineer extremely complex when dealing with languages and time zones, I’d urge new startups and existing companies to consider how they could get their software developers on the front lines of support occasionally, especially if it can be personal, on the phone/video, and not only through email or chat. The ability to think on your feet, maintain composure, troubleshoot, empathize, organize, prioritize, and solve problems is a skill set that can help every developer level up.

If you’re in an environment where you would like this opportunity, talk to your manager and see if there’s some way you could be more involved with customers directly. You won’t regret it…, most of the time. 😁

Have you done direct customer support over the phone?

Discount Ends Soon! (April 5, 2024)

And one more thing, there’s only one more week left for a discount on my résumé/LinkedIn profile review service. Don’t miss out on the savings!

Cohort needs a Windows App

4 min read

Cohort, if you recall from earlier posts, was Epic’s public health laboratory management system. RIP. As it was the first product I worked on, I was interested in its success even though the product wasn’t compelling to me. One function was designed for rapid data entry, as lab techs were running hundreds of tests and, in many cases, then inputting the results manually.

While some lab machines interfaced directly and stored results into the Epic system, there were many that required manual input. Data input was frequently completed in an 80x25-character, table-like layout. Fields often needed validation at both a field level and a “row” level, as some inputs could only be validated with other values in the row being present.

A simple example might be a test type where, depending on the type selected, the range of acceptable values for a numeric result might change. Or, the user might type in a number, like 65, then choose a unit of measure like mg/dL. If the lab result were for an HDL test (for cholesterol), those values would make sense. But if the user had instead typed in 450, it’s very possible that Cohort would have alerted the user that the value was likely out of range. Or, if the user had typed 65 but selected mg/mL for the unit of measure, again, it would have been flagged.

(I’ll fully admit that I don’t remember if the unit was automatically selected when resulting a test like HDL, preventing that type of error. There were definitely connected fields, however, with more complexity.)
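
As a rough sketch of what a row-level check like that amounts to (the CHKHDL name, the range, and the unit handling are hypothetical, not actual Cohort code):

MUMPS
CHKHDL(VAL,UOM) ; return 1 if an HDL result looks plausible, 0 if it should be flagged
        Q:UOM'="mg/dL" 0           ; unexpected unit of measure -- flag it
        Q:(VAL<10)!(VAL>120) 0     ; outside a plausible range -- flag it
        Q 1                        ; value and unit make sense together

A call like $$CHKHDL(65,"mg/dL") would pass, while 450 (or 65 with mg/mL) would be flagged.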

As my Cohort knowledge expanded, I had little specific awareness regarding what was going on with Epic’s Legacy (later renamed to EpicCare) at the time other than they were building the application with a tool called Visual Basic (version 2 when I first started!). I didn’t have a license for Visual Basic at work or at home.

But what I did have was a license for Microsoft Visual C++. It had been released in February of 1993, with another version, 1.5, released in December of 1993. (Honestly, I’m not sure why I owned a copy; it may have been through an educational discount.)

During the majority of my time at Epic, I spent many hours at home every day learning new technologies that I could potentially apply to my job, both to satisfy my desire to learn and try new things and to try to make Epic a better place.

At some point during my Cohort time, I decided to make a graphical Windows 3.11 Cohort app. Yep. It sounds a bit ridiculous but as you’ll learn as I continue my Epic experience on this blog, that’s not uncommon.

I dove into learning Windows programming and learned what I could about the Microsoft Foundation Class Library. A big shout-out to Charles Petzold’s “Programming Windows” book at the time which helped me immensely.

After a few weeks, I showed my progress to my TL and she was impressed. I admitted that I had no idea how to make it into something more, but that exploring the opportunities was compelling. In some ways, I knew that I was showing initiative and also that my day job wasn’t challenging enough for me. I wanted more. I asked my former TL what she’d done after I showed her the demo, and while she didn’t remember the specific time, she is confident that she would have relayed my success and interests to Carl, who was her TL at the time.

The application I created wasn’t comprehensive — it supported data entry, saving, etc. But by no means was it even 2% of the functionality of a deployed Cohort system. Still, for me, it was an exciting journey of learning something new and having an opportunity to show it off at work.

Over the years of working at Epic, many people seemed to believe that I had access to some secret sauce that helped me succeed. I didn’t consider it a secret and have been willing to tell folks that it was at its core: hard work and lots of hours. I learn by doing not by talking or even just reading about tech. There’s not a secret shortcut or “one easy step.”

While building this little demo application, I started to learn how the Windows operating system, which would become the focus of much of my professional development career, worked internally. That early awareness grew into broad and often deep knowledge that helped me make better design and development choices for decades.