Yelling at the Foundation ...

8 min read

It’s not something that I’d been trained to handle as a team lead at Epic. OK … laughing now, as there was no formal training for team leads back in the 1990s. We learned by watching, by osmosis, and with more than a bit of luck. I was wholly unprepared for the day that I was yelled at by another Epic employee. This wasn’t just a “raised voice” kind of yelling, but a full-volume, genuinely angry yell.

It all started with someone on my team stopping by to say that another Epic employee had been asking for support with a problem he’d been encountering with Chronicles. He was on the technical services team for one of the Epic products and was investigating a potential customer issue. My team member’s response was curt but polite, as the question could generally be answered by reading the documentation of the day. Rather than repeating the documentation, he’d asked whether the other employee had checked it.

Note to readers — this was long before the era of “just put it on a wiki and send a link.” The documentation was authored either in early versions of Microsoft Word or in the Epic Breeze text editor. Either way, there wasn’t a way to just point at it.

Second note to readers — the Foundations team was responsible for the Chronicles database, global mappings, some release tools and all of the system libraries at the time. In addition, we were building the new Foundations GUI libraries and communication platform (EpicComm).

You can probably guess where this leads, and you’d likely not be wrong.

My team member knew his response might not have gone over well, given an email exchange they’d had afterward. The other employee had also stopped by in person after I’d talked with my team member. By then, my team member’s generally very calm demeanor had dissolved into a small crisis. He hadn’t intended to anger the other employee but was certain he had.

Some weeks support was relatively light, with ample time for development projects. This wasn’t one of those weeks, though: my team member was very busy with support, and the other employee had attempted to jump the line to get his question answered immediately. The other employee was furious and had stormed from our office area, driven to “correct” the Foundations team’s mistake (through what means I have no idea, other than complaining to his TL and up the chain).

His issue was important to HIS customer.

Realize that Foundations had not only every R&D employee as a customer, but also all Epic customers (via their Tech Support reps). Some weeks it was definitely more than a single person could handle. At the time there were around 8 people on Foundations and 150+ employees at Epic. The support ratio was not in our favor. It was becoming more common for support from one week to spill into the next, which meant multiple developers were doing support rather than new development.

I don’t remember if I called or walked over to see the unhappy employee, but I wanted to see if we couldn’t resolve the issue amicably. I wasn’t going to apologize, as we weren’t directly in the wrong. Maybe the message hadn’t been delivered in the best way possible for the employee, but that had already happened. No more than 10 minutes later I was at the employee’s door. We couldn’t talk there as he shared an office (yep, even back then Epic was constantly running out of office space). I asked if we could find another location to chat, and thankfully a nearby conference room was available. Weirdly, it’s still vivid to me 25+ years later: I know exactly which conference room we used, in the newest wing of the Epic Tokay building, overlooking what was then the glorious (ha ha!) Westgate Mall.

We started talking about what had happened, in his own words. I wasn’t looking to blame anyone, but I needed to hear it again and wanted to make sure he had a voice in the conversation rather than me simply being protective of my team. He grew angrier as I listened. I explained how the Foundations team did support and that requests needed to be triaged. It wasn’t first come, first served, and it wasn’t a queue of requests. It was very clear he wanted the best for his customer, but he couldn’t understand why his issue wasn’t the most important. Honestly, I don’t recall what the issue was — I’m confident that if it had been urgent, my team would have taken appropriate action. (Epic tends to train a ‘firefighter’ mentality more than proactive fire prevention.)

I failed to provide the response or explanation he wanted to hear. At that point, extremely emotional, he stood up; I could read the anger on his face and, unfortunately, tears, and he started to raise his voice. I calmly asked him to sit back down so we could talk a little more and — BOOM. His voice rose to maximum volume and he stormed from the conference room, yelling a few obscenities and general frustrations at me, at Epic, at the WORLD.

It was surreal. I’ve only been in one other situation at Epic that I remember involving yelling (and that was between two other people). This type of situation didn’t happen at Midwest Nice Epic. But there it was.

I raised my voice to be heard — again, as calmly as I could. I’m certain my adrenaline was pumping and that it came through in my voice… I wasn’t sure how this was going to play out. I knew I needed to deescalate the situation rapidly. Not only is yelling in the office very unprofessional, but my “flight” mode had begun to activate, urging me to just leave the situation.

I firmly asked him to return to the conference room so we could talk about solutions (which I’d begun to try to do before he’d left yelling). He turned around and I responded to that gesture by saying I wanted to help him.

Defeated, but willing to try, likely realizing what he’d done, he walked back to the conference room.

During the yelling I’d had a eureka moment that I was about to explain to him to see if he could both get behind the idea and also help drive the idea. My idea solved multiple problems with this one quick trick … OK. Not super quick, but the idea was sound and worked for decades in some form.

With his help, we’d form a group of employees, one from each product team at Epic, who would become the first responders to Foundations questions. These employees would essentially provide Tier-3 support as Foundations experts. When one of them couldn’t address a question or issue, they would contact Foundations directly. Only that employee (except in emergencies, of course!) would be allowed to contact Foundations.

He was very excited about the idea and willing to help even if he wasn’t going to be the representative for his team.

He’d fully regained his composure after we talked it through. He apologized for his unprofessional behavior and left the conference room. I took a moment to contemplate and walked to Carl’s office to discuss what I was planning to do. I think I wasn’t going there to seek approval specifically, rather I wanted encouragement to continue the effort.

I didn’t stop by to see Carl often; he knew my dropping by without an appointment had to be important.

Thankfully he was available and we talked about what had happened and the resolution. The only wrinkle we discussed was picking the right people and whether it would be a rotating duty on teams — but decided we’d let the broader group of TLs work that out.

That was a day.

The new support process for Foundations was just the relief that my team and Epic needed. Rather than relying on a few experts, we’d distributed the support load and, more importantly, the responsibility and knowledge across a much larger group of employees. Some larger teams had multiple support representatives helping. Support was more efficient and better targeted. Having someone who knows the product handle lower-level Foundations support was far more effective than having someone with only shallow knowledge trying to help.

Maybe I’ll post some day about Epic’s internal R&D tool Null Exception which was created to solve a larger problem that was happening more than a decade later.

Please don’t take this as an excuse to yell at me to get results. 😁

The Passion of Epic

12 min read

Many many years ago, in a building and fantastical land far far away, I enjoyed my job every freaking day.

Every day. 100%!

I worked LONG honest hours. Days not filled with meetings and going to long lunches and chatting up everyone about anything. It was work. Challenging new work. It tickled all the right parts of my brain in just the right cognitive ways.

It’s interesting, as I reflect on that period at Epic, to see how that passion ebbed and changed over the decades, and that even recently, working on my own, I found that same energy again — multiplied beyond what I experienced in those early formative Epic years.

My Day

An average day back in 1996-1998 for me was as follows:

  1. Wake up about 5:30am
  2. Shave, shower, eat breakfast. Pack lunch.
  3. Leave my apartment around 6:45-7:00am
  4. Arrive at Epic about 10 minutes later — there were plenty of underground parking stalls available at that time of the day (about 7:15am).
  5. Work solid till about 7:00pm.
  6. Return home, eat, do more programming and watch a TV show.
  7. Sleep.
  8. Repeat.

I didn’t do that on weekends very often, as I needed a bit of recovery time and had things to get done — generally mundane things like grocery shopping, laundry, exercise, cleaning, etc.

And honestly, looking at it as a list like that — it might seem tragically boring and full of missed opportunities to many of you I suppose. I hadn’t met many new friends — so zero meaningful social life, and the reality of it was that I enjoyed doing different things than many of my friends at the time. They went to work and then wanted to disconnect from “technology” and programming.

I have some moments of envy for adults at a similar age today with their modern tech that would have likely filled evenings with video games and other distractions, but they’re only fleeting moments. I like learning more than I like “play.”

During that period I was leading a small team creating new building blocks for the next generation of Epic applications. We were creating components and frameworks for a new Visual Basic 5 based Foundations library. New Epic applications would use the framework to build their applications and older applications would start to migrate and use what they could from the new code base (which seems funny saying now as no GUI app was older than 4 years at that point).

Foundations GUI

The project didn’t have a fancy name. No fancy logos. No marketing department.

It was Epic, so none of that should come as a surprise. Back in 1996, there definitely wasn’t a marketing department. We literally added “GUI” to the Foundations team name. 🤣

We weren’t exactly starting from scratch. There was some precedent for the work, established by the few teams that were already using Visual Basic (see EpicCare/Legacy/Cadence). Most of that code wasn’t usable in a component model, though, and couldn’t be distributed as Foundations code without at minimum moving it to a Foundations namespace.

MUMPS routines at Epic were grouped by application, using one or two leading characters to indicate the originating team. It wasn’t a perfect way of organizing code, but it was effective enough and reasonable to manage. The new code we were creating would need to be E*, which was reserved for Foundations (in addition to a few special “system” level routines starting with %Z).

At first I think we were all a bit naive about how complex it would be to build a Foundations GUI control set that applications could use. That naivety kicked us in the project plan not too long after.

I painted no rosy pictures for my manager at the time: we’d work our butts off, but there was still a monumental amount of work to be done. Think scaling Everest, not minor mountains like Pikes Peak.

The application code really wasn’t designed for reuse and hadn’t included many features that were core to Chronicles, the Foundations database. No fault to them — they built only the features they needed, nothing more.

It wasn’t feature complete, and much of its functionality wasn’t designed for scale. So, while some code was copied for a quick win, 99% of it was rewritten. The code needed to handle a far larger variety of uses than it was originally designed for, and at the same time we introduced a number of performance optimizations and lessons learned. We were generalizing the code and adapting it to a much larger set of functionality.

Further, there were a significant number of connections in the code to the Electronic Data Interchange (EDI) team and a number of their databases (AI*). That code and implementation was one of my least favorite parts of what EpicCare had built. It was … bad, to put it mildly.

Chrontrol

After some debate, we did decide that a few of the controls minimally needed developer “fun” names. Not many stuck, but the entirely original Chrontrol was one of the favorites. The Chrontrol represented a single value of any supported Chronicles type (single response, for those of you keeping track). It could do most everything that a screen paint entry field could do, from basic text entry to dates and everything in between. It turned out that — well, it was extensive. Nothing like taking 17 years of development at Epic and cramming it into a new platform in a few short years.

Frustratingly, a tremendous amount of code in Chronicles made an expected but unwelcome choice. It frequently manipulated the terminal device. Like, OH MY JUDY, code was often littered with the expectation that there was a terminal device that could be written to at any time.

Why was that a problem? Because the communication protocol back in the 1990s used TELNET. Refer to the MUMPS device I/O commands for more details. Since we were adding this functionality to Foundations, keeping separate copies of Foundations or application code wasn’t desirable at all. Instead, the team needed to walk through each block of code, every tricky programming point, everything … to look for unexpected IO and decide whether it needed to be made conditional or eliminated.

HOLD UP!! Before you think walking/stepping through code wouldn’t have been too bad — THERE WAS NO DEBUGGER. So, “stepping” through code was either manual or had to output tracing to globals (as writing to the screen was obviously off limits).

I’d like to say we caught them all through just a code review. Some may have been left unintentionally behind.

Thankfully there was a tremendous amount of testing and use before any code left the building. Especially at first, it wasn’t infrequent that we’d encounter or get a report of some issue with IO. It wasn’t always Foundations code, but we investigated each one. Fun!

No. Not fun. Satisfying when solved, yes.

You may wonder what drove my passion during this time.

Challenge, Curiosity, and Creativity

What we were doing wasn’t easy. It wasn’t straightforward. It pushed the edge of Visual Basic 5 beyond what I know Microsoft was expecting and intending (I later had some conversations about just that — many Microsoft developers on the Visual Basic teams were “impressed” by what we’d done with their tool).

We weren’t just building a component library. We were establishing new APIs on the client and the server, exploring the often mind-boggling experience that was Windows 95.

We divided up the work with a huge amount of overlap. I took ownership for what was soon to be called ChronGrid. The team did a remarkable job building and testing the new Foundations GUI.

The ChronGrid nearly broke me multiple times. SNAP. CRACK. POP.

The challenge pushed me along though. Some days were certainly a drag: bug, missing feature, missing feature, missing feature, bug, … As I was applying so much new knowledge of the internals of Visual Basic 5 and pushing the Windows API to bend to fit our requirements, I stayed on track.

Weirdly, many of our user experience requirements were driven by functionality of Chronicles Screen Paint and basic text interfaces. While I suppose that may sound ludicrous to some degree, the workflows and efficiencies built into applications were in part key to their successes. Functionality was important, but the performance of workflows was elemental in Epic’s infrastructure and application designs. Watching an appointment scheduler zip through a text based scheduling workflow was very motivational.

It was an interesting conundrum in many ways. Rendering hundreds to thousands of rows of one to dozens of columns (related groups, for those who continue to keep an Epic score), with each cell having customized rendering based on its core type and configuration — that was a lot. Heck — there are many modern JavaScript-based grids that can’t do that well on modern hardware using all the latest tricks. Having just written part of a virtual grid in JavaScript late last year for a project — it’s not easy. Remember — the computers Epic customers were using back then would be considered “retro computing” today. I think we achieved often-stunning results that were frequently underappreciated, as it was simply expected that the delivered products would be as fast as possible.
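
The core windowing idea behind a virtual grid (render only the rows in view, plus a little slack) can be sketched as a small calculation. This is a generic illustration of the technique, not the ChronGrid's actual code, and the names are mine:

```typescript
// Compute which rows of a virtual grid are visible so that only those
// rows (plus a small overscan) need real on-screen widgets.
function visibleRowRange(
  scrollTop: number,      // pixels scrolled from the top
  viewportHeight: number, // height of the visible area in pixels
  rowHeight: number,      // fixed height of each row in pixels
  totalRows: number,
  overscan = 2            // extra rows above/below for smooth scrolling
): { first: number; last: number } {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last };
}
```

With a 100-pixel viewport and 20-pixel rows, only a handful of the thousand rows ever need widgets, which is what made rendering feasible on the hardware of the day.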

It was more than a lot for the Visual Basic 5 platform and Windows. Many common interactions with the UI and standard patterns simply didn’t behave the way we needed.

What was frustratingly interesting about the Windows APIs at the time (and for many decades later) was that managing user input focus was a Jurassic Park-sized pain. It was as if the Microsoft developers didn’t have experience building enterprise applications. Huh.

A Different Kind of Focus

The APIs lacked precision and fine-grained controls. There were two core issues:

  1. When the input focus was moved to another field either through keyboard control (like the tab key being pressed) or using the mouse, there was no way to prevent focus from being lost by another control. The newly focused control would get focus before the previous control lost focus — there was never an indeterminate state of “nothing has focus.” This meant that input fields such as the Chrontrol might be invalid for the moment. While that might seem OK — it wasn’t as there were often dependent fields in a workflow. The next field might change or disable based on another field’s yet to be validated user input. This meant that the input was bouncing around (faster than the eye could generally see), but it was happening.
  2. Controlling tab order within a control like a grid was … fun. For the Grid, it meant that there were actually 3 input fields, two off screen and one primary editor. The ChronGrid couldn’t create instances of actual Chrontrols without quickly depleting all GUI resources available in Windows. Instead, it hosted one control instance and kept resetting it based on the current cell. But, to manage focus, there were two inputs as I mentioned. One for “forward” and one for “back.” Depending on the state of the grid and settings, one or both of these hidden inputs might be enabled. When the user tabbed forward, if the “forward” input was enabled, it would briefly get focus and in doing so trigger the grid to … do one of many many things based on the current configuration of the grid. For example, add a new row or move to the next column in the current row.
  3. And as a bonus: many Chrontrols were matched to database values for category (pick lists) or database records (like selecting a medication). When the user typed in some part of the name and tabbed away, the app needed to immediately confirm that the field was valid and retrieve either the exact match or the list of potential matches for the user to select from. The user experience desired was that no field would be left invalid.

One frustrating holdover from text-based applications was that in most workflows the tab and ENTER keys were both accepted as ways to move to the next field. For a brief period, it was a hill I was willing to die on: eliminate the ENTER key as navigation, as it was very non-standard (and still is today). The applications weren’t behaving like other Windows applications. I stepped off the hill eventually; it wasn’t a great experience, but I conceded I didn’t care enough to continue the argument. It was also a common source of bugs when developers failed to account for it. Remember — this was non-standard behavior, so Windows had zero support for it. The ENTER key behavior made more sense in the grid, but not in other fields.
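
As an illustration only (this is not Epic's code; the names are hypothetical), the Enter-as-Tab navigation rule amounts to a small pure function deciding where focus moves next:

```typescript
// Hypothetical sketch of "Enter behaves like Tab" field navigation.
// Windows offered no built-in support for this, so an application had
// to compute the next focus target itself.
type NavKey = "Enter" | "Tab" | "ShiftTab";

function nextFieldIndex(current: number, fieldCount: number, key: NavKey): number {
  if (key === "ShiftTab") {
    // Move backward, wrapping from the first field to the last.
    return (current - 1 + fieldCount) % fieldCount;
  }
  // Enter and Tab both advance (the non-standard behavior described
  // above), wrapping from the last field to the first.
  return (current + 1) % fieldCount;
}
```

Every key handler in every field then has to route through logic like this, which is exactly why it became a common source of bugs when a developer forgot.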

The team really excelled at that time — churning out great code and consistently delivering. If they read this: You Did Great Work.

At Home

After this intense period I found that I transferred the passion from being “on the job at the office” to the home. 80% of my hobby and evening time was all consumed by projects and experiments directly related to Epic projects and needs. It was just on my own time with no time commitment. In some ways it was perfectly freeing as I chose my path and could explore without concern of misusing Epic “on the clock” time.

Passion

Was the passion repeated at Epic? I had a few more periods working on very interesting projects both short and long term, but rarely as intense and all consuming. It wasn’t age or maturity. I did my best with the hours I spent. Weirdly, I think I wasn’t working on big enough challenges. I craved that.

What’s been great in the last year is that I rekindled that same energy in several projects — multiplied beyond what I had even back then when I was … ah … much younger. I’ve found myself doing 8-10 hour days 6 or 7 days a week, driven by this same nearly insatiable quest for knowledge.

Crossroads

I’m at a crossroad right now though, deciding what to do next and looking for the next opportunity.

I’ve got several options, but none at this point are lighting the fires that translate to a longer term passion that I’d like to rekindle.

If you have opportunities or suggestions, please send them!

Are you energized by your job? Are you being rewarded for that energy?


My Current Development Stack

5 min read

My intention is to blog more about the ups and downs of the software development technologies I’m currently using. While I’m not ready to announce what I’m working on (primarily to prevent any more “SHIP IT YESTERDAY!” anxiety than I already have), I can talk about the tools and tech without concern about shipping schedules or the “what” yet.

The List - April 2025

One thing to note — my 99%-time project right now is building a web-based application. So, the list is definitely influenced by that. These are in no particular order:

  1. Rust. Sure, it’s popular; literally everyone is rewriting everything in Rust 😁, while all the vibe-coders wish their LLMs would do better Rust coding. I’m definitely not a vibe coder. I picked it for 2 reasons: cross-platform capabilities (Windows, MacOS, and Linux*), and a robust crate (package) ecosystem, since my project needs a robust web-server host. While it’s definitely not a favorite programming language from a syntax perspective, there are many things I do like about it. Further, with Axum, I can bundle the entire content of the web application into a single executable, so distribution is a snap (no, not Snap).
  2. Rust Axum - this is the web application framework I’m using with Rust. It’s fast and straightforward to use. I tried a few other options and many are “fine” as well, this one just was slightly better for my needs and is well documented (and used, so it’s easier to get help if I get stuck).
  3. TypeScript - I like types. Even inferred types. JavaScript is fine, but sprinkle on a few TypeScript type declarations and I am confident my code is more accurate and will have fewer bugs. I’ve tried JSDoc — and it’s more overhead for less benefit. One common issue with TypeScript is that its errors are not always clear, and a lot of libraries do gymnastics with types to attempt to provide the ultimate developer experience. When it works — great! When it doesn’t, it can be very frustrating.
  4. Svelte - I’ve used a lot of web and UI frameworks in the past few decades, but Svelte is the one that I keep coming back to. It’s the closest to “bare-metal” web development that is available. It has a few bells and whistles (like Runes/signals) that kick it up a notch. Svelte 4 was good, but Svelte 5 is the one that helps me be consistently productive. The primary downside is the availability of pre-built UI widgets. There are a few options that I’ll discuss in a later post, but the community hasn’t created the same quantity as is available for React.
  5. Svelte Kit - I use Svelte Kit for static (MPA/SPA) site generation. I’m not using it for dynamic content generation or server rendering. From what I’ve read, this is not uncommon. You’ll get client side routing and static generation as desired.
  6. Visual Studio Code - Look. I’ve tried other editors, but the extensions for VS Code are so numerous that it’s rare when something isn’t available. Unlike developers who seem to enjoy suffering through printf or console.log as a debugging technique, I make heavy use of debuggers, and debugging Rust in VS Code works like a charm (for what it is). I keep returning to it.
  7. MacOS & Mini - I bought the M4 mini late last fall. It’s plenty fast. I routinely switch to using Windows so I frequently use the wrong keyboard combinations for copy/paste/cut. Ugh.
  8. Ubuntu Server - I do nearly all development in a VS Code remote dev container over SSH. Not only does it make my development environment accessible from anywhere, I’m assured of a consistent development environment every time. I’ve crafted a few Docker configurations that handle all of my needs. Ubuntu runs on centralized hardware in our house so that I don’t need to hear its fans during the day.
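
To make the TypeScript point above concrete, here's a tiny, generic example (mine, not from any particular project) of the kind of declaration that turns a would-be runtime bug into a compile-time error:

```typescript
// A string-literal union documents the only values a field may hold.
type Status = "draft" | "published" | "archived";

function canEdit(status: Status): boolean {
  // The compiler checks this switch against every member of Status;
  // plain JavaScript would only notice a bad value at runtime.
  switch (status) {
    case "draft":
      return true;
    case "published":
    case "archived":
      return false;
  }
}

console.log(canEdit("draft")); // true
// canEdit("deleted") would be rejected by tsc, not shipped as a bug.
```

A few lines of declaration like this cost almost nothing and document intent at the same time.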

Productivity Boost

The wisest thing I’ve done regarding my productivity is buying the Mac.

Buying the Mac was not for the reason you might imagine. I’m not more productive with a Mac specifically.

I now have a computer (the Mac) dedicated to “work” and a PC dedicated to “everything else.”

It’s remarkable how much more focused I’ve been able to be with this configuration.

I couldn’t trust myself to not-hobby during my work hours (especially as I work in the same location where my hobbies live). By buying the Mac, most of my “hobby” software isn’t available either, as I’ve historically bought software for Windows. Even having no easy access to my personal email is a big productivity boost. I could have bought a second PC, but the Mac made better sense since I eventually plan to have the app I’m building work on MacOS as well.

Maybe Soon

  1. Elixir - Depending on how other things go … my wife is working on a training course and needs a web app/site for it. I’ve learned Elixir (and Phoenix) and it’s a serious contender. I could use SvelteKit, but it adds a lot of “noise” to just building a web site (plus features like auth and payments).
  2. Zig - I like Zig. I’ve been very productive building things in Zig. It’s refreshingly simple when compared to other languages. The biggest downside really is the lack of a complete package system. I’ve even explored using Rust and Zig together so that I could get the best of both.

🤓 - Did you know that I had a quote on the TypeScript web site for a year or more?

Where is Delphi?

6 min read

Had you heard that Epic dabbled with Delphi back in the 1990s? It’s true!

It was far more than dabbling though.

2025 marks the 30th anniversary of the 1.0 release of Delphi. Regardless of whether you pronounce it del-fee or del-fy, it’s impressive that the platform and language are still available for sale today, 30 years later (and have a healthy open source alternative, Lazarus)! While Delphi never achieved the fame and fortune of the elite technical stacks, it remained a valid and reasonable choice for building software at many companies for decades (and still does today on a much smaller scale).

Why isn’t Epic using it today if it was more than just a dabble?

One word: Borland.

A small R&D team had formed at Epic shortly after Delphi was released to investigate the possibility of building new Epic Windows applications, and rebuilding existing ones, using Delphi rather than the then-current choice of Visual Basic. While there were a number of technical issues with Visual Basic that could have been rectified by Epic teams better designing and architecting their applications, Visual Basic had a number of hard limits that were making application design and construction more challenging.

One slowly growing issue was that applications weren’t “dynamic.” There was no practical way for a Visual Basic developer at the time to create an application from components and run it efficiently based on configuration. There was no practical way to share code except by making copies. Creating anything reusable was a chore on a good day. It was disappointing, but we all knew, even if we were afraid to admit it, that Visual Basic was not the ideal programming language for a rapidly growing suite of connected applications.

Delphi offered Windows up on a wonderful platter of tasty IDE goodness. The IDE was as polished as Visual Basic (which was top of the class back then by a long shot). It exposed Windows APIs when needed, and provided ways to build components and libraries in a far more usable fashion. The provided component library (the Visual Component Library) was extensive and full source code was included! There was no magic as to how they’d built the library, so we could dive in deep and learn without guesswork.

Once the initial investigation happened, we cautiously green-lit the development of a replacement shell and library for all applications. Applications would be provided a shell and an extensive component library with comprehensive ways of building a more integrated and dynamic application experience.

The team was 2 FTEs, plus one person at about quarter- to half-time.

We spent about 8 months working on the project, and then in an afternoon we archived it to a shared NAS folder. Goodbye Delphi. 😢 It was never heard from or seen again at Epic (although last I knew, the code was still zipped up in a folder on the shared NAS development drive, until someone nukes it and no one misses it).

Delphi was a great platform that allowed us to fully exercise the Windows operating system and not be held back by the numerous limitations of Visual Basic. The development environment and language were remarkably freeing, without the complexities of using the Windows APIs directly or 😲: using alternatives like the Microsoft Foundation Class libraries paired with C and C++ (my positive spin on that library is that it has come a long way since it was first released in 1992 and has gone through a lot of growing pains).

What happened to Delphi at Epic?

Borland stock tanked: A LOT.

It’s frustratingly difficult to find the stock prices for a company that has undergone sales, etc. over the years. I can’t find the numbers. BUT, I recall the stock fell from around $80 to $2 in the course of a few months. Borland was struggling with product and market fit. Their revenue was down across the board and competitors like Microsoft were stepping up.

Betting the future Epic development platform on a product that might be sold or cancelled entirely was not a tenable plan. It was a disaster brewing. I know we fielded more than a few questions from then-active Epic customers about whether Borland was a safe path for us.

Good news though!

In October of 1996, Microsoft announced Visual Basic 5 (beta). Epic had actually been on the beta program for quite a while, but relying on a “future” unpublished version wasn’t wise: there were a lot of changes happening and frequent updates. (Weirdly, I was the only person actually approved to access the beta due to the agreement we’d signed; it was some connections I’d established with a few amazing Microsoft support engineers that had ultimately provided us with access.) Like with many Microsoft apps and platforms, the final shipping product often differs from the betas quite significantly.

Microsoft radically improved Visual Basic by adding control creation as part of the core environment and language. Not only could a developer create a packaged control that could be used by any ActiveX host, reusable code could be assembled into a compiled DLL (still as ActiveX, but with no UI). It was remarkable. It was all in Visual Basic. The final solution and techniques were a lot easier than Delphi too.

These changes were what Epic teams really needed to scale and share better than they had been. VB 5, now 32 bit only (thankfully), also allowed native code compilation, so gone were the days of interpreted code performance smells. Performance dramatically improved in many areas (though I’ll say that, contrary to many untested opinions, the VB runtime interpreter was remarkably fast and was not a bottleneck for many applications). This was still the era of Windows 95 as the primary OS, so there were still a lot of roadblocks, but the language and environment were no longer the stumbling block they had become in VB3. (Visual Basic 4 was primarily an upgrade adding optional 32 bit support, plus some spit and polish.)

Very shortly after, I pivoted to working on Visual Basic 5 infrastructure and building out the core Foundations GUI components that would be used for decades (thankfully retired now!).

I don’t miss Delphi at all, but it was a fun project.

Hi! Before you go...🙏

I really appreciate you stopping by and reading my blog!

You might not know that each Epic blog post takes me several hours to write and edit.

If you could help me by using my Amazon affiliate links, it would further encourage me to write these stories for you (and help justify the time spent). As always, the links don't add cost to the purchase you're making, I'll just get a little something from Amazon as a thanks.

I'll occasionally write a blog post with a recommendation and I've also added a page dedicated to some of my more well-liked things. While you can buy something I've recommended, you can also just jump to Amazon and make a purchase. Thanks again!

How to Flash Tasmota to a Sonoff S31 With no Soldering

7 min read

If you buy something from a link, Acorn Talk may earn a commission. See my Affiliate Programs statement.

Most instructions for flashing the Tasmota firmware to a Sonoff S31 wifi smart plug include a step of temporarily soldering some wires to the S31. No thanks. While I can probably manage it, they’re small solder pads and I didn’t want to mess up a brand new S31 with a botched solder attempt.

Here’s a way that involves a few small purchases that you can use in other projects and future S31s you may purchase.

Exposing the Internals

First, disassemble the S31 using either a small flat-head screwdriver, your fingernails (stronger than mine, ouch), or something like the iFixit Jimmy. I’ve used the Jimmy for many projects and use it often. It’s a no-fuss, simple way of gently prying open electronics. A thin screwdriver may work as well, but be careful or you may damage the case.

Prying like us

UNTIL YOU HAVE COMPLETED ALL STEPS AND REASSEMBLED THE SMART PLUG, THE SMART PLUG MUST NOT BE CONNECTED TO MAINS ELECTRICITY (like 120V). DO NOT PLUG IT IN UNTIL THE PROCESS IS COMPLETE AND EVERYTHING IS SNAPPED BACK TOGETHER. SERIOUSLY. AMPS CAN KILL.

After some careful prying:

Gray cover nearly removed

The gray end cap should snap off (don’t worry, there are no wires under the gray plastic cap that you may damage). There are two small screw covers that slide off the edges of the back, revealing 3 screws in total.

The secret screws are revealed

Once you’ve unscrewed those, you can separate the two remaining parts of the smart plug. You’ll be left with a plastic piece (the front) and all of the electronics.

Solder-Free Connections

The trick to the solder-free option is to buy an inexpensive set of breadboard jumper wires with test leads, like these Goupchn Test Hooks. The linked option is exactly what I bought and used. They worked like a champ.

Now, you’ll also need a way to flash the binary over USB. I bought and used the Moyina USB to TTL/Serial adapter.

Make sure that the switch on the adapter is set to 3.3V. Failure to do so is likely to permanently damage the S31, rendering it e-waste.

Next, connect the following leads to the serial adapter:

  1. VCC
  2. GND
  3. TXD
  4. RXD

USB Connections

The colors I selected don’t mean anything specifically; just be certain you’ve made the correct connections (I’ve shown them in the diagram below connected to the corresponding pad and pin).

Now, using the test hooks on the Sonoff S31, you’ll connect:

  1. USB VCC -> S31 VCC
  2. USB GND -> S31 GND
  3. USB TXD -> S31 RX
  4. USB RXD -> S31 TX

Note that the USB adapter’s TXD is connected to the RX on the Sonoff, and the RXD on the USB adapter is connected to the TX on the Sonoff. (TX = transmit and RX = receive; each device’s transmit line feeds the other’s receive line.)

There are 6 solder pads on the S31. Nothing will be connected to the 2 pads directly next to the GND pad. Do not connect the USB adapter to the D-RX or D-TX pads on the S31.

Correct wiring to the S31

It’s a tight fit to get the 4 test hooks onto the pads, but with a bit of patience, they hold firmly. Before you plug the USB adapter into power (your computer/laptop), double and triple check that the hooks are connected to the correct pads based on the required wiring and that none of the hooks are touching multiple pads. Check once more for safety (don’t rush; you’ll potentially wreck the S31 and/or the USB adapter).

Gripping!

Gripping, angle 2!

The USB Serial Adapter is at 3.3V, RIGHT?

Also, now’s a great time to confirm that the USB adapter is set to 3.3V. It is right?

Finger Gymnastics

Next up are a few finger-twisting moments where you’ll need to hold down the button in the center of the Sonoff S31 board while carefully not dislodging or moving any of the test leads, AND simultaneously plugging the USB adapter into your computer or laptop (Windows, Linux, and macOS are supported).

Here’s a simple tip: start inserting the USB adapter into the device of your choosing, but don’t fully insert it (so that contact isn’t made and the device isn’t recognized). Then, push the button and slide the USB adapter in the rest of the way.

You’ll need to hold the button for a long 5 seconds, and then you can release it, as the S31 has been placed into flash mode. Nothing happens visually on the S31.

Busy working!

Open a web browser to https://tasmota.github.io/install/.

If everything has gone to plan … when you click the connect button on the page, your Chromium-based browser will ask for permission to access the USB device:

Grant Permission to continue

Confirm that the device is correct (the specific COM# may be different for you) by clicking USB Serial Port (COM#) - Paired, then click Connect.

If you’re using a Linux distribution, like Ubuntu, this step may fail. However, it’s fixable by using the terminal and granting permission to the USB port that was selected (when I tried it, the USB port was at the bottom of a long list of possible devices).

```shell
sudo setfacl -m u:{USERNAME}:rw /dev/ttyUSB0
```

Substitute {USERNAME} with your current username and ttyUSB0 with the USB device you want to authorize (it will be shown in the device selection popup).
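If you’re not sure which device node the adapter was assigned, here’s a quick way to check (a sketch, assuming a typical Linux setup where USB serial adapters show up as ttyUSB* or ttyACM*):

```shell
# List USB serial device nodes; the adapter is usually the newest entry.
# Prints a fallback message if none are present (e.g., the adapter is unplugged).
ls /dev/ttyUSB* /dev/ttyACM* 2>/dev/null || echo "no USB serial devices found"
```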

Anyway … the device dashboard is next … select INSTALL TASMOTA:

Select Install Tasmota to continue

Warning — this will erase the device completely.

Warning!

Select Erase Device and click Next.

Erase Device Confirmed

Confirm your intention again and click Install:

Confirm you've selected the correct install

Now wait a while. Stretch. It takes no more than a minute or two. Don’t touch the device at this point.

Installing...please wait

Then, the moment you’ve waited for … it’s installed!

Installed!

WiFi Configuration Step

Now, you’re not quite done. Some devices will automatically launch into the configuration for the device, but it doesn’t appear to happen for the S31s.

Gently unplug the USB adapter without disconnecting any of the test leads. You won’t need to remove it fully from the USB socket. Wait a few seconds and push it back in.

After 5-10 seconds, check for a new access point via WiFi. Mine looked like this:

Find the new temporary AP

Select the appropriate WiFi. Now, through seemingly mystical … luck …, your web browser may automatically connect to the Tasmota configuration web app. Or it may not right away. My Chromium-based browser did every time, eventually. If it’s not working, you’ll need to check what IP address range was assigned to your computer; mine was 192.168.4.### and the gateway was 192.168.4.1. From the browser, I could have navigated to http://192.168.4.1 manually. But, as I said, the browser did so automatically.

On the main screen:

Main Tasmota, select Configuration

Then Wifi:

Select Wifi

Then fill in your WiFi Information:

WiFi connection information needed

When you click Save, the device will reboot and provide the new IP address you’ll need to connect to (assuming the connection was successful and DHCP is doing its thing correctly!):

Finally, done!

Once your computer has disconnected from the temporary access point, the S31 will be available on your chosen WiFi network at the IP address shown.

Celebrate.

Reassemble

Reassemble the S31 by putting the two halves back together, putting in the 3 screws, sliding the corner pieces back on, and carefully snapping the gray plastic end piece back on. Be careful to align the gray plastic piece properly; it will likely need a firm push, and maybe a second push for good measure, before it snaps back into place. There are two alignment pegs that prevent it from being installed in the wrong orientation.

One More thing

From the Tasmota configuration menu, select Module and choose Sonoff S31 (41) from the drop-down menu. If you don’t do this, you’ll still have a basic working smart plug, but you won’t be able to see the current power usage and energy totals. I’m not sure how the list is sorted; for me, it was the 11th item from the start of the list. 🤷‍♂️

Final Configuration
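If you’d rather skip the menus, the same module selection can be made from the Tasmota web console (the Console menu item). This is a sketch based on Tasmota’s standard console commands; the module number 41 matches the drop-down entry for the Sonoff S31:

```
Module 41
```

The device will restart after applying the module change, just as it does when saving from the configuration page.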

If you’re going to connect it to Home Assistant with MQTT … well, that’s a topic for another post.
