Compiling a Tree Sitter Grammar to WASM

2 min read

If you want to compile a Tree Sitter grammar to WASM, you might find this Docker Build/BuildKit example helpful. I didn’t want to set up all these tools directly on my machine, so this gives you a single isolated step for the build and its tooling. At the end, it saves the generated WASM file in the directory from which you launched the build (you can change the location by adjusting the --output argument).

You can set the values with build arguments (for example, --build-arg TREE_SITTER_NAME=tree-sitter-sqlite) or change the file directly.

Replace TREE_SITTER_GRAMMAR_GIT_URL with the git URL where the grammar is located (for example, the URL of a tree-sitter-sqlite grammar repository).

Replace TREE_SITTER_NAME with the destination directory name used for the clone. Using the repository name from above, it would be tree-sitter-sqlite.

Then, from the folder where you’ve saved the Dockerfile:

docker buildx build -t tree_sitter_sqlite_wasm  --output . .

After a lengthy download and build process, you’ll end up with a file called:

${TREE_SITTER_NAME}.wasm

For example, it might be:

tree-sitter-sqlite.wasm
Hope this helps!

# docker buildx build -t tree_sitter_sqlite_wasm  --output . .
FROM rust:latest AS tree-sitter

# Build arguments -- override on the command line, e.g.:
#   --build-arg TREE_SITTER_GRAMMAR_GIT_URL=<git URL of the grammar>
#   --build-arg TREE_SITTER_NAME=tree-sitter-sqlite
ARG TREE_SITTER_GRAMMAR_GIT_URL
ARG TREE_SITTER_NAME
# Node.js major version to install (example value)
ARG NODE_VERSION=20

WORKDIR /tree-sitter
# Remove imagemagick due to security advisories in the base image
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
    && apt-get purge -y imagemagick imagemagick-6-common \
    && apt-get install -y git \
    && apt-get install -y curl \
    && apt-get install -y python3 \
    && apt-get install -y cmake

# Install Node.js via the NodeSource setup script
RUN curl -fsSL https://deb.nodesource.com/setup_${NODE_VERSION}.x -o nodesource_setup.sh \
    && bash nodesource_setup.sh \
    && apt-get install -y nodejs

# Rust and Cargo is installed already
RUN cargo install tree-sitter-cli

# Install Emscripten (emsdk), which provides the WASM toolchain
RUN git clone https://github.com/emscripten-core/emsdk.git /em/emsdk
RUN /em/emsdk/emsdk install latest
RUN /em/emsdk/emsdk activate latest
WORKDIR /em/emsdk
RUN chmod +x ./emsdk_env.sh \
    && . ./emsdk_env.sh

# Setting the path via the call doesn't persist, so we need to set it here
ENV PATH="/em/emsdk:/em/emsdk/upstream/emscripten:${PATH}"

# Clone the grammar; generate/build must run inside the grammar directory
WORKDIR /tree-sitter
RUN git clone ${TREE_SITTER_GRAMMAR_GIT_URL} ${TREE_SITTER_NAME}
WORKDIR /tree-sitter/${TREE_SITTER_NAME}
RUN tree-sitter generate
RUN tree-sitter build --wasm

# Without the extra step here, the buildkit copy to the host doesn't work
FROM tree-sitter AS built
# This is a hack to get the wasm file out of the tree-sitter layer
# (ARG must be re-declared in each stage that uses it)
ARG TREE_SITTER_NAME
WORKDIR /built
COPY --from=tree-sitter /tree-sitter/${TREE_SITTER_NAME}/${TREE_SITTER_NAME}.wasm .

FROM scratch
COPY --from=built /built/. .

10 minutes of content in 2 minutes

4 min read


I really appreciate you stopping by and reading my blog!

You might not know that each Epic blog post takes me several hours to write and edit.

If you could help me by using my Amazon affiliate links, it would further encourage me to write these stories for you (and help justify the time spent). As always, the links don't add cost to the purchase you're making, I'll just get a little kickback from Amazon.

I'll occasionally do some posts with recommendations and I've also added a page dedicated to some of my more well-liked things. While you can buy something I've recommended, you can also just jump to Amazon and make a purchase. Thanks again!

I didn’t spend too long on Cadence before switching to another project. But I have one short and very memorable moment from the Epic User Group Meeting in 1995 while still on the Cadence team. Cadence GUI was to be demonstrated for the first time in front of all attendees. We were sitting in a very large room at the Alliant Energy Center complex as the attendance had outgrown the Concourse Hotel space (bigger, but not better).

Three of us sat in the left front row waiting for the main session to start. (It was an awful place to sit, so we weren’t taking up seats that were desirable to customers.) We were mostly just joking around with a bit of tense energy.

Carl was “headlining” though and had to do a presentation before we did our debut demo-dance. He sat down next to us, visibly nervous. He was normally composed, so it seemed out of character from my experiences with him. I don’t remember the exact words, but this captures the moment:

I have about 10 minutes worth of content. I’ve practiced and I’m going to do it in about 2 minutes. —Carl

We laughed, thinking he was joking, but he assured us he was not. As ever-helpful coworkers, we provided no great advice to calm his nerves, as we had none. It was too late anyway.

Bless his heart. He got up in front of the audience, slowed down a little, and went over his prediction by no more than a minute or so. It was RAPID FIRE INFORMATION. It was awesome and hilarious: it was impossible to keep up. 🤯

It took me more than a few years to be far less nervous doing my own presentations.

It’s funny that he gave me advice after a sales presentation … “you know this stuff inside and out. Just talk about it.” And I realized, I did! I didn’t need to write everything down. I knew what I wanted to talk about. If I left something out, I was really the only one who knew. If I ad-libbed a bit of content, again, I was the only one who knew.

While “knowing the material” isn’t enough to get rid of presentation nerves for a lot of people, that little piece of advice helped me a lot. I started to remove a ton of words from my slides after that. If the words were there for me as anything more than a direction/general topic, then I’d delete them. The words and images/diagrams were there for everyone else. Not me. That was a shift for my approach, but it worked so well.

If I didn’t know the material, then I learned it. It shouldn’t be a surprise how much better a presentation can be when the speaker is confident in the material. It doesn’t make them a great speaker, but it certainly helps establish credibility which is far too often lacking in technical presentations.

Additionally, I would think about what questions I’d ask if I were listening to the presentation and prep for those specifically (and modify content accordingly). If someone caught me off-guard with a question that I’d not considered, I’d write it down and not let it bother me that I’d missed something.

“I don’t know, but I’ll find out” is much better than trying to sidestep and fake it.

The Cadence GUI Demo

It went fine as demos go. I recall we had a glitch, but recovered (and we made it clear it was a work-in-progress). Was there thunderous applause after the demo and a standing ovation? Abso-gui-lutely! OK, not so much. A polite applause. (“Clap clap, when is lunch?“) Since it wasn’t a new product, the interest was mostly about new workflows and improvements to scheduling efficiencies that would result.


What’s helped you doing presentations? Or what still bothers you about doing a presentation in an unfamiliar setting? I’ve heard so so so many people over the years say: “I hate doing presentations.” Are you one of them? If you need help with your corporate review or “big pitch” presentation, please consider my 1 or 6 hour consulting services.

A better way to water trees

4 min read

If you buy something from a link, Acorn Talk may earn a commission. See my Affiliate Programs statement.

This is embarrassing. We’ve been using a combination of a few simple devices as a very reliable way to water some trees we planted last summer, and today I decided I’d provide a few details about the products. But when I went to Amazon to grab a link for the primary part of the solution, it’s no longer for sale! On its own, that’s not too surprising, as resellers come and go seemingly every hour on Amazon. This one, however, is because the small company (really, one person) making them has retired and stopped making them. 😒

Over the years, we’ve tried a lot of different ways to water new trees and bushes. So many failed miserably. From those large and small tree watering bags (thanks for the leaks!) to just the plain hose end. The key to watering trees is generally to go slow — so that water can saturate the roots.

The breakthrough for us in convenience and reliability was the discovery (and subsequent purchase) of this: Waterhoop.

The Waterhoop

It’s not complex, but that’s what has worked so well for us. Once connected to a common hose and the spigot is turned on, you control the water volume directly on the hoop (so no walking back and forth to the spigot to get it “just right”). Water drains or sprays (depending on volume) through a series of small holes around the hoop ends. And, that’s it. As it’s flexible, it adjusts to a variety of situations. Having local control is brilliant — it’s such a simple feature that saves time!

As we live in a suburban neighborhood now, the water pressure changes during the day. Before we had this combination, we’d set up a hose at the base of a newly planted tree, set a timer for an hour, get the flow just right … and return an hour later to find that the water was just dribbling out, nothing like we’d originally set it. Now, with the simple flow adjuster at the end, we can be assured that the water amount we want will be the amount an hour later (as we turn the flow up at the spigot far beyond what we actually need as it’s now regulated near the target rather than the source).

While setting a timer on a phone, smartwatch, etc., works, it’s been even more useful to put a mechanical timer at the end of the hose, again right near the target. So, we can set it for 60 minutes, and not be concerned that we’ll over-water if we forget to go immediately out when a timer signals. We’ve had digital timers on the source end, and frankly, they’re more than we needed for this, and not needing a battery has been great too!

Orbit Mechanical Timer

The Orbit Mechanical Watering Hose Timer has performed flawlessly for us. By moving it to the end of the hose, we can quickly set the timer, adjust flow as needed (with the flow adjustment on the Waterhoop), and be done with it.

While these things may seem unnecessary (and truly are), they’ve been a big time saver for us as we were watering weekly, and more importantly, we were able to control the amount of water far more precisely than before.

Overall, I’d totally recommend both of these if you’re watering trees, bushes, etc.

OK, but I know — the Waterhoop isn’t available anymore. I’d picked the Waterhoop because of the very good reviews and the fact that it was made in the USA. Instead, if I couldn’t make one myself (I’d try!), I’d switch to using a soaker hose for trees with a few tweaks. I’d either get rid of the “Y” part or make it a quick connect on one end with a product like the Eden Quick Connect. That way, it would be a snap to remove it from one tree and move it to another. If I wasn’t concerned about a “true” loop, then I’d use a hose end cap. I usually have a few of those around. It’s possible that the drip of these would be slower than I’d need for our soil, so I’d carefully make a series of very small punctures around the hose to increase the drip speed. It probably wouldn’t matter much, but rather than having two potentially fiddly valves on the “Y” adapter, I’d add an independent coupler with flow control.


11 min read



When Cadence GUI entered the Epic stage, the team was provided with a complete copy of Legacy/EpicCare to do with as we saw fit. There was zero process in place for any formal code sharing at the time, so we stripped our copy down to the bare bones leaving only a small communication engine and a developer hub Visual Basic Form (that I plan on talking about later).

Options for input and output with a socket from the MUMPS server were very limited back when EpicCare was started. While there were built-in functions for communicating to external devices, they were focused on brief communications and had very limited control options. Acting as a “server” for a long running persistent communication channel was far more challenging. Further, the support across vendors for options wasn’t consistent.

Faced with this communication challenge, the Legacy team built a resourceful alternative. All MUMPS server hosts supported connections via a (now thankfully waning) protocol called Telnet. The Telnet protocol shows its age these days and isn’t commonly used (and was phased out at Epic decades ago). But in 1992, it was a common service that was available on all operating system platforms Epic and customers were using.

As a brief aside: since the Telnet protocol was an operating system service, each OS (and version!) had its fair share of quirks. It was fortunately uncommon, but we would encounter situations where a version of an OS, for example HP/UX, would improperly handle a documented Telnet command. Of course, as bugs often go, it was only in certain circumstances and combinations, which made troubleshooting cumbersome.

MUMPS_Command was Born

The Legacy team exposed the functionality in Visual Basic by creating a function called MUMPS_Command. When the Visual Basic application launched, it used a dubiously secured configuration file, paired with a decoding key hard-coded in the application, to make a Telnet connection to the MUMPS server host OS.

It would then send … locally stored credentials … (sigh, yes, that’s how it worked back then). Upon a successful login and some scripting magic in the OS, a MUMPS job (process) was started immediately and began to execute a specific MUMPS routine. This was a “captive” session.

If someone knew the user name and password for this special MUMPS_Command user, they’d be launched directly into the Epic created protocol for communication from client to server.

Some of you may have used a Telnet client to connect to an OS service for troubleshooting.

> TELNET localhost 5555
GET / HTTP/1.0

It definitely doesn’t put the “fun” back in functional though.

After a brief exchange of a control sequence to verify the connection and some important settings, MUMPS_Command was ready for duty. Although “doody”(💩) may be more appropriate for the early versions.

How did it work?

I cannot say that the early versions of MUMPS_Command were robust, or secure, or reliable. They weren’t. Telnet isn’t secure. The protocol and services are terrible and have no baked-in security. And yet at the time, Telnet was used most often for connecting to a remote host. Using Telnet back in the mid 1990s wasn’t unusual, so Epic using the protocol as a connection didn’t raise any general concern across customers. (Live encryption of this data on the wire would have been unheard of back then given it was running on a secured Intranet and the massive increase in computing power required made it a non-starter).

The core idea was that a captive MUMPS job/process was either running MUMPS code or waiting for input from the end user via a terminal or pseudo-terminal (teletype TTY or pseudo-teletype PTY). Given that the Visual Basic clients (Cadence and EpicCare) were remote connections to this captive process, the MUMPS code was in an infinite loop.

WAIT ;  
    READ input:TIMEOUT

It’s a slight oversimplification of what the code looked like, but it’s not far off. (It’s not done yet, and I also used non-abbreviated MUMPS instruction names.)

The Visual Basic client, over the Telnet protocol, would “send” a request to the captive MUMPS session by “typing” it and sending a carriage return (0x0D). The connected MUMPS job would receive the text and store it in the variable input as shown above. Using the syntax shown above, the MUMPS READ command only ends when it receives a line of input (ended by a ‘terminator’ character, which included the carriage return).

Now that the input has the request sent from the Visual Basic application, what’s next? The team decided on what became an unfortunate choice a number of years later. It was effective and easy. But it was fragile. And it offered no reasonable security (especially in the early versions).

MUMPS has a wonderful command/instruction called Xecute (think of it as eXecute). The string provided to this command can be any valid MUMPS expression and is immediately executed. Many interpreted languages have a similar feature; JavaScript has eval, for example. eval has been used many times over the years for perfectly fine JavaScript browser code, humorous hacks, and too many noteworthy nefarious reasons.

Remember, this Xecute command allows execution of any valid expression.

Some of you may be shuddering already. Good!

WAIT ;  
    READ input:TIMEOUT
    IF input="**END**" GOTO EXIT
    XECUTE input 
    WRITE !,"**ENDED**"
    GOTO WAIT

(It’s weird for me to type out the full command XECUTE, as I don’t know that in my 25 years of Epic I ever used anything but the abbreviated X!)

The loop reads the input, checks to see if the client connection wants to end the connection, and if not, executes the expression that was passed.
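To make the shape of that loop concrete, here’s a tiny Python analogy. Python’s exec() stands in for MUMPS XECUTE, and a list of strings stands in for lines read off the Telnet session; every name here is invented for illustration, not Epic’s actual code:

```python
# Toy analogy of the MUMPS_Command WAIT loop.
def captive_loop(requests):
    responses = []
    for line in requests:               # READ input:TIMEOUT
        if line == "**END**":           # client asked to close the channel
            break
        env = {}
        exec(line, {}, env)             # XECUTE input -- just as dangerous here!
        responses.append(env.get("X"))  # W X ; value sent back to the client
    return responses

# Each request stores its result in X, mirroring the protocol.
print(captive_loop(['X = 2 + 3', 'X = "hello".upper()', "**END**"]))
# → [5, 'HELLO']
```

The same property holds as in the real thing: whatever the "client" sends gets executed, which is exactly the problem discussed below.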

As requests need a response, the expression sent needed to store the result in the variable X.

S X=$$getFut^SCHED1("20240705",53711)

The Xecute command parsed the expression and executed it. In the case above, the code is calling into a second MUMPS routine to get future appointments for a specific patient identifier from a specific date.

Upon evaluating the expression, the MUMPS_Command WAIT loop would write to the Telnet session whatever value was stored in the variable X.

WAIT ;  
    READ input:TIMEOUT
    IF input="**END**" GOTO EXIT
    XECUTE input
    W X  ; sent back to client
    WRITE !,"**ENDED**"
    GOTO WAIT


MUMPS for much of its existence has had extremely short string storage/length limits in memory and when stored in MUMPS Globals. Because of this limitation, and because of how the Epic-invented protocol worked at the time, the Visual Basic developer had to make certain that MUMPS_Command requests did not exceed the string limit (this number was passed to the client as part of the initial handshake).

Long requests and responses had to be broken into segments. It was very common to need to send a “request # of N” as part of a series of MUMPS_Command calls. A free text field on the client that was longer than allowed by MUMPS required that the developer break the string into pieces (pun intended) and send them in chunks. Reverse that for the server sending that same field to the client. Learning to return an “is there more work to do” flag was a common pattern on client and server.
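That chunking pattern can be sketched in Python. The size limit and the tuple shape here are illustrative only; the real limit was handed to the client during the initial handshake:

```python
# Illustrative "request N of M" chunking for strings longer than the
# server's maximum line length.
def chunk_request(text, limit):
    pieces = [text[i:i + limit] for i in range(0, len(text), limit)]
    total = len(pieces)
    # Tag each piece so the receiver knows whether more work remains.
    return [(n + 1, total, piece) for n, piece in enumerate(pieces)]

for n, total, piece in chunk_request("A long free-text field from the client", 10):
    print(f"chunk {n} of {total}: {piece!r}")
```

The receiver simply concatenates pieces until N equals M, the same “is there more work to do” bookkeeping described above.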

Line terminator characters had to be carefully transformed before being sent over Telnet to the server. If an errant 0x0D (carriage return) was in the data, it would cause the XECUTE to begin evaluation of the expression immediately. Then … the whole communication channel was broken, as the client and the server protocol would be mismatched — the protocol was a simple state machine on both ends, either waiting or receiving. There was no way for an out-of-band communication to be handled properly.

Line feeds and carriage returns were transformed (or $TRanslated in MUMPS) to non-line-terminating characters 0x01 and 0x02. Occasionally, those would be the source of weird Telnet service issues over the years. 😒
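In Python, that $TRANSLATE step amounts to a pair of translation tables, using the same 0x01/0x02 stand-ins described above:

```python
# Swap line terminators for non-terminating bytes before sending over
# Telnet, and swap them back on the receiving side.
TO_WIRE = str.maketrans({"\r": "\x01", "\n": "\x02"})
FROM_WIRE = str.maketrans({"\x01": "\r", "\x02": "\n"})

def encode_for_wire(text):
    return text.translate(TO_WIRE)

def decode_from_wire(text):
    return text.translate(FROM_WIRE)

field = "line one\r\nline two"
wire = encode_for_wire(field)
assert "\r" not in wire and "\n" not in wire  # nothing to end the READ early
assert decode_from_wire(wire) == field        # round-trips losslessly
```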

Little Bobby Tables

If you’ve done any database development in the last decade, you’ve likely seen this:

Exploits of a Mom, from XKCD

Well, MUMPS_Command had a problem that was an extension of the XECUTE command. The XECUTE command evaluated and executed all valid code.

If the client sent an unsanitized string, like a patient name with a value of: JORDAN, MICHAEL ") K ^ER S %1=(""

This would get sent to the server (the surrounding call here is hypothetical, reconstructed for illustration):

S X=$$save^DEMOG("JORDAN, MICHAEL ") K ^ER S %1=(""")

Rather than the intended:

S X=$$save^DEMOG("JORDAN, MICHAEL")
Needless to say, there was some non-Midwest nice words used in the offices for a few days as developers scrambled to fix the issue.

It really wasn’t ideal that a user of a client application could, say, delete an entire global structure: K ^GLOBAL. That’s the MUMPS syntax for deleting an entire GLOBAL (KILL ^GLOBAL). This would be similar to a SQL statement dropping an entire table (a huge disaster).

I’m not aware of any customer-related security issue tied to this design. Most noticeably, the lack of proper quote handling caused application stability issues.

A little “quote” coding challenge emerged: verifying that there were no unexpected quotes and that every Xecute ran with an expression that had been sanitized properly.
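The usual fix amounts to escaping quotes the MUMPS way, by doubling them, before a value lands inside an expression. A rough Python illustration (the function and the $$save^DEMOG call name are hypothetical):

```python
# MUMPS escapes a literal double quote inside a string literal by
# doubling it. Quoting the value this way before embedding it in an
# expression keeps injected text inert.
def mumps_quote(value):
    return '"' + value.replace('"', '""') + '"'

payload = 'JORDAN, MICHAEL ") K ^ER S %1=(""'
expr = "S X=$$save^DEMOG(" + mumps_quote(payload) + ")"
print(expr)  # the whole payload is now a single string argument
```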


Epic was more frugal in the 1990s than it has been since. When the company needed to spend money, it did, but reluctantly, and not without putting up a licensing fee fight. For many years, Epic relied on a third-party vendor to provide a Visual Basic compatible Telnet client (weird, since Epic has so often been considered a “We Build It Here” software factory).

While I believe there were two vendors that had been used, the most used was a company called Distinct. The product was called … Distinct Telnet. (HOLY MIDWEST COWS READER! Distinct exists and still sells an ActiveX version via what seems to be their FrontPage 98 web front end using ASP web pages). Frankly, it made little financial sense for Epic to write that code. Visual Basic wasn’t fast enough yet to make it viable (as it wasn’t compiled to native machine code in early versions), and no one had experience creating a Visual Basic 2/3 component in C/C++ for a communication protocol, let alone Telnet.

Eventually Epic reached a licensing deal with the authors of the Telnet component that seemed high at the time. The real nuisance was that it was a per-install license for customers and Epic internal use rather than a site-license. Epic software wasn’t licensed “per-machine” so tracking client devices was unnatural.

Hello Epic Telnet!

Years later, when I was the team lead of Foundations, we embarked on creating our own Telnet implementation. The costs of the Distinct Telnet component had risen and were difficult to justify to all customers, and, more importantly, it wasn’t as fast as we thought it should be. Over the course of a few months, a talented developer created “Epic Telnet.” We tested it internally (albeit slowly and with a significant amount of internal trepidation), then began to cautiously install and use it at new customers. While my team and I were more optimistic and pragmatic, I know others felt this new Epic Telnet was far too risky to implement and far too outside of the norm for us to be developing. I understood their apprehension, but the deployments went well, and we demonstrated that the new communication system was more reliable and faster than it had been with Distinct Telnet (not due to any fault in Distinct Telnet, but because we could tune our code to our exact needs). Epic Telnet removed a cost and several pain points related to the older component. Strangely, it took YEARS for the older Telnet functionality to be fully removed. It was a slow process, not entirely atypical for healthcare IT, I suppose, across the spectrum of customers and cultures.

A Positive Connection

A huge amount of work was performed over the years without issue with this simple protocol created at Epic. Sometimes simple to start is all that is needed, and for Epic, it was a great early fit. It had growing pains over the years though.

We did some other cool things over the years with this Telnet based communication system, but this post has gone long …, so for another time!

MUMPS_Command becomes M_Command becomes RPC_Command

In the mid-1990s, using the name “MUMPS” was becoming less tolerated by potential customers, and the ISO standard had approved using M as an alternative. For reasons that are still amusing to me, Carl “strongly requested” that all uses of MUMPS_Command be renamed to M_Command. While customers weren’t encountering the name (unless they were browsing the source code, or doing their own custom development), dropping the name meant it was less commonly spoken and less used during every-day conversation at Epic. M was the new better name, for reasons. It later became RPC_Command, fully dropping the M in favor of Remote Procedure Calls, as that sounded more in line with terminology and techniques that were gaining favor at the time (mid-late 1990s).

I’m going to stick with MUMPS for a while longer though, unless Carl pays me to stop. 🤓

The App is Too Fast!

7 min read

We’d just finished a review session of the latest development of Cadence GUI with Judy. The feedback was generally positive except for one thing:

When switching between future and past appointments: “Slow that down somehow to make it more obvious what’s happening.”

We’d just spent a month adding more features to the new graphical user interface for Cadence, Epic’s outpatient scheduling application. One of the goals we’d set for the project was that a workflow would be as fast as the existing terminal application. Sincerely, that was a lofty goal in many ways, especially as we were in very uncharted territory and also wanted to add a number of often-requested features to common workflows.

Of course, the existing terminal-based application couldn’t be modified to make Cadence GUI work (especially its workflows or speed). A lot of developer hours were put into how to maintain these two distinct applications that needed to share common code. If you’ve ever worked on a terminal/console application that has input and output, it may not surprise you that these types of operations are …, frankly, everywhere. No architectural dig was necessary: IO code was spread throughout the code base, which was frustrating as we began to adapt it. It wasn’t a bug either — it was just the way code was written then. Sometimes we’d find them early in development and sometimes the elusive buggers would be uncovered in a QA pass (obscure configurations often aided in their discovery). It would have been an unnecessary abstraction to build an application in the late 1980s with IO that could be directed at anything but a terminal. The application performance would have suffered for zero gain in the user experience. MUMPS code needed to be tight and efficient.

I’m planning a blog post specifically about some of the challenges we faced regarding this type of work and the communication channel, so, I’ll skip ahead for now.

One of the first screens we wanted to show was a patient summary view. The screen would show a summary of the patient demographics, DNK appointment statistics and upcoming appointments. It would serve as a “patient dashboard” and launching point to other application functionality. This experience was not similar to the one that the EpicCare team had been building. While we shared some general UI patterns (big row of buttons floating at the top of the screen), they’d made some choices that wouldn’t work well for a Cadence user (unsurprisingly, EpicCare Ambulatory needed to make quite a few design and architectural changes years later to accommodate improved workflows and capabilities). Specifically in this case (and in contrast to EpicCare at the time), a Cadence scheduler might need to open more than one patient at a time.

Cadence Patient Home

The app workflow from patient selection to launching the review screen was snappy. In a head-to-head, side-by-side comparison, the terminal UI was faster, but it wasn’t offering as much utility. The additional features were ones that had been requested but weren’t available directly and consistently in the terminal app. We were happy with the results.

While obviously nervous about the demonstration to Judy, we were reasonably confident that it would show well.

And it did go well, except for feedback about switching between future and past appointments.

It wasn’t a lot of data and the request to fetch past appointments was quick. When the Cadence scheduler would click on the “past” appointments tab/label, the new list would pop in what seemed like instantly (for back then — it was 1994, so the common experience was that things would be a bit sluggish).

“Can you slow that down?” — Judy

The team lead wisely, after a few rounds of “huh?”, said we’d look into some options.

If you weren’t doing application development for Windows 3.11, you may have already “solved” the problem we had with many modern solutions.

  • Animation
  • Colors or Opacity
  • Fonts
  • Layout changes

Windows 3.11 Development Issues

Here’s what wasn’t available to us:

  • We had 16 colors available generally, or a dithered 256. Opacity was either 100% or 0%.
  • There was no animation framework (and frankly at the FPS of a common computer back then, it would have been annoying)
  • We were limited to the fonts installed on the OS — and those weren’t many. The standard font used in VB at the time was MS Sans Serif. It was perfectly ordinary. Sometimes we used bold. Other times, not. Purely using it as an indicator wasn’t great.

The Limitless Color Palette

The Joy of Colors, Visual Basic 3.0 Style

Let me zoom that for you. It may not be obvious yet though …

The Joy of Colors, Visual Basic 3.0 Style, zoomed 1

That may have not been enough, so one last time:

The Joy of Colors, Visual Basic 3.0 Style, zoomed 2

As you can see, all but the middle red color are dithered. While we did use dithered colors occasionally, we did try to avoid situations where they were used with text on top as it was too hard to read. Dithered colors looked … odd … generally.

We essentially had 16 solid colors, pre-chosen by Microsoft, to use where we could be assured they’d look OK to most folks.


Visual Basic 3.0 applications were single threaded. If the application was animating, it wasn’t doing other things for the user. There wasn’t a graphics processor that was able to offload animations … there just wasn’t a good way to do animations that were effective.


We tried quite a few different things before the next demonstration with Judy. At the time, there was a silly way to animate a GIF file — but it was all on the main application thread. For a brief period, we had a build of Cadence that would show a silly little dancing bear that would pop up and dance when switching between future and past appointments.

Needless to say, that wasn’t an option. I recall us floating the idea to her (with something other than a bear!). But, we didn’t like it either because while it was animating, the application was blocked. That wasn’t a great way to make the application “as fast” as Cadence text/terminal.

Solved with this one Hack

Oh, my head hurts that this was the primary solution that we used for quite a while to satisfy the issue:


Sub btnPast_Click()
    fraFuture.Visible = False
    DoEvents
    fraPast.Visible = True
End Sub

DoEvents. A necessary evil in many Visual Basic applications over the decades. When something didn’t quite work as expected, developers would often turn to DoEvents as a solution without fully appreciating that there were grave risks to its use. The core functionality of DoEvents was to allow the application to process events/messages in the message queue for the application. The events were in order, but unless the developer had planned for them, it could lead to disastrous results.

Internally, hiding the fraFuture by using the visible property would pop a WM_PAINT message onto the message queue for the application (along with the dirty region). When application code wasn’t running, Windows would process the queue (to empty), including WM_PAINTs (which were in the queue, but combined to prevent cascading updates). As I mentioned, this is a single threaded application, and processing the queue only happens when there isn’t Visual Basic code executing (VB handled this automatically).

When using DoEvents though, it would force the queue to be processed. So, the screen would update immediately (along generally with anything else that may have been queued).

So, what we’d done: introduce a flicker.

  • Hide
  • Repaint
  • Show
  • Repaint (which happened naturally by showing the new frame)
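The mechanics behind that flicker can be sketched as a toy single-threaded message queue in Python. Everything here is an invented analogy (Windows and VB obviously did far more), but it shows why draining the queue mid-handler repaints the screen early:

```python
# Toy model of a single-threaded UI message queue. Hiding a frame
# queues a paint; do_events() drains the queue immediately, the way
# VB's DoEvents forced pending repaints to happen mid-handler.
queue = []
painted = []

def hide(frame):
    queue.append(f"{frame} hidden")

def show(frame):
    queue.append(f"{frame} shown")

def do_events():
    while queue:               # process everything pending, in order
        painted.append(queue.pop(0))

hide("fraFuture")
do_events()   # screen repaints now -> the user sees the blank flicker
show("fraPast")
do_events()   # the normal end-of-handler repaint
print(painted)
# → ['fraFuture hidden', 'fraPast shown']
```

Without the first do_events() call, both paints would coalesce into one update at the end, which is exactly the “too fast” behavior Judy objected to.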

DoEvents was the source of a number of issues over the years as it “solved” problems — and created dozens of new problems.

Developers frequently neglected to handle situations where the user had interacted with the application during a busy state. DoEvents would allow those queued requests (like typing or clicking the mouse) to be processed — and happen immediately, even though the normally sequential code hadn’t returned.

Sub btnPast_Click()
    fraFuture.Visible = False
    DoEvents ☠️ 
    fraPast.Visible = True
End Sub

☠️ DoEvents: Process the queue, including user-originated events. Had the user clicked a button that closed the form they were using? What if something else made fraFuture invisible? Or maybe they clicked back on the other tab while it was busy or … INSTABILITY!!


She was happy with the result. We weren’t, but moved forward regardless. There was lots more to do.


Do you know that acronym? It was an important statistic for schedulers.