Taming huge collections of DOM nodes

How to work with thousands of DOM nodes using pure JavaScript and DOM API

Hajime Yamasaki Vukelic
codeburst

--

DOM is slow? How much more dramatic can you get?

In an almost biblical revelation, we’ve learned that DOM manipulation is slow… ten years ago. It may look like things are so much better today, and they certainly are in the common cases. But today we do more things with the DOM than 10 years ago, and we are faced with new challenges. There are still things you don’t get to be careless with.

Thousands of DOM nodes sounds like a lot, but I don’t think it’s that rare. If you’re thinking you may never need to worry about this problem, you may want to think twice. We had tables with just under a hundred rows that had 20-something DOM nodes per row. A hundred times 20 is 2000 DOM nodes. A lot of the DOM nodes come from fluff — wrapper elements that exist to support CSS cosmetics. But a lot of them are the meat.

Whatever framework you use, if you take a naive approach and just use the APIs you first learned while doing a Hello World application, you are going to run into performance issues as soon as you hit a case like the one described above. It’s almost guaranteed to happen. I know. I’ve been there many times myself, staring blankly at the profiler, wanting to die… For some reason, it keeps coming back, too.

We’d been using Vue.js for a while when we noticed that the rendering performance of one of our data tables was getting pretty horrible on low-power convertibles. It had been sneaking up on us over time, and it finally caught up with us. For one reason or another, none of the recommended performance tweaks (and Vue does not have a whole lot of those) worked.

The decision was made to attempt a pure JavaScript port. For the past week or so, I’ve been preparing for this project, making small tests to find the optimal solution. In this article, I will try to share what I’ve learned thus far without boring you to death with implementation details. It will be a condensed version, though, so if you want to learn more, feel free to get in touch.

For the purposes of this article, when I say ‘performance’, I really mean responsiveness — how fast the application responds to your actions. I specifically do not mean fluidity, or frame rate in other words.

TL;DR

If you’re in a hurry (and who isn’t these days!), here are my conclusions in no particular order:

  • If you are looking for performance, don’t use frameworks. Period.
  • At the end of the day, DOM is slow.
  • Repaints and reflows are even slower.
  • Whatever performance you get out of your app, repaints and reflows are still going to be the last remaining bottleneck.
  • Keep the number of DOM nodes down.
  • Cache created DOM nodes, and use them as a pool of pre-assembled elements you can put back in the page as needed.
  • Logging the timings in IE/Edge console is unreliable because the developer tools have a noticeable performance hit.
  • Measure! Always measure performance first, then only fix the issues you’ve reliably identified.

You will find the code for the final test here (logs timings in console).

Initial testing

I started off by creating a simple benchmark. I used Vue, React, Snabbdom, and pure JS to build a table with 1000 random rows, around 4 DOM nodes per row. In Chrome on an Intel Core i7 dual-core CPU, the average time to complete the DOM operations (not including the paint/reflow) was as follows:

  • Vanilla JS (innerHTML): ~60ms
  • Vanilla JS (createElement()): ~70ms / 1.16x slower
  • Snabbdom 0.7.1: ~100ms / 1.66x slower
  • Vue 2.5: ~160ms / 2.66x slower
  • React 16.2.0 (production build): ~370ms / 6.16x slower

I’ve been working with Virtual DOM implementations for the past year and a half. It was difficult to admit that I just cannot get anywhere near pure JS performance using any of them. The numbers were clear, though.

An interesting thing to note is that, whatever you (don’t) use, the reflows and repaints are still going to take a similar amount of time. For the initial render, you won’t notice much of a difference between browsers and implementations because the painting phase takes much longer than anything else. We are talking about a lot of time for this number of DOM nodes. Therefore, if by performance you mean high frame rate, there is no other way than to keep the number of DOM nodes on the page low, and to avoid adding a whole bunch of them at once.

A more fully-featured test app

I decided to write a more realistic example app to see what kind of performance we can get out of using no frameworks in different update scenarios. I won’t bore you with every little detail. I’m sure you can arrive at more or less the same conclusions without my guidance. I will, however, discuss in more detail one of the techniques I’ve discovered during this trial.

First off, the requirements:

  • Render a configurable number of rows (1000 by default) of randomly generated data, a table of products with ID, type, make, and price columns.
  • Filter the table according to type (with select list, refresh on change)
  • Filter the table according to maximum price (with input box, real-time)
  • Add fake currency conversion (show prices in EUR by default, convert to USD on click, refresh as soon as user clicks)
  • Have no external dependencies
  • Make it as fast as possible

Now let me get some short notes out of the way before we get back to those details I wanted to discuss.

Don’t optimize your JavaScript

I’m not saying that you lack creativity to kill performance without touching the DOM, nor that non-DOM performance cannot be improved. What I am saying is that the pure data processing in your JavaScript code is almost never going to be the performance bottleneck even if it’s quite slow. I write crazy slow code from time to time. I’ve just learned that it doesn’t matter as much as I (would like to) think.

Whether you are using a framework or not, when you see that your application is getting slow, open the profiler. Look for excessive repaints and reflows or massive waves of DOM manipulation. These are the common bottlenecks.

Clearing the contents of a DOM node

Clearing a large number of DOM nodes can be done fast using node.textContent = '' or node.innerHTML = ''. Either solution works in IE11+ and all modern browsers at their latest versions. (And no, IE11 is not a modern browser.)

Creating DOM nodes

Creating DOM nodes using document.createElement() and adding them to the DOM tree is almost as fast as assigning HTML strings to the node’s innerHTML property. Furthermore, if you need to add event listeners while you are rendering the DOM nodes, using innerHTML is slower because you need to select the element later to add the event listeners.
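To illustrate the event listener point, here is a rough sketch of the difference (row and onRemove are placeholders, not code from the test app):

// With createElement(), the listener is attached while the node is being built
const button = document.createElement('button')
button.className = 'remove'
button.textContent = 'Remove'
button.addEventListener('click', onRemove)
row.appendChild(button)

// With innerHTML, the element has to be selected again before it can be wired up
row.innerHTML = '<button class="remove">Remove</button>'
row.querySelector('.remove').addEventListener('click', onRemove)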

Manipulating DOM nodes

Manipulating DOM nodes (e.g., text contents, attributes, etc.) is faster than re-creating a node with modified attributes/properties. This one is kind of obvious, but it’s still worth keeping in mind. What’s less obvious is that this is orders of magnitude faster than re-creating the nodes.
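A quick sketch of the contrast (row, priceCell, and buildRow are hypothetical stand-ins):

// In-place update (fast): change only what differs on the existing node
priceCell.textContent = 'USD 9.99'
row.className = 'product on-sale'

// Re-creation (much slower): build a brand new node and swap it in
const freshRow = buildRow(product)
row.parentNode.replaceChild(freshRow, row)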

Maintaining references to DOM nodes

Once created, you can maintain a reference to DOM nodes regardless of whether it is attached to a node in the page or not. As you will see a bit later, this is the key for getting great update performance improvements.
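For instance (a sketch with a hypothetical list container, not code from the test app), a node remains usable through its reference even after it has been removed from the page:

const row = document.createElement('tr')
list.appendChild(row)

// ...later, take it off the page but keep the reference around
list.removeChild(row)
row.className = 'updated-while-detached' // still works on the detached node

// ...and put the very same node back, no re-creation needed
list.appendChild(row)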

Avoid wrapper elements when you can

Most people understand that adding thousands of DOM nodes at once will hurt performance if not outright kill it. What some might be missing is that adding a DOM node here and there can still add up to thousands of DOM nodes.

If you are writing components, and you use the component in a hundred different places, adding just one wrapper element to that component will mean a hundred additional DOM nodes in your view. Be as careful about adding wrappers as you are about adding complex structures. First look at how you can solve the problem using CSS alone (pseudo-elements and such).

Keeping a leash on your data

Even when using pure JavaScript, in my experience it’s good practice to manage your data in a store that emits events when updated (observable of sorts). This makes things much more streamlined.

I’ve implemented a store that emits both the updated state and the old state in its message so that I can explicitly test whether some piece of data had changed. Since DOM is ultimately very slow compared to the rest of your application, having a way of determining whether you want to perform a DOM operation or not is a must.
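The store itself does not have to be fancy. Here is a minimal sketch of the idea (the exact API is illustrative only, and updatePrices is a hypothetical render function):

const createStore = initialState => {
  let state = initialState
  const listeners = []
  return {
    getState: () => state,
    subscribe: listener => listeners.push(listener),
    update: patch => {
      const oldState = state
      state = Object.assign({}, state, patch)
      // emit both states so subscribers can decide whether any DOM work is needed
      listeners.forEach(listener => listener(state, oldState))
    },
  }
}

// only touch the DOM when the piece of data we care about has actually changed
const store = createStore({currency: 'EUR', products: []})
store.subscribe((state, oldState) => {
  if (state.currency !== oldState.currency) updatePrices(state)
})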

If you want to take the data thing to the next level, you may be interested in what S.js has to offer. It was a bit too much for me, but it may just click for you.

Using JSX

Even with bare metal JS apps, JSX comes in quite handy. In my trials, I’ve made a hyperscript-style JSX helper in some 30 lines of code.

Although the idea was to keep it as simple as possible, it does have a couple of bells and whistles for convenience. The gist of it is that it creates the elements using document.createElement(), sets HTML attributes on it, and appends any children to it.
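A minimal sketch of such a helper might look something like this (an approximation of the idea rather than the exact 30 lines):

const toNode = child =>
  child instanceof Node ? child : document.createTextNode(child)

const h = (tag, attrs, ...children) => {
  const el = document.createElement(tag)
  Object.keys(attrs || {}).forEach(name => {
    el.setAttribute(name, attrs[name]) // HTML attributes, not DOM properties
  })
  // children can be nodes, strings/numbers, or arrays of those
  children.forEach(child => {
    if (Array.isArray(child)) {
      child.forEach(c => el.appendChild(toNode(c)))
    } else if (child != null) {
      el.appendChild(toNode(child))
    }
  })
  return el
}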

In its raw form, it’s used like this:

h('div', {class: 'product'}, [
  h('span', {class: 'id'}, product.id),
  h('span', {class: 'type'}, product.type),
])

If you are using Babel with the react preset, you can use the @jsx h pragma to use JSX with the h() function:

/** @jsx h */
const renderProduct = product => (
  <div class="product">
    <span class="id">{product.id}</span>
    <span class="type">{product.type}</span>
  </div>
)

If you want something with more stars on GitHub, there’s HyperScript, which does more or less the same thing, but with more code and more features. I just like to keep my stuff as simple as possible. HyperScript sets DOM node properties rather than attributes. I might do the same in production, but this got the job done for my tests.

Updating a large collection of nodes

And now for the main dish. Most of the small bits and pieces were pretty easy to get right the first time. Sure, the initial rendering performance was great, but what good is an app if you cannot update anything? (News flash: it’s no good.) Having a few thousand DOM nodes to update can be quite slow, though.

My initial naive attempt was to blow up and re-create the table from scratch each time something changed. It resulted in a disappointingly slow implementation. It went something like this:

list.textContent = ''
renderRows(state).forEach(node => {
  list.appendChild(node)
})

Naively clearing the row nodes and then re-creating them took almost twice the time of the initial render, especially if the new set of rows was similar in size to the initial one.

After a bit of trial and error, the final solution was to keep the rendered nodes cached.

The implementation of the product list is such that you can sometimes show all of the rows, and sometimes only a subset. Regardless of what you are showing at any given moment, you first render all the rows on the first view and cache those.

The products have an ID. I used those to identify the associated nodes, and create an object that maps the IDs to the nodes.

const productRefs = {}
state.products.forEach(product => {
  productRefs[product.id] = renderProduct(state.currency, product)
})

I only needed to do this once when the entire table is rendered initially. From then on, I just used the pre-rendered nodes, so subsequent updates are not wasting CPU cycles on node creation. Interestingly, this little bit of extra work had absolutely no perceivable impact on the performance. Initial render times were still averaging about the same as without it.

Next, I created a list of nodes that will be in the table. First I generate the list of products, and then map their IDs to the nodes. It’s a relatively fast operation even though it iterates over the products multiple times. The operations in the next snippet take less than a millisecond on my development machine:

const finalList = finalizeProductList(state)
const products = finalList.map(product => productRefs[product.id])

Now when I want to update the table, I only need to re-run the last two lines, and replace the product list:

list.textContent = ''
products.forEach(node => {
  list.appendChild(node)
})

I can also do all of that in one go, but that’s micro-optimization with no noticeable effect.

This change makes a big performance difference compared to the original naive approach: roughly three times faster, from around 25ms down to about 8ms to update the entire table in Chrome on my machine.

I’ve seen this implementation being called ‘keyed’. As far as I know, it has nothing in common with how key properties are used in Virtual DOM implementations. There is at least one implementation in pure JavaScript, which is where I got the idea from. You can see it in krausest’s js-framework-benchmark repo. I just call it a cached node index because it sounds better to me. 😁

To my surprise, manipulating the cached nodes in place turns out to be cheap. As long as you are not creating new nodes, the existing nodes, whether attached to a node that’s visible on the page or not, are going to be rather fast to update.

For example, to update only the price column of the table, I used this code:

state.products.forEach(product => {
  const node = productRefs[product.id]
  const price = priceToCurrency(state.currency, product.price)
  node.lastChild.textContent = `${state.currency} ${price.toFixed(2)}`
})

It updates all rows, even the ones that are hidden due to filter settings. This only took a few milliseconds across 1000 rows.

How well does this all scale? Increasing the number of rows to something like 5000, we get a linear increase in the time it takes to render and manipulate them. That means everything is roughly five times slower. I imagine that performance mostly correlates with the number of DOM nodes you have to work with, regardless of the operations you perform on them.

Conclusion

It’s worth repeating: no matter what you do, keep the number of nodes down. That’s the fastest way to performance. Regardless of your skill level, you can only do so much before you are out of options when faced with an army of angry nodes.

1000 rows in the test is clearly excessive, in the sense that it can get pretty slow even on desktop machines. Having a pager or tabs to limit the total number of rows shown to something more manageable, like, say, 100 per page, will dramatically improve the responsiveness of your app.

Measure! Measuring performance is key. You don’t know what you are optimizing until you know what’s slow. Use console.time() and console.timeEnd() calls to log the timings of various code paths. Don’t forget to use the browser’s profiler, as it also shows things beyond your app that might be affecting the perceived slowness. Note that in IE and Edge these timings are pretty much useless, because the developer tools in those browsers take a big toll on your application’s performance. In that case, you are better off logging things into the page (e.g., create a textarea and modify its value).
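For the console.time() approach, the pattern is as simple as this (updateTable is a stand-in for whatever code path you are timing):

console.time('update table')
updateTable(state)
console.timeEnd('update table') // prints the elapsed time, e.g. "update table: 8.2ms"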

Frameworks are not built for performance. They are built for developer experience. Performance usually comes as an afterthought. Even if you see a framework that makes other frameworks eat dust, it’s just one framework against another. You really ought to consider the performance difference between that framework and Vanilla JS. You’ll quickly realize that frameworks are almost always slower than well-crafted pure JS solutions (save for very rare exceptions like S.js, where developer experience clearly took a back seat for some of us mere mortals).

Frameworks are not just built for experience, either. They are built for the specific experience their authors were looking for. It may fit your needs just fine, or it may not, even if you are ready to sacrifice performance for it.

Does this mean you trade ergonomics for performance when implementing your app without a framework? Not quite. In just under a hundred lines of code, you can write your own state store, pub-sub system, and similar tools that greatly improve the experience. In fact, I imagine that’s how most frameworks got started. It’s just a matter of how closely that experience is tailored to the way you go about developing the app and to the needs of your app. In addition, you have much more freedom to optimize the critical parts of your application without interference from the usually opinionated framework APIs.

Finally, even if you don’t plan on going Vanilla, it’s a good starting point. You learn more about the challenges you face developing the app. Eventually, you’ll be able to make a more educated guess as to whether some framework will work for you or not. I’ve made the mistake of doing it the other way around, but from this day on, I will most definitely start all of my projects Vanilla first.

Lastly, direct DOM manipulation and frameworks are not always mutually exclusive. In many frameworks, you can directly manipulate the DOM as needed. Even if you don’t do a complete rewrite like we are about to, you can get a lot more out of your existing code (it’s just a bit dirty).
