Polyfills, transpilation, and browser support

Dominic Fraser
Published in codeburst
17 min read · Dec 9, 2020

Jump to ‘Tooling’ to dive straight into the detail, or start from the beginning to learn about the problem.

- The web is always changing
- Who to support?
- What is necessary to support older browsers?
- New JavaScript Features
- Tooling
- Browserslist
- Transpilation
- Polyfilling
- Lodash
- eslint-plugin-compat
- It's not just JavaScript - Compatible CSS
- What is the ideal setup?
- Considerations
- The ideal
- The ideal analyzed
- The next best alternative

The web is always changing

The technology that makes up the web evolves over time, new replacing the old and the old eventually becoming unused.

JavaScript as a language is the same, with new features being added regularly.

Browsers run this JavaScript, and for new JavaScript features to be usable the browser must support them, or usage will simply produce errors.

As Mungo Dewar outlines:

Support of JavaScript is not the same for all browsers nor are all common features in today’s browsers present in older browsers. Typically this difference originates due to browser vendors not being completely aligned on all aspects of feature design but also because the web ecosystem continues to evolve and develop, leaving older browsers in a state of incompatibility with today.

Browser vendors have to update their products in order to keep up with developing ECMAScript specifications (the spec for JavaScript), improve and add features to support more and more feature-full applications (think location, camera, mic support) and loads more! This continued support gives advantages to the whole of web (developers, designers, publishers, users etc) everyone benefits from the web moving forward.

In modern browsers, this is not too great of a complication for users. Modern browsers are ‘evergreen’ in that when a new version is released the browser will prompt the user to update and this, from a user perspective, is as simple as a restart of the browser. Of course, there will always be outliers who update slowly or perhaps never update, but it is seen as a safe assumption that the majority of users will update when prompted — even if they don’t fully understand what this means.

By contrast, older browsers required a higher degree of user interaction to update, increasing the likelihood of user resistance to change or a lack of understanding of the process; some were even replaced by entirely new products rather than new versions (think IE11 and Edge). This decreased the chance of users updating themselves and also meant updates were far slower to happen at the corporate enterprise level, where a ‘new download’ may require internal approval and not be compatible with other internal systems. Older hardware also may simply not support newer browsers, making it impossible to update until the user first gets a new device.

This means there is a very long tail of users with older browsers — whether waiting until they can afford a new device, limited within the enterprise, or simply not technically literate.

Who to support?

Choosing to not prioritize tooling, testing, or bug fixes on certain browser versions should be done with proper thought, as it has a direct revenue impact.

Most businesses maintain a ‘supported browsers list’. It outlines which browsers, by vendor and version (e.g. Chrome, v50), it is essential to ship a functioning product in. The ideal would obviously be to be perfect in every browser but, knowing that supporting older browsers is an engineering time investment, strategic decisions are made about where this time is best invested.

How to determine the contents of this list is for every individual business to decide depending on their specific context. For example, a new startup targeting mobile users will have an immediately different audience to one making enterprise software exclusively for government desktop machines. Some will be less niche and aimed at a far wider audience, or some may have existed for a long time already and know they have a core of loyal users who may share certain characteristics, including browser choice.

Where possible it is recommended for this to be informed by real user metrics — analyzing real traffic and seeing the percentage of traffic on each browser version. This can highlight important factors like ‘x% of users use IE11’ or ‘in y market z% of users use Firefox’.

Thresholds can then be set as to which % means the browser must be supported, and this can be re-assessed regularly to see if the list can be updated.

This list can be captured in code for use in engineering tooling via browserslist, a package installed in each front end project. This will be explored more later on.

What is necessary to support older browsers?

So, what is necessary to be able to support this section of our users?

Luckily there are tools that allow us to find a balance between supporting up to date development while also having a product that works for older browsers. What we need to do is decide which trade-offs we wish to make when balancing product features, performance, and developer experience, and then the exact implementation that fits that chosen strategy.

First, let us look at the two sides of ‘modern’ JavaScript language development: features that can be handled with transpilation and features that must be polyfilled. As we will see, the difference between the two is important.

New JavaScript Features

Looking at two example features we can see support for each is mixed, with neither supported on IE11.

// es6 "arrow" syntax
// https://caniuse.com/arrow-functions
const fn = () => "hello";

// Array.includes global method
// https://caniuse.com/array-includes
[].includes("value");
Arrow functions unsupported on IE11 https://caniuse.com/arrow-functions
Array.includes unsupported on IE11 https://caniuse.com/array-includes

It’s important to note that while we use IE11 as the most common example, as it is famously frustrating for its lack of feature support while still having a large user base, a surprisingly high number of users are often seen on older versions of Chrome or Samsung Internet, using older hardware that never updated.

We’ll use these two examples as we go on, but it’s important to note that it’s not just smaller (if common and useful) language features like this that suffer compatibility issues, but also large features like getUserMedia, needed if we want to request and use the device’s camera from the browser.

The first question to ask is the often ignored ‘can we simply not use these features?’. While some have obvious product benefits, like “getUserMedia”, surely others are not essential?

This is a valid challenge, and we must not let ourselves get caught up in being ‘modern’ simply for the sake of it. However, there are two broad reasons to push to support as many modern features as possible. Firstly, these were each developed for a reason. The process of adding new features to a language is long and arduous, so if something has been added we can assume there are real benefits — whether in reducing bugs, improving performance, or making previously complicated tasks simpler. Secondly, it is simply a fact that developers are attracted to using the most modern tools available, and — in a competitive hiring market — being ahead of the curve can help attract talent while being too far behind can directly lead to attrition. This may not be as obviously a user problem, but business problems must also be considered.

We can take from this that, while the specifics must be looked at closely, it is not simply an option to ‘not use’ features as a strategy to maintain older browser support.

So, what tools are most commonly used within the industry to solve the above two examples?

Tooling

Browserslist

‘browserslist’ provides a common syntax for declaring a list of browser names and version ranges alongside a series of tools for working with this list, user agent strings, and the caniuse feature support API.

This is how we codify our product’s ‘supported browsers list’, an example of which could look like:

module.exports = [
  "Android >= 86",
  "Chrome >= 71",
  "Edge >= 86",
  "Explorer >= 11",
  "Firefox >= 67",
  "Opera > 72",
  "Safari >= 11",
  "Samsung >= 12",
];
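
Tooling can consume this list directly. As a quick illustration (a sketch, assuming the browserslist package is installed), its Node API expands these queries into the concrete browser versions they match:

// sketch: expanding browserslist queries into the concrete
// browser versions they match
const browserslist = require("browserslist");

const browsers = browserslist(["Chrome >= 71", "Explorer >= 11"]);
console.log(browsers); // e.g. ["chrome 87", "chrome 86", ..., "chrome 71", "ie 11"]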

Transpilation and Polyfilling

Transpilation (a subcategory of compilation) and Polyfilling are both techniques used to allow modern code to work in older browsers. If we take our two examples above, arrow syntax and Array.includes, we would enable the first to work via transpilation while the second would be polyfilled.

Transpilation

Transpilation takes the existing JavaScript and transforms, at build time, the newer syntax to equivalent syntax that older browsers can understand.

// input: es6 "arrow" syntax
const fn = () => "hello";

// output
"use strict";

// arrow syntax has been turned into a function
var fn = function fn() {
  return "hello";
};

This allows developers to write using the latest syntax but ship to users code that is compatible in older browsers.

Babel has become the gold standard of tools for this job and is what is used in Create React App/React Scripts. We’ll refer to react-scripts from this point forwards, but Babel can also be configured on its own if you do not use react-scripts or Webpack.

Babel has many configuration options, but these are abstracted away behind react-scripts. Whenever we run ‘react-scripts build’ before deploying code, react-scripts will use Babel to transpile it for us. This is also where it becomes important to set up our browserslist correctly: because react-scripts uses the @babel/preset-env package, Babel will use our browserslist to determine what level of transpilation to perform. For example, if our browserslist stated we only supported Chrome >= 50 then no transpilation of arrow functions would need to occur, as from version 45 onwards arrow functions are supported natively.

You can play around with input, output, and different configuration settings at https://babeljs.io/repl.
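
For illustration, a minimal standalone Babel config (a sketch of roughly what react-scripts wires up behind the scenes) could look like:

// babel.config.js: a minimal sketch; with no explicit targets set,
// @babel/preset-env reads the project's browserslist config to decide
// which syntax transforms (such as arrow functions) to apply
module.exports = {
  presets: ["@babel/preset-env"],
};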

Several points are important to note:

  • If a browser is not listed in our browserslist then Babel will not do any work to support it, so we cannot assume it works
  • Babel adds an additional volume of code when transpiling, making our bundles bigger for every user whether they need it or not
  • If using react-scripts we do not have to specify which syntax to transpile; as it is set up to use @babel/preset-env, Babel will run across all our code and transform it wherever it can

Polyfilling

So what about Array.includes, does this not also get transpiled? Simply, no, this is not possible. Instead a polyfill must be added — an injected snippet of code that allows the method to work anywhere in our code.

Why is this not possible, I hear you ask? Because Array.includes is a method on a browser global (the Array prototype), and no amount of syntax transformation can add missing runtime functionality.
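
To make this concrete, here is a simplified sketch of what such a polyfill does (the real core-js implementation also handles NaN comparison and negative fromIndex values):

// simplified sketch of an Array.prototype.includes polyfill: if the
// method is missing, patch it onto the global Array prototype
if (!Array.prototype.includes) {
  Object.defineProperty(Array.prototype, "includes", {
    value: function (searchElement, fromIndex) {
      var i = fromIndex || 0;
      for (; i < this.length; i++) {
        if (this[i] === searchElement) return true;
      }
      return false;
    },
  });
}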

As Tyler McGinnis’ great comparison of transpilation and polyfilling states:

What’s the difference between compiling and polyfilling? When Babel compiles your code, what it’s doing is taking your syntax and running it through various syntax transforms in order to get browser compatible syntax. What it’s not doing is adding any new JavaScript primitives or any properties you may need to the browser’s global namespace. One way you can think about it is that when you compile your code, you’re transforming it. When you add a polyfill, you’re adding new functionality to the browser.

How exactly to provide polyfills requires slightly more thought than transpilation, where the default of using Babel is the obvious best choice.

This polyfill snippet could be written by the developer, provided by a third-party service like polyfill.io, injected by Babel if the useBuiltIns option is configured (to ‘usage’ or ‘entry’), or provided by explicitly importing libraries such as core-js.

Some points that stand out when comparing each of these are:

  • If we polyfill every feature we add significant weight to our bundle, babel-polyfill is around 90kb for example
  • Even if we polyfill only a subset, if this is done at build time to a single bundle set then any extra weight is felt by all users — even those on modern browsers
  • Requesting a polyfill at runtime introduces a blocking call, risking slowing page load times
  • Per-request polyfilling per user requires either hosting, and maintaining, our own polyfills service, or paying for a SaaS solution and managing the integration
  • Per-request polyfilling of ’n’ features per browser introduces the highest amount of entropy, making debugging errors more complex

We’ll look at this in more detail later, but the solution that I currently use in the main project I work on is a polyfill middleware. An internal npm package specifies exactly which polyfills to include, and uses browserslist-useragent-regexp to serve them only to users who require them by adding a self-hosted script to the initial HTML document, passing on no download cost to browsers that do not require them.
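
As a rough sketch of that idea (assuming an Express-style server and browserslist-useragent-regexp’s getUserAgentRegExp helper; the query here is illustrative):

// sketch of a polyfill middleware: flag requests from browsers whose
// user agent does not match the "modern, no polyfills needed" set
const { getUserAgentRegExp } = require("browserslist-useragent-regexp");

const modernBrowsers = getUserAgentRegExp({
  browsers: ["Chrome >= 71", "Firefox >= 67", "Safari >= 12", "Edge >= 86"],
  allowHigherVersions: true,
});

function polyfillMiddleware(req, res, next) {
  const ua = req.headers["user-agent"] || "";
  // consumed later when rendering the initial HTML document
  res.locals.needsPolyfills = !modernBrowsers.test(ua);
  next();
}

module.exports = polyfillMiddleware;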

Depending on your own constraints you may choose a different solution, and this is completely valid.

Lodash

It’s worth mentioning that features can also be provided by installing npm packages that implement those features for you. This is very similar to using a polyfill, but rather than enabling the native feature to work, a package-specific version is imported and used.

Lodash is the most well known, and loved, package of this nature. Rather than using the Array.includes method it provides its own ‘includes’ function. These functions can, and should, be imported individually to reduce their impact on the overall bundle size. They are often more versatile and powerful than the native browser methods (working equally on Objects and Arrays, for example), so are not always used simply to work around adding a polyfill.

import includes from 'lodash/includes'
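
As a quick illustration of that versatility (return values per lodash’s documented behavior):

// lodash's includes works across arrays, objects, and strings alike
import includes from "lodash/includes";

includes([1, 2, 3], 2);      // true
includes({ a: 1, b: 2 }, 1); // true (checks the object's values)
includes("abcd", "bc");      // true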

The other stand-out example is Axios, which avoids the need to polyfill the browser-native ‘fetch’.
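
For example (a sketch; axios is built on XMLHttpRequest in the browser, so it works on older browsers without a fetch polyfill):

import axios from "axios";

// no fetch polyfill required, even on browsers without native fetch
axios.get("/api/items").then(function (response) {
  console.log(response.data);
});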

This is a way to avoid having explicit polyfills, but this is not necessarily the ideal path to take.

It is also important to know that Babel lists caveats to how it transpiles: for some syntax it expects, and depends on, specific polyfills being present. Importantly here, the spread syntax requires Array.from to be polyfilled. At a minimum this is required to be polyfilled, as well as any others listed by Babel if the specific features it mentions are to be used.

Babel caveats which polyfills it expects as standard to work https://babeljs.io/docs/en/caveats
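
One way to guarantee these required polyfills are present is to import them explicitly at the app’s entry point (a sketch; paths assume core-js v3):

// explicit core-js imports at the top of the app entry file, covering
// the polyfills Babel's transforms depend on plus our own minimum set
import "core-js/features/array/from";
import "core-js/features/array/includes";
import "core-js/features/symbol";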

Preventing writing incompatible code: eslint-plugin-compat

So that all sounds great: we can use Babel to transpile our code and a polyfill provider to polyfill our chosen global features. But how do we prevent ourselves from writing (and shipping) code that uses features we have not specified to polyfill? Yes, we would see errors logged in production, so we would know we had used something our tooling doesn’t support and know to fix it, but our users shouldn’t have to suffer errors if we can prevent shipping them in the first place.

eslint-plugin-compat can help us here! This linting plugin uses our browserslist, Babel config (held within react-scripts), and a manually added optional list of polyfills to detect which features are supported within a codebase, and throw lint errors if any non-supported features are used.

This is hugely useful!
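
A minimal sketch of enabling it in .eslintrc.js (the polyfill names listed are illustrative):

// .eslintrc.js: eslint-plugin-compat with an explicit list of the
// polyfills we have chosen to ship; anything else used in code that
// our browserslist does not support natively becomes a lint error
module.exports = {
  extends: ["plugin:compat/recommended"],
  settings: {
    polyfills: ["Array.prototype.includes", "Array.from", "fetch"],
  },
};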

It’s not just JavaScript — Compatible CSS

In the same way that JavaScript language features update over time, so too does CSS. While broken CSS will not prevent code from running, this actually increases the risk: as no errors are logged, only manual detection or a drop in a business metric reveals that an unsupported feature is causing a page layout to break.

The transpilation and polyfilling approach taken for JavaScript is different for CSS. This is mainly due to how hard it is to polyfill CSS in a way that is both performant and reliable, and to industry guidance towards using progressive enhancement via feature queries rather than polyfills for the same reasons. Solutions like Houdini that aim to improve CSS polyfills are not yet widely supported enough themselves to be recommended.

One area does have the tooling to support its automatic inclusion, however: vendor prefixes. This now deprecated strategy of releasing new CSS features left behind many features that are used in production but were never released without the prefix. Remembering which prefixes to include is hard, and as the definition of ‘modern’ browsers changes over time, remembering to clean up old prefixes from CSS (to reduce file bloat) is unlikely to realistically happen. This is where Autoprefixer comes in, a CSS post-processor plugin that can consume a browserslist and do both of these for us! It also provides a CLI command, npx autoprefixer --info, which lists which prefixes will be added due to the browserslist in a project (if any).

This is something react-scripts already uses. It uses the postcss webpack loader, configured with autoprefixer, so all projects benefit from this as standard. It also includes the tried and tested postcss-flexbugs-fixes plugin, which solves some old flexbox bugs for browsers that never received native updates.

To help prevent writing incompatible code in the first place, stylelint-no-unsupported-browser-features can be used in the same way as eslint-plugin-compat, again consuming our ever-important browserslist!
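
Enabling it is similarly small (a sketch in .stylelintrc.js):

// .stylelintrc.js: the plugin reads the project's browserslist and
// flags CSS features those browsers do not support
module.exports = {
  plugins: ["stylelint-no-unsupported-browser-features"],
  rules: {
    "plugin/no-unsupported-browser-features": [true, { severity: "warning" }],
  },
};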

What is the ideal setup?

Considerations

Performance

Performance falls into two main areas here: bundle size and time of download (blocking vs non-blocking).

Both transpilation and polyfills introduce extra code that must be downloaded — and that cost is felt by the user. This leads to two considerations:

  • Can we only ship extra code to those that actually require it, rather than all users receiving the extra code?
  • Can we limit which features we enable to only those we deem ‘essential’ — evaluating every feature and balancing how useful it is vs what it costs the end-user?

Time of download refers to how we add the additional code. If we request this as an extra resource then this becomes a blocking operation that cannot be parallelized, as the rest of the app depends on it, potentially increasing page load times. If this is a dynamic request to a service, this introduces even more latency, and the potential of the service being down. Including it in the main bundle avoids this, but if we ship one bundle to all users then every user will feel the cost.

Ownership

Ownership is a real cost — whether paying for a third party service in money and time to manage the integration, time to run and maintain an internal service or library, or in managing the adoption and education of an internal company standard. This can, and should, be taken into consideration and realistic decisions made for long term support of any solution.

System Complexity

Closely related to ownership, the long-term impact of overly complex systems must be considered. How often is something rewritten simply because its new owners do not fully understand the previous version, or because incidents occur? A brittle, complex system that at the time of first writing produces a ‘perfect’ output is of lesser value than a maintainable and understandable system that produces a ‘good’ output.

Similarly, iterative development that releases value in concrete steps without backing us into a corner should be preferred over a high up-front investment that may be abandoned.

Developer Complexity

The obvious factors here are the effort involved to use the tooling, setup, maintenance, etc., but another consideration is the complexity of switching between different internal front end projects. Knowing what is supported where, keeping track, and not writing incompatible code, is as much a part of the overall solution as what is actually shipped to browsers.

The ideal

Here we see that each browser receives only the additional support (polyfills and transpilation) it individually needs. This means individual bundles per browser, ideally produced at build time so that no blocking requests for additional code are introduced. Realistically the cardinality of per-browser bundles adds too much system complexity, but having a ‘modern’ and a ‘legacy’ set, with clearly lower cardinality, still brings significant advantages.

The difference made by limiting to two builds can be clearly seen here: the minimum legacy browser defines the size of the additional support code that is used across all browsers in its group. This looks particularly bad in Evergreen Browser 1, which falls just shy of the ‘minimum feature requirements’. However, remember that the fact that this browser is evergreen is in itself the solution! It is anticipated that over a short period of time, unlike in the non-evergreen browsers due to the difference in friction of updating, the number of users on this specific browser version will decline.

Here then we see fully supported older browsers with no performance degradation to modern browsers. The performance impact on older browsers varies but the most important factor is fulfilled — their features work!

The performance impact on older browsers can be capped by deciding on and enforcing this minimum feature set. Rather than including all available polyfills, we could use Babel’s useBuiltIns: ‘usage’ option to limit them to only those features we use in code. If we then use eslint-plugin-compat to specify which polyfills we support, matching our intended minimum feature set, we create a system where our linting will fail if we use something outside that set. Even though Babel could polyfill more for us, we stop ourselves before we unthinkingly ask it to; we simply don’t write the initial code in a way that requires it.
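
As a sketch, that Babel configuration could look like:

// sketch: "usage" mode injects only the core-js polyfills actually
// referenced in our code, scoped to what our browserslist requires
module.exports = {
  presets: [["@babel/preset-env", { useBuiltIns: "usage", corejs: 3 }]],
};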

While using eslint-plugin-compat in this way may be novel, producing this two-bundle output is a solved problem, and one described clearly in Smart Bundling: How To Serve Legacy Code Only To Legacy Browsers. The tools it uses (Webpack with Babel, browserslist, and browserslist support plugins) are all ones we covered earlier.
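
One common way to drive a two-bundle build is browserslist’s named environments, selected at build time via the BROWSERSLIST_ENV variable. A sketch (the queries are illustrative, and this mirrors the shape browserslist accepts in a package.json browserslist field):

// sketch: one browserslist config with two named environments, so the
// same project can produce a modern and a legacy bundle
module.exports = {
  modern: ["last 2 Chrome versions", "last 2 Firefox versions", "last 2 Safari versions"],
  legacy: ["Explorer >= 11", "Chrome >= 50", "Safari >= 9"],
};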

The ideal analyzed

For performance this solution achieves its ideal aim: the cost of supporting older browsers is not felt by modern browsers, either in bundle size or in additional blocking requests.

Developer complexity in switching between projects is mitigated by configuring flexible linting tools to catch any unsupported code written.

Changing traffic patterns can be handled by updating the browserslist to match real user metrics.

Ownership from an external source is not an issue: no paid third parties are used, and all open source libraries are widely used with regular updates. (Your company should consider supporting these financially if you benefit from them.)

System complexity is the tipping point here. For new projects, where you also have complete control of your build pipeline, this is not simple, but not overly complex. When adjusting an existing project, or a set of decentralized projects, the system complexity can grow. In this case, it would require detailed analysis to ensure the costs (in maintenance, ownership, education) were worth the performance difference between this ideal and the next best alternative.

The next best alternative

providing a polyfill via a server middleware

Here we can see the result of using a polyfill middleware on the server to add a hosted polyfill bundle to the initial HTML document. This can be self-produced and hosted, or from something like polyfill.io. By having this middleware as an npm package it is possible to version and standardize its configuration if used in multiple projects.

An example of what this middleware could look like can be seen in this example polyfill-middleware repository.

By matching the user agent of the request against a list of user agents needing polyfill support, modern browsers feel no impact from the polyfills (though they do from transpilation), as they download nothing extra.
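
Continuing the earlier middleware sketch, the render step might conditionally inject the script tag (renderHtml here is a hypothetical template helper):

// sketch: only requests flagged as needing polyfills get the extra
// script tag in the initial HTML document; renderHtml is hypothetical
app.get("*", (req, res) => {
  const headScripts = res.locals.needsPolyfills
    ? '<script src="/static/polyfills.js"></script>'
    : "";
  res.send(renderHtml({ headScripts }));
});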

By using our browserslist, Babel for transpilation, and a polyfill server middleware to add polyfill scripts based on user agent detection, we are already very close to the ideal described above.

The JavaScript and CSS linters mentioned above are still completely compatible.

The areas for improvement can be seen as:

  • Requiring an extra network request for the polyfill
  • Transpilation cost felt across all browsers
  • CSS prefix cost felt across all browsers

It is for you to decide whether the ideal described above, assuming no constraints, is the ideal for your use, or whether the alternative here, or another, is better given your individual constraints.

Conclusion

Hopefully, the considerations, explanations, and resources linked here will help you to choose the best solution for you.

Let’s make the web usable for as many people as possible, while still benefiting from modern features!
