In a discussion around optimizing front-end assets, I recently had occasion to explain how browsers process `<script>` tags — which seemed useful enough to be reposted here.
My original assertion was that concatenating (or bundling) JavaScript and CSS assets[1] might improve performance by reducing load times, but inevitably the conversation ended up including topics such as moving scripts to the bottom, minification, CDNs and HTTP/2.
In order to assess the consequences of any such decision, it helps to understand how browsers work: When the browser processes an HTML document, it does so from top to bottom. Upon encountering a `<script>` tag, it halts (“blocks”) further processing[2] in order to download the referenced script file. Only after that download has completed and the respective JavaScript code has been executed does HTML processing continue.
Let’s imagine the following document:
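A minimal version might look like this (the script names anticipate the discussion below; the rest of the markup is purely illustrative):

```html
<!DOCTYPE html>
<html>
<head>
    <title>Hello World</title>
    <script src="foo.js"></script>
    <script src="bar.js"></script>
    <script src="baz.js"></script>
</head>
<body>
    <p>lorem ipsum</p>
</body>
</html>
```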
The browser might actually begin rendering the page even before it has fully downloaded the HTML file. Thus you might see the browser window reading “Hello World” (thanks to the `<title>` tag) while the page is still blank.
Once we arrive at `<script src="foo.js">`, processing halts as described above. Afterwards, we continue to `<script src="bar.js">`, repeat the same procedure, and then move on to `<script src="baz.js">` for the final piece. That leaves us with the following sequence:
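Sketched as a timeline (durations invented for illustration):

```
foo.js  ████████
bar.js          ████████
baz.js                  ████████
        ────────── time ──────────▶
```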
Concatenation would mean combining these files into a single one:
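For instance (the bundle's file name is assumed for illustration):

```html
<script src="combined.js"></script>
```

```
combined.js  ████████████████████████
             ────────── time ──────────▶
```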
While the amount of content transferred remains identical[3], this is generally faster because there’s less networking overhead. (Obviously I’m simplifying a bit here.)
As you might have guessed from this (poor man’s) visualization, there’s another approach. We could parallelize the retrieval of JavaScript files:
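Roughly:

```
foo.js  ████████
bar.js  ████████
baz.js  ████████
        ── time ──▶
```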
Browsers these days support this with the simple addition of a dedicated attribute: `<script defer>` (implied by `<script type="module">`).
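Applied to our example, the attribute is all it takes:

```html
<script defer src="foo.js"></script>
<script defer src="bar.js"></script>
<script defer src="baz.js"></script>
<!-- downloads may proceed in parallel; execution still happens
     in document order, after the HTML has been parsed -->
```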
In fact, there’s also another, similar attribute: `async` — except this one doesn’t guarantee order of execution; see Asynchronous vs Deferred JavaScript for details.[4] However, these attributes don’t work for inline scripts (of which, unfortunately, there were a few in the project at hand), so those would likely execute before the deferred external scripts they depend on become available.
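A contrived illustration (the function name is made up; assume it is defined in foo.js):

```html
<script defer src="foo.js"></script>
<script>
    // This inline script runs immediately, while foo.js is still
    // being downloaded, so anything foo.js defines isn't there yet:
    fooInit(); // ReferenceError: fooInit is not defined
</script>
```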
Now, you might argue that HTTP/2 makes all of this a non-issue because it reduces protocol overhead — but in fact, even HTTP/2 is still subject to the laws of physics:
As described above, `<script>` tags are processed sequentially — which means that the browser doesn’t know it should retrieve `bar.js` until after `foo.js` has been fully loaded. Thus it actually has to wait before even requesting that file from the server:
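Schematically (latencies exaggerated for effect):

```
foo.js  |· latency ·|███ download ███|
bar.js                               |· latency ·|███ download ███|
```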
Depending on connectivity, that latency can be significant.
However, if we were using `defer`, those `<script>` tags would be non-blocking, which means the browser could request both files simultaneously:
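Schematically:

```
foo.js  |· latency ·|███ download ███|
bar.js  |· latency ·|███ download ███|
```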
This is why concatenation can actually be a net negative with HTTP/2, as it prevents parallel downloads:
Network protocols aside, it’s generally good practice to relegate script tags to the bottom in order to avoid unnecessarily blocking static HTML content. In the example above, even if the entire HTML document has already been downloaded, if `foo.js` and/or `bar.js` are slow to load (for which there are myriad potential reasons), they’d prevent the content below from being displayed.
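For our example, that would mean moving the `<script>` tags to the end of `<body>`, so the markup renders first:

```html
<body>
    <p>lorem ipsum</p>
    <script src="foo.js"></script>
    <script src="bar.js"></script>
</body>
```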
1. i.e. instead of serving source files individually, combining them into a single file for distribution ↩
2. for somewhat arcane historical reasons related to `document.write` ↩
3. minification, by contrast, reduces the amount of content (e.g. by removing whitespace that's only relevant for us puny humans) ↩
4. noted performance pioneer Steve Souders makes the case for preferring `defer` over `async` ↩