Html5-boilerplate: script loading solution

Created on 10 Aug 2010 · 132 comments · Source: h5bp/html5-boilerplate




This issue thread is now closed.

It was fun, but the conversations have moved elsewhere for now. Thanks!

In appreciation of the funtimes we had, @rmurphey made us a happy word cloud of the thread.

Enjoy.





via labjs or require.

my "boilerplate" load.js file has LABjs inlined in it, and then uses it to load jquery, GA, and one site js file. if it helps, I have an integrated RequireJS+jQuery in one file: http://bit.ly/dAiqEG ;)

also how does this play into the expectation of a build script that concatenates and minifies all scripts? should script loading be an option?


All 132 comments

kyle: "@paul_irish i don't agree. http://bit.ly/9IfMMN cacheability (external CDN's), parallel downloading, script change-volatility..."

james burke: "@paul_irish @fearphage @getify RequireJS has build tool to do script bundling/minifying, so can have best of both: dynamic and prebuilt"

The easiest way for developers to get started with script loading would probably be using LABjs, because it already uses a chaining syntax that a lot of jQuery users are familiar with.

If they are building big enterprise apps they can always migrate to require.js if needed.
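For illustration, the chaining style mentioned above looks roughly like this (file names and the init call are hypothetical):

// A minimal LABjs sketch; .wait() is an execution-order barrier,
// so the plugins run only after jQuery has executed.
$LAB
  .script("js/jquery.min.js").wait()
  .script("js/plugin.a.js")
  .script("js/plugin.b.js")
  .wait(function () {
    // runs after all of the above have loaded and executed
    myApp.init();
  });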

Currently there are three main script loaders:

  1. HeadJS
  2. ControlJS
  3. LABjs

Whether to use one at all, and which one to use, is debatable: http://blog.getify.com/2010/12/on-script-loaders/

There are also RequireJS and EnhanceJS, just so you know the alternatives to HeadJS, ControlJS and LABjs. Even Yahoo and Google offer something similar.

With the release of jQuery 1.5 and deferreds (http://www.erichynds.com/jquery/using-deferreds-in-jquery/), Boris Moore is utilizing them in DeferJS, a new script loader project: https://github.com/BorisMoore/DeferJS
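As a rough sketch of why deferreds are attractive for script loading ($.getScript returns a promise as of jQuery 1.5; paths and the init call are hypothetical):

// Load two scripts in parallel and run code once both have arrived.
// Note: execution order between a.js and b.js is not guaranteed here.
$.when(
  $.getScript("js/a.js"),
  $.getScript("js/b.js")
).done(function () {
  start();
});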

By default, script loading blocks all other downloads, so downloading Modernizr in the header is bad. Inlining the loader makes sense, because loaders can then download scripts in parallel, in non-blocking mode. For example, if you do not need all Modernizr features, you can inline head.min.js, which is only 6 KB, or a custom build of Modernizr (http://modernizr.github.com/Modernizr/2.0-beta/). Inlining CSS sometimes makes sense too. Google uses inlining: they inline CSS, JS and empty 1x1 GIFs as data URIs.

LABjs is becoming pretty widely used and is a good solution - also it can be included asynchronously, so it doesn't need to block.

http://blog.getify.com/2010/12/on-script-loaders/ is by the author

http://yepnopejs.com/ just went 1.0 and doesn't break in new webkit, unlike LAB and head.js. Script loading is hard.

yepnope is also integrated into Modernizr as Modernizr.load.. http://modernizr.github.com/Modernizr/2.0-beta/
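Typical Modernizr.load usage looks like this (the polyfill file and init call are hypothetical):

// Feature-test first; only pay the download cost when the feature is missing.
Modernizr.load({
  test: Modernizr.geolocation,
  nope: "js/geo-polyfill.js",
  complete: function () {
    initMap(); // runs whether or not the polyfill was needed
  }
});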

So we'll probably have a script loader in h5bp by way of Modernizr.load pretty soon.

I don't think it'll make 1.0 but once i take Modernizr up to 1.8 we'll toss that into h5bp 1.1. Yeeeah

Hi Paul

I've been porting an existing site to use your H5BP and I want to use the yepnope.js script loader. It's really nice to see all the bits and bobs put together as you have done.

What would you recommend using at the moment?

  1. Include yepnope.js along with modernizr.js at the top of the page
  2. Include it at the bottom of the page, so it loads after the HTML has finished loading.
  3. Use the beta version of modernizr.js
  4. Concatenate yepnope.js with modernizr.js into one include.

Regardless of how best to include it, how do you recommend loading the scripts with yepnope.js?

I figure we should be doing it around here : https://github.com/paulirish/html5-boilerplate/blob/master/index.html#L52 and use yepnope to load the CDN / Local copy of jQuery and our other scripts.

But, do you think it's best to use an external script include or render a script block within the html, which then loads the scripts via yepnope.js?

Many thanks.

Andy

Oh and another thing.

As yepnope can load CSS as well, I would say it's best to include the main CSS as you would normally, and use yepnope only to include CSS for specific fixes.

For example including some css that is only applied to older versions of IE.
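A sketch of that idea, keying off the conditional-comment classes h5bp puts on <html> (the class test and file name are assumptions):

// Load IE-only fixes without making other browsers pay for them.
yepnope({
  test: /(^|\s)ie[678](\s|$)/.test(document.documentElement.className),
  yep: "css/ie-fixes.css"
});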

hokapoka,

Use the beta version of modernizr.. just include what you need (and include Modernizr.load()) and then put that at the top of the page.

the actual code for the jquery fallback with yepnope is on http://yepnopejs.com/
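That fallback pattern is essentially this (version number and paths illustrative):

// Try the CDN first; if jQuery didn't arrive, load the local copy.
yepnope([{
  load: "http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js",
  complete: function () {
    if (!window.jQuery) {
      yepnope("js/libs/jquery-1.6.2.min.js");
    }
  }
}]);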

And yes i like your idea of the conditional load of IE css.

tbh there is too much blind faith around script loaders wrt performance and i don't think we're ready to say THIS IS THE RIGHT WAY.

we need more research around filesizes, bandwidth and network conditions that indicate smart recommendations on script loading but right now the field is nascent and we'd be naive to recommend a blanket solution of script loading.

so.

closing this ticket and asking anyone who cares to do the comprehensive research and publishing required to make it easier for developers to make a smart choice about this one.

i have done quite a bit of research about concat vs. parallel load. i still, without reservation, recommend combining all js into one file first, then chunking it up into 2-3 roughly equal-sized chunks, and loading those in parallel.
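In practice, that recommendation might look like this with LABjs (chunk names and the init call are hypothetical):

// Three roughly equal-sized chunks of the concatenated build: LABjs
// downloads them in parallel, while .wait() keeps execution in source order.
$LAB
  .script("js/chunk-1.js").wait()
  .script("js/chunk-2.js").wait()
  .script("js/chunk-3.js").wait(function () {
    App.init();
  });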

I'd love to be able to take my research and make it widespread and at scale, so that it was viable as "fact" in this area. The problem is I've tried and tried to find hosting bandwidth where it won't cost me lots of $$ to actually run the tests at scale, and have failed to find that hosting provision yet.

If I/we can solve the bandwidth issue for testing, I have the tests that can be run to find out if the theory of parallel loading is in fact viable (as I believe it is).

@getify what do you need as far as a testing rig?

I can do about 1.5TB more data out of my personal server than I'm currently using. I have Nginx installed and that can handle somewhere around 4 trillion quadrillion hits per microsecond. I don't feel like the technology is the barrier here.

If we're worried about locations, we can spoof higher latency, and/or find a couple other people with a little extra room on their boxes.

BTW, I take a little bit of issue with "blind faith".

It is easy, provable, and almost without question true that if you have an existing site loading many scripts with script-tags, using a parallel script loader (with no other changes) improves performance. This is true because even the newest browsers cannot (and never will, I don't think) unpin script loading from blocking DOM-ready. So even in best case browser loading, if there's no other benefit, drastically speeding up DOM-ready on a site is pretty much always a win (for users and UX).

Your statement is a little bit of a false premise because it assumes that we're trying to compare, for every site, parallel-loading to script-concat. Most sites on the web don't/can't actually use script-concat, so really the comparison (for them, the majority) is not quite as nuanced and complicated as you assume. If they don't/can't use script-concat (for whatever reason), the comparison is simple: parallel-loading is almost always a win over script tags.

If they are open to script-concat (or already use it), then yes, it does get a bit more nuanced/complicated to decide if parallel-loading could help or not. But script-concat is not a one-size-fits-all silver bullet solution either, so there's plenty of sites for whom parallel-loading will remain the preferred and best approach.

Just because some sites deal with the nuances/complexities of deciding between parallel-loading vs. script-concat doesn't mean that the greater (more impactful) discussion of parallel-loading vs. script tags should be lost in the mix. The former is hard to prove, but the latter is almost a given at this point.


All this is to say that, all things considered, IMHO a boilerplate should be encouraging a pattern which has the biggest impact in a positive direction. If 80% of sites on the internet today use script tags, most of which would benefit from moving from script tags to parallel-loading, then parallel-loading is a very healthy thing to suggest as a starting point for the boilerplate.

It's a much smaller (but important) subsection of those sites which can potentially get even more benefit from exploring script-concat vs. parallel-loading. But a minority use-case isn't what should be optimized for in a boilerplate.

Just my few cents.

@paulirish @slexaxton --

As far as bandwidth needs, I estimated that to get 10,000 people (what I felt was needed to be an accurate sampling) to run the test once (and many people would run it several times, I'm sure), it would be about 200GB of bandwidth spent. For some people, that's a drop in the bucket. For me, 200GB of bandwidth in a few days time would be overwhelming to my server hosting costs. So, I haven't pursued scaling the tests on that reason alone.

Moreover, I have more than a dozen variations of this test that I think we need to explore. So, dozens of times of using 100-200GB of bandwidth each would be quite cost prohibitive for me to foot the bill on. I didn't want to start down that road unless I was sure that I had enough bandwidth to finish the task.

They're just static files, and the tests don't require lots of concurrent users, so there's no real concerns about traditional scaling issues like CPU, etc. Just bandwidth, that's all.

We can take the rest of the discussion of the tests offline and pursue it over email or IM. I would very much like to finally scale the tests and "settle" this issue. It's been hanging around the back of my brain for the better part of a year now.

I can do unlimited TB on my dreamhost VPS so this won't be a problem. right now i'm doing 72gb/day and can handle way more. :)

I agree with paul, and think there is quite a bit of misinformation about how and when script-loaders are going to be of any benefit to anyone.

Your first paragraph says it's 'easy', 'provable' and 'without question' that script loaders improve performance.

I made a similar postulation to @jashkenas a while back, and he and I put together some identical pages as best we could to try and measure performance of our _best_ techniques. He's a fan of 100% concat, and I tried 2 different script loading techniques.

https://github.com/SlexAxton/AssetRace

The code is all there. Obviously there wasn't a huge testing audience, but the results at best showed that this script loader was about the same speed as the concat method (with your similar sized 3 file parallel load guidelines followed), and at worst showed that script loaders varied much more and were generally slower, within a margin of error. Feel free to fork and find a solution that beats one or both of ours, even if it's just on your machine in one browser.

As for the "false premise" because h5bp assumes that people concat their js. This argument is entirely invalid because h5bp offers a script build tool, complete with concat and minification. So the argument that parallel-loading is almost always a win over multiple script tags may be true, but it's not better than what h5bp offers currently. That is the context of this discussion.

I think the worst case scenario are people taking something like yepnope or lab.js and using it as a script tag polyfill. That's absolutely going to result in slower loading (of their 19 JS and 34 CSS files), as well as introduce a slew of backwards and forwards compatibility issues that they'll be completely unaware of.

I think in the spirit of giving people the most sensible and performant and compatible default for a _boilerplate_, a build tool goes a lot further to ensure all three.

@slexaxton

... the results at best showed that this script-loader was about the same speed as the concat method (with your similar sized 3 file parallel load guidelines followed)...

I'll happily find some time to take a look at the tests you put together. I'm sure you guys know what you're doing so I'm sure your tests are valid and correct.

OTOH, I have lots of contradictory evidence. If I had ever seen anything compelling to suggest that parallel script loading was a waste or unhelpful to the majority of sites, I would have long ago abandoned the crazy time sink that is LABjs.

I can say with 100% certainty that I have never, in 2 years of helping put LABjs out there for people, found a situation where LABjs was slower than the script tag alternative. Zero times has that ever occurred _to me_. There've been a few times that people said they didn't see much benefit. There've been a few times where people were loading 100+ files and so the crazy overhead of that many connections wiped out any benefits they might have otherwise seen. But I've never once had someone tell me that LABjs made their site slower.

I have literally myself helped 50+ different sites move from script tags to LABjs, and without fail the sites saw performance improvements right off the bat. Early on in the efforts, I took a sampling of maybe 7 or 8 sites that I had helped, and they had collectively seen an average of about 15% improvement in loading speed. For the 4 or 5 sites that I manage, I of course implemented LABjs, and immediately saw as much as a 3x improvement in loading speed.

Of course, when LABjs was first put out there, it was state-of-the-art for browsers to load scripts in parallel (only a few were doing that). So the gains were huge and visible then. Now, we have almost all browsers doing parallel loading, so the gains aren't so drastic anymore.

But the one thing that is undeniable is that browsers all block the DOM-ready event for loading of script tags. They _have to_ because of the possibility of finding document.write(). Parallel script loading is essentially saying "browser, i promise you won't have to deal with document.write, so go ahead and move forward with the page".

Take a look at the two diagrams on slide 10 of this deck:

http://www.slideshare.net/shadedecho/the-once-and-future-script-loader-v2

Compare the placement of the blue line (DOM-ready). That's a drastic improvement in perceived performance (UX), even if overall page-load time (or time to finish all assets loading) isn't any better.

...h5bp offers a script build tool...

The faulty assumption here is that just because h5bp offers this tool, all (or even most) users of h5bp can use it. Even if 100% of the users of h5bp _do_ use it, that doesn't mean that if h5bp were rolled out to the long-tail of the internet, all of those sites would use the concat tool. There are a bunch of other factors that can easily prevent someone from using it. There are _very few_ reasons why someone can't move from using script tags to using a parallel script loader.

As such, parallel script loading still offers a broader appeal to the long-tail of the internet. It is still easier for the majority of sites that do not use script loading optimizations to move from nothing to something, and that something offers them performance wins. Few of those long-tail sites will ever spend the effort on (or have the skill to experiment with) automated script build tools in their cheap $6/mo, mass-shared, non-CDN'd web hosting environments.

I think the worst case scenario are people taking something like yepnope or lab.js and using it as a script tag polyfill. That's absolutely going to result in slower loading...

I could not disagree with this statement more. LABjs is specifically designed as a script tag polyfill. And the improvements of LABjs over regular script tags (ignore script concat for the time being) are well established and have never been seriously refuted. If you have proof that most (or even a lot of) sites out there using LABjs would be better off going back to script tags, please do share.

There is absolutely no reason why parallel script loading is going to result in slower loading than what the browser could accomplish with script tags. That makes no sense. And as I established above, script tags will always block DOM-ready, where parallel script loading will not.

introduce a slew of backwards and forwards compatibility issues that they'll be completely unaware of.

What compatibility issues are you talking about? LABjs' browser support matrix has absolutely the vast majority of every web browser on the planet covered. The crazy small sliver of browsers it breaks in is far outweighed by the large number of browsers it has clear benefits in.

LABjs 1.x had a bunch of crazy hacks in it, like cache-preloading, which indeed were major concerns for breakage with browsers. LABjs 2.x has flipped that completely upside down, and now uses reliable and standardized approaches for parallel loading in all cases, only falling back to the hack for the older webkit browser. In addition, LABjs 2.x already has checks in it for feature-tests of coming-soon script loading techniques (hopefully soon to be standardized) like "real preloading".

I can't speak definitively for any other script loaders -- I know many still use hacks -- but as for LABjs, I'm bewildered by the claim that it introduces forward or backward compatibility issues, as I think this is patently a misleading claim.

to elaborate slightly on why i intend for LABjs to in fact be a script tag polyfill...

  1. older browsers are clearly WAY inferior at loading via script tags compared to parallel loading. it was in those "older browsers" (which were the latest/best when LABjs launched 2 years ago) that we saw the ~3x page-load time improvements. almost by definition, that makes LABjs a better script tag polyfill, since it brings a feature (ie, the performance of parallel loading) to browsers which don't support it themselves.
  2. newer browsers are obviously a lot better. but they haven't completely obviated the benefits of script loaders. chrome as recently as v12 (it seems they finally fixed it in v13) was still blocking image loads while script tags finished loading. even with the latest from IE, Firefox and Chrome, they all still block DOM-ready while script tags are loading, because they all still have to pessimistically assume that document.write() may be lurking.

So, for the newer browsers, LABjs is a "polyfill" in the sense that it's bringing "non-DOM-ready-blocking script loading" to the browser in a way that script tags cannot do. The only possible way you could approach doing that in modern browsers without a parallel script loader would be to use script tags with defer (async obviously won't work since it doesn't preserve order). However, defer has a number of quirks to it, and its support is not widespread enough to be a viable solution (the fallback for non-defer is bad performance). So you could say that, in the very most basic case, LABjs is a polyfill for the performance characteristics of script tag defer (although not exactly).

Honestly, I still think we should petition standards for a script loading object. Having to create a script tag of a different type than text/javascript to trigger the cache (or worse, use an object tag or an image object or whatever a new version of a popular browser will require) is jumping through a lot of hoops for nothing, and performance will vary depending on too many variables. I can understand that we still load stylesheets using DOM node insertion (but that's only because of order); when it comes to scripts, I think it doesn't make sense at all anymore (I wish Google would stop using document.write in most of their scripts, but that's another story entirely).

Also, I think we're missing the biggest point regarding script loaders here: being able to load js code on demand rather than loading everything up front (even with everything in cache, parsing and initializing takes time, and it can get pretty ugly with a non-trivial amount of concatenated scripts). Having some wait time after a UI interaction is much less of a problem than having the browser "hang" even a little at start-up (the DOM may be ready all right, but what good is it if the code to enhance the page and add interaction hasn't been executed yet: ever noticed how some sites load immediately, then something clunky occurs?).

So strict performance measurement is all fine and dandy, but I still think perceived performance is the ultimate goal... and is sadly far less easy to estimate/optimize/compute.

This is intense.

@jaubourg--

Honestly, I still think we should petition standards for a script loading object.

There is much petitioning going on regarding how the standards/specs and browsers can give us better script loading tech. First big win in this category in years was the "ordered async" (async=false) that was adopted back in Feb and is now in every major current-release browser (exception: Opera coming very soon, and IE10p2 has it).
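A minimal sketch of "ordered async" with dynamically created script elements (file names hypothetical):

// Downloads happen in parallel; async = false opts back in to
// execution in insertion order (in browsers that support it).
["js/jquery.js", "js/plugin.js", "js/app.js"].forEach(function (src) {
  var s = document.createElement("script");
  s.src = src;
  s.async = false;
  document.getElementsByTagName("head")[0].appendChild(s);
});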

The next debate, which I'm currently in on-going discussions with Ian Hickson about, is what I call "real preloading". In my opinion, "real preloading" (which IE already supports since v4, btw) would be the nearest thing to a "silver bullet" that would solve nearly all script loading scenarios rather trivially. I am still quite optimistic that we'll see something like this standardized.

See this wiki for more info: http://wiki.whatwg.org/wiki/Script_Execution_Control

Having to create a script tag of a different type than text/javascript to trigger the cache (or worse, use an object tag or an image object or whatever a new version of a popular browser will require)

This is called "cache preloading", and it's an admitted ugly and horrible hack. LABjs way de-emphasizes this now as of v2 (only uses it as a fallback for older webkit). Other script loaders unfortunately still use it as their primary loading mechanism. But 90% of the need for "cache preloading" can be solved with "ordered async", which is standardized and isn't a hack, so well-behaved script loaders should be preferring that over "cache preloading" now.

So, I agree that "cache preloading" sucks, but there's much better ways to use document.createElement("script") which don't involve such hacks, so I disagree that this is an argument against continuing to rely on the browser Script element for script loading. If we can get "real preloading", the Script element will be everything we need it to be. I honestly believe that.

I think we're missing the biggest point regarding script loaders here: to be able to load js code on-demand

Very much agree that's an important benefit that script loaders bring. But it's sort of a moot argument in _this_ thread, because the "script concat" folks simply cannot, without script loading, solve the use-case, so it makes no sense to "compare" the two. You can say as a "script concat" proponent "fine, we don't care about that use case", but you can't say "we can serve that use-case better using XYZ".

Perceived performance _is_ huge and important, I agree. On-demand loading is a huge part of making that happen. On-demand loading will also improve real actual performance (not just perception) because it tends to lead to less actually being downloaded if you only download what's needed (few page visits require 100% of the code you've written).

Perceived performance is also why I advocate the DOM-ready argument above. Because how quickly a user "feels" like they can interact with a page is _very_ important to how quick they think the page is (regardless of how fast it really loaded). That's a fact established by lots of user research.

Gotta love the passionate, long comments by @getify
Kyle ...

If I can contribute in any way to the research, I would _love_ to.
Bandwidth (costs) doesn't seem to be the problem, so @getify, what do you propose on moving forward?
Do not hesitate to contact me via email (aaron [at] aaronpeters [dot] or twitter (@aaronpeters)

@kyle

Yep, I followed the script tag "enhancements" discussion regarding preloading, and I just don't buy "add yet another attribute to the script tag" as a viable approach. I've seen what it did to the xhr spec: a lot of complexity in regard to the little benefit we get in the end.

What's clear is that we pretty much only need the preloading behaviour when doing dynamic insertion (ie. when we're already in javascript), so why on earth should we still use script tag injection? It's not like we keep the tag there or use it as a DOM node: it's just a means to an end that has nothing to do with document structure.

I'd be much more comfortable with something along those lines:

// Hypothetical API sketched here: the callback receives a script object
// that has been fetched but not yet executed, decoupling loading from execution.
window.loadScript( url, function( scriptObject ) {
    if ( !scriptObject.error ) {
        scriptObject.run();
    }
});

This would do wonders. It's easy enough to "join" multiple script loading events and then run those scripts in whatever order is necessary. It also doesn't imply the presence of a DOM, which makes it even more generic. I wish we would get away from script tag injection altogether asap. Besides, it's easy enough to polyfill this using the tricks we all know. It's also far less of a burden than a complete require system (but can be a building brick for a require system that is then not limited to browsers).
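For example, "joining" two loads with the hypothetical loadScript API above might look like this:

// Preload both scripts in parallel, then execute them in a fixed
// order once both are available (error handling kept minimal).
var pending = 2, loaded = {};
function done(name) {
  return function (scriptObject) {
    loaded[name] = scriptObject;
    if (--pending === 0 && !loaded.a.error && !loaded.b.error) {
      loaded.a.run();
      loaded.b.run();
    }
  };
}
window.loadScript("a.js", done("a"));
window.loadScript("b.js", done("b"));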

That being said, I agree 100% with you on perceived performance, I just wanted to point it out because the "let's compact it all together" mantra is quickly becoming some kind of belief that blurs things far too much for my taste ;)

fwiw, defer is supported in IE4+, Chrome, Safari, and FF 3.5+. Not supported in Opera.

So that means.... 98.5% of users have script@defer support already.

@getify

However, defer has a number of quirks to it,

details plz? i haven't seen anything about this

Do scripts with defer execute before or after DOM ready event fires?

Is execution order preserved in all browsers?

How about exec order and coupling external with inline scripts?

@paulirish--

...98.5% of users have script@defer support already.

support may be there in that many browsers, but that doesn't mean it's reliable in that many browsers. that's what i meant. (see below)

However, defer has a number of quirks to it,

details plz? i haven't seen anything about this

Lemme see... IIRC:

  1. support of defer on dynamic script elements isn't defined or supported in any browser... only works for script tags in the markup. this means it's completely useless for the "on-demand" or "lazy-loading" techniques and use-cases.
  2. i believe there was a case where in some browsers defer'd scripts would start executing immediately before DOM-ready was to fire, and in others, it happened immediately after DOM-ready fired. Will need to do more digging for more specifics on that.
  3. defer used on a script tag referencing an external resource behaved differently than defer specified on a script tag with inline code in it. That is, it couldn't be guaranteed to work to defer both types of scripts and have them still run in the correct order.
  4. defer on a script tag written out by a document.write() statement differed from a script tag in markup with defer.

I don't have a ton of details ready at my fingertips on these issues. I recall about 2 years ago (before LABjs) trying to use defer, and running into enough of them in cross-browser testing that I basically set it aside and haven't really re-visited it much since.


I should also point out that defer is not really the same thing as what LABjs (and other parallel loaders) provide. I said that above with the caveat that it's only sorta like it. In fact, what parallel script loading provides (at least, for LABjs' part), is "ordered async", which has absolutely no way to be achieved only through markup.

The difference between "ordered async" and "defer" is that "ordered async" will still start executing as soon as the first requested script has finished loading, whereas "defer" will wait until DOM-ready before starting execution. For a simple page with little markup and no other blocking calls (like other script tags), this difference is small. But for a page with lots of resources, when scripts are allowed to start executing can differ drastically.

So, I'd honestly like to not get too much off on the tangent of defer, because in reality it's not a great comparison to what parallel script loading provides. It was just the closest example in markup-only that I could use to describe the execution ordered behavior I was getting at. I probably shouldn't have even brought defer up -- just muddies the discussion.

Let me just rephrase from above: "For modern browsers, LABjs is a kind of 'polyfill' for 'ordered async' behavior, which is not possible to opt for in markup-only in any browser."

I like "ordered async", that's a good phrase.

Kyle > afaik, scripts with defer will execute _before_ onload, even before domready.
Scripts with the async attribute will execute asap, and _always_ _before_ onload, but not necessarily before domready

@aaronpeters--
I think you may be slightly off track. Here's how I understand it:

async scripts (whether in markup or dynamically created) will execute ASAP, meaning any time before or after DOM-ready. In other words, async scripts should wait on nothing (except the availability of the JS engine itself). However, if they are requested before window.onload, then in almost all browsers they will "hold up" the window.onload event until they load and execute. I think there was a documented case where the async scripts didn't hold up window.onload, but I don't remember the exact details.

defer on the other hand specifically means: wait until after DOM-ready. Moreover, there's a "queue" of all scripts with defer set on them, and that queue is not processed until after DOM-ready. This means they should all execute strictly after DOM-ready (or rather, after the DOM is ready and finished parsing, to be exact). But they may be delayed even further (if loading is going slowly). They should hold up window.onload, though. I just recall from vague past memory that in some versions of IE the actual practice of this theory was a bit fuzzy.

@getify

Didn't want to derail this thread even more so I posted my thought on script preloading and your proposal on the WHATWG page here: http://jaubourg.net/driving-a-nail-with-a-screwdriver-the-way-web

async scripts (whether in markup or dynamically created) will execute ASAP, meaning any time before or after DOM-ready. In other words, async scripts should wait on nothing (except the availability of the JS engine itself). However, if they are requested before window.onload, then in almost all browsers they will "hold up" the window.onload event until they load and execute.

This is probably easier to understand once you realize JavaScript is single threaded. (I know it took me a while…)

Similarly, if you use setTimeout(fn, 0) to download resources, and they enter the download queue before onload fires, then loading these resources will (still) delay onload.
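A tiny sketch of that point (the asset name is hypothetical):

// Even though this runs "later", the request starts before onload
// fires, so in most browsers it still delays window.onload.
setTimeout(function () {
  var img = new Image();
  img.src = "big-asset.png";
}, 0);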

I think there was a documented case where the async scripts didn't hold up window.onload, but I don't remember the exact details.

I’d love to get more info on this. Please remember! :)

Yay script loaders!

A problem I have had implementing them across AOL's network of sites is dealing with race conditions. For example, loading jQuery asynchronously in the head, and then a jQuery plugin, delivered asynchronously midway through the document inside a blog post.

Thusly, I started my own script loader science project (Boot.getJS) to deal with this. The idea is to download all scripts in parallel and execute them in order no matter what, as soon as possible. It also supports deferring to ready or load, and caching of scripts. Most ideas are borrowed (stolen) from people on this thread, so thanks guys. :)

Since you were discussing benchmarks I figured I'd share a test page I created to understand differences in performance, syntax and behavior of the various script loaders out there, check it out here:

http://artzstudio.com/files/Boot/test/benchmarks/script.html

To see how various loaders behave, clear cache, and watch the network requests and the final time as well as the order that the scripts execute in.

Dave (@artzstudio), txs for sharing your thoughts and the link to your test page.

Question: why do you load LABjs on the '<script> tag in head' page? That seems wrong.

@artzstudio also, you are using an old version of LABjs. Is that intentional? If so, why?

@aaronpeters At AOL we have scripts like Omniture and ad code (and more) that need to go in the head, so that's where the loader library goes in our use case. Also, when scripts are at the bottom, there's a FOUC issue in some of our widgets, so the sooner dependencies (like jQuery) load, the better.

It was not intentional, this test is a couple months old. I'll update the libraries when I get a chance.

FYI (hope this is somewhat interesting/relevant), I ran a few tests on WebPagetest.org to see what happens in IE8 when loading some of the @artzstudio test pages.
Script tags: http://www.webpagetest.org/result/110810_C7_752b756180e132f50a3ef065e9e059ca/
Yepnope: http://www.webpagetest.org/result/110810_8S_a53f4ed2e16179c328fc57c572e71076/
LABjs: http://www.webpagetest.org/result/110810_ZV_1ece92044799e52ed5199aed6b407133/
RequireJS: http://www.webpagetest.org/result/110810_Z3_a1537e41a0b0570286151859973d0cfa/

Video comparing Yepnope and LABjs: http://www.webpagetest.org/video/view.php?id=110810_074cb94c1b04a7ac9bd6538ec3fdd8d3c07f842d

Some notes:

  • Gzip is off on the server, so the much larger file size of RequireJS has an impact
  • As mentioned before, the Script tags page loads LABjs in the HEAD (does not make sense) and that of course has an impact

For these two reasons I created a video only showing Yepnope and LABjs.

What I find interesting is that the Start Render time is a lot better for LABjs. Why is that? Would love to better understand.

Closing remark: I am not posting this with the objective to favor LABjs over Yepnope or anything like that. Just sharing data ...

Oh sorry I see what you meant about the LABjs in <script> test. Fixed now, along with an upgrade to LABjs.

@artzstudio--

Thusly, I started my own script loader science project (Boot.getJS) to deal with this. The idea is to download all scripts in parallel and execute them in order no matter what, as soon as possible

So, were you aware that this is _exactly_ what LABjs is designed for and does very well (if I do say so myself)? What I mean is, did you just want a different API, or what about the parallel script loading functionality was not sufficient?


In any case, as much as I love to brag about LABjs, I don't think it's effective to bog down this thread with "look, my script loader is better at X" type of discussions. Those discussions are useful, but elsewhere.

In the end, all script loader technology boils down to a few simple ideas. No matter what kind of fancy API you layer on top of it, or what use-cases you cater to, the tech is the same. There must be 50 different script loaders these days, and really, none of them provide anything different in terms of the tech, just different APIs. So, comparing APIs is really a rather irrelevant discussion to be having.

What we should focus on is whether the base script loading technology we currently have available in browsers can be used to improve the performance of page loads compared to just using script tags in markup. I have long held the premise that it absolutely can, but that premise has been called into question in this thread. So task #1 is to answer that question.

If we find out that script tags are simply just better than script loading, then we can just stop all this madness and shut down all our projects. I suspect that will not be the case, though. ;-)

Task #2 is to find out once and for all if script-concat is always better than parallel load. Again, my premise (and my testing) show that concat'ing all files into one is good, but then you have to chunk that big file into 2-3 roughly equal pieces and parallel load those chunks. So, we really need to test that theory as well.

If we find out that script-concat is always best, then again, script loaders are still useful when you consider that most sites load scripts from more than one location (jquery from Google CDN, google analytics from google, facebook/twitter/g+ buttons, etc). So we would need to then, as task #3, determine if concat is so much better that you should host your own copies of all those files, concat'ing them together with your own code.

Kyle, can you view source on my example and let me know how I would instrument LABjs to execute all scripts on the page in order (even outside the chain)? I could very well have misread the API (as Paul said, script loaders are hard, %-).


One would think physics says concat is best. Every new HTTP connection is another slow-start, plus a 100ms tax in worst-case scenarios from the CDN.

However, the truth about documents is that they can be very long. So loading one BFJS file in the head may unnecessarily slow down initialization of modules. Loading it at the end can cause annoying FOUC. There may be mobile implications to big files: http://www.yuiblog.com/blog/2010/06/28/mobile-browser-cache-limits/

I think this is the motive behind Souders' "split the payload" rule (http://oreilly.com/server-administration/excerpts/9780596522315/splitting-the-initial-payload.html). We need to do what is perceivably faster too.

And unfortunately what this boils down to is an "it depends" sort of answer, which makes this problem interesting enough to keep us all entertained.

I'm playing around with a hybrid approach where getJS calls are queued up and concatenated periodically at a set time interval, as well as concatenating at the module dependency level (for example, concatenating RequireJS dependencies instead of loading them one at a time), all on the fly on the front end.

It's a science experiment that as you point out, will hopefully be pointless soon, but is interesting nonetheless.

@getify: I know that this is pretty much just pissing on the parade at this point, but still.

If we find out that script tags are simply just better than script loading,
then we can just stop all this madness and shut down all our projects.
I suspect that will not be the case, though. ;-)

I could say a lot of things about snake oil, but a demonstration will work just as well:

http://jashkenas.s3.amazonaws.com/misc/snake-oil/labjs.html

http://jashkenas.s3.amazonaws.com/misc/snake-oil/vanilla.html

That's a page with 100k of text, 10 images, and 171k of JavaScript. The vanilla version uses a single minified file that includes jQuery, jQuery-UI, Underscore, and Backbone, as well as the timer.js file that writes out the load time results. The LABjs version loads each of the 5 (separately minified) JavaScript files using LABjs.

You'll find that not only is there no benefit to the LAB version, but that the extra HTTP requests only hurt load performance, and have to compete with other assets on the page, like images. But all of this has been said many times before...

(Screenshots: load-time results for the LABjs and vanilla versions.)

I can anticipate a counterargument about loading your scripts in bits and pieces ... but that's entirely orthogonal to the script loading technique, so please, leave it out of the discussion.

By all means, stop the madness.

@jashkenas Full, 100% ack. Script loaders just add overhead, complication, and points of failure. A single, server-concatenated file loads fast(est), both in terms of pure transfer time and in terms of efficiency of JavaScript interpreters; as a bonus, gzipping works better if you just have a single file.

I have to say, the LABjs version _sometimes_ loads faster in my browser (compared to the vanilla page), but not consistently. Also onload doesn't always fire, which seems... odd.

Yes, gzipping gives you even more of an overall win with less HTTP requests.

And this isn't about dogma; it doesn't have to be a _single_ JS file for the entire app -- two or three with defer are fine for finer-grained caching, as is loading more later.

Some research by the Google Page Speed team:

http://pagespeed-velocity2011.appspot.com/#8 see slides 8-14, which lends more inconclusiveness to the discussion


I am still keen on the script @defer attribute and think that's a wise basic default, unless you plan on sinking many hours/days into perf testing of your own variations.

@miketaylr: Yes, please shift-refresh each page many times to get an overall feel. S3 latencies and image loads will make things a bit unpredictable -- behaving more like a real app.

Well labjs _always_ loads fastest in my browser (Safari 5.1) even with shift-refresh or when elements are cached.

Of course using a script loader without concatenating will be slower than a concatenated file in a script tag. That's why people (YUI, RequireJS) created script loaders that load concatenated files, and services that concatenate them on request (https://github.com/rgrove/combohandler).

C'mon, this discussion doesn't make any sense. Script loaders are for loading scripts on demand, particularly after user interaction; for instance, loading the logic behind a dialog and form validation when clicking a "log in" button.
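A sketch of that on-demand pattern with yepnope (element id, path, and LoginDialog are all hypothetical):

// Fetch the login-dialog code only when the user actually asks for it.
document.getElementById("login-button").onclick = function () {
  yepnope({
    load: "js/login-dialog.js",
    complete: function () {
      LoginDialog.open();
    }
  });
};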

I have a sneaky suspicion that @jashkenas and @madrobby are oversimplifying things.
Steve suggests parallel downloading has several benefits for a range of blocking issues and browsers _(yes, that means non-WebKit)_. He also mentions a strategy of loading the bare minimum JS required for dom-load tasks and then loading the rest later as needed. Because situations and dev needs vary, I dunno if a script loader belongs in a boilerplate _(enabled by default)_, but I wouldn't throw the baby out with the bathwater just yet.

If it wasn't clear in my original post: I tend to agree (with @jdalton) that there are quite a few benefits to script loaders in highly tested and specific environments that require special attention. I don't think it's an appropriate default.

I agree with @jdalton: there's no one-size-fits-all loader. I personally use different script loaders depending on my actual needs and projects. Sometimes something simple like yepnope or LABjs is fine; other times, RequireJS is a godsend. I'm not sure a boilerplate has to force one in. It's tricky because the idea would be for the boilerplate to make it easy to switch to a script loader... so I wouldn't throw the baby out with the bathwater just yet either.

Also, @getify, pretending all script loaders actually use the same tech underneath is a very uninformed statement.

For what it's worth...
this

// Create a script element and append it to <head>; the browser fetches
// foo.js without blocking the parser or other downloads.
var script = document.createElement('script')
script.src = 'foo.js'
document.getElementsByTagName('head')[0].appendChild(script)

is better than this

<script src="foo.js"></script>

for the one main reason that it is non-blocking. With the latter version, subsequent images and CSS files must wait until that file has downloaded. The former is async; this everyone should know, regardless of whether you decide to use a script loader or not.

re: "pretending all script loaders actually use the same tech underneath is a very uninformed statement."

If they're not doing it the above way, they're doing it wrong

Well to be perfectly fair, using appendChild has fallen out of favor... :D
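The pattern that replaced head-appendChild (popularized by the async Google Analytics snippet) inserts before the first script element, which is guaranteed to exist:

// Safer than assuming a usable <head> element is present.
var script = document.createElement("script");
script.src = "foo.js";
var first = document.getElementsByTagName("script")[0];
first.parentNode.insertBefore(script, first);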

However, I added that test case to AssetRace

https://github.com/SlexAxton/AssetRace/blob/master/asyncconcat.html

It makes onload fire faster, so there _could_ be some perceived benefit. But the finish time is about the same...

@ded: We're not talking about incompetent large blocking <script>'s in <head> here ... we're talking about script tags with a defer or async attribute, loaded at the end of the <body>, where there's nothing left to block.

@jaubourg--

Also, @getify, pretending all script loaders actually use the same tech underneath is a very uninformed statement.

This is a complete mis-representation of what I was getting at. In fact, most script loaders are NOT doing the things that I think they should be doing (and that LABjs now is) in terms of using the best tech. My point was, even if all of them did use the best tech, there's still a finite limit to what we can do tech-wise. I'm pretty sure there are no loaders out there which are using some magic silver bullet that LABjs is unaware of or not using. You can't get light to go faster than the speed of light, no matter what you do or how you fiddle with the numbers.

Arguing about the tech underneath (by way of saying "hey, look at my cool and better API") is pointless. The best tech in script loading is a known finite quantity (even if a lot of loaders are irresponsible and not using it). We can push for better tech (which I am), but debating who has the better API on their loader does nothing for that goal.


This thread really seems to have the most point in trying to determine if script tags are good enough on their own (with or without defer) or if script loaders assist in getting better performance. Secondarily, we need to figure out if concat really is the end-all-be-all of script loading.

It's also a moot point (for this thread) that script loaders have all these other use-cases they can do, which markup script tags cannot do (like on-demand/lazy-loading). Again, that's basically a given at this point, so trying to re-establish that fact is pointless.

NO U

LOAD RAGE!

loaders make me rage

View original posting here.

Also, any ppl/persons offended by a cartoon penis: welcome to the internet! I highly recommend you start your journey here.

OK, I've created 3 tests to illustrate some points. First up, manual script tags (as the base-line):

http://labjs.com/dev/test_suite/test-script-tags.php

Notice that the DOMContentLoaded (aka "DOM-ready") event comes way late, after the scripts finish. This is bad. While the actual load time of the page may be the same as in the later tests, the perceived load time will always be much slower if DOM-ready is being blocked (many sites wait until DOM-ready to attach click behaviors, apply JS-driven enhancements to the content, etc.).

Now, what happens if we use defer on our script tags:

http://labjs.com/dev/test_suite/test-script-defer-tags.php

Well, that's good: we've fixed the DOMContentLoaded delay problem, but now we have another problem. The inline script block doesn't work with defer. It executes immediately. Oops. BTW, this is not a bug; the spec specifically dictates this.

http://labjs.com/dev/test_suite/test-LABjs.php

The LABjs test gets basically the same (or better) performance numbers compared to the defer test, but it doesn't fail to get the inline code to run after the scripts finish.

Try those tests several times in modern browsers (Chrome 13, FF5, etc). For me, LABjs always performed about the same as or better than the defer test. In all my attempts, I've never seen LABjs perform worse than the defer test. Try those tests in older browsers (like FF3.5 or IE7), and you'll see that the script loader starts to out-perform the other tests by noticeable amounts.

Even though the LABjs test has similar numbers to the defer test in the newest browsers, it's a deal breaker that defer can't be used to defer ALL code (it only works for code loaded from an external file). LOTS of sites load scripts and then have inline code to activate/init the code they just loaded. defer offers no solution for this.

Therefore, defer is unsuitable as a "general script loading" technology. The next best option is a script loader.
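For reference, this is the case defer can't express but a loader can: load a file, then run "inline" init code strictly after it (file and init call hypothetical):

// The callback plays the role of the inline script block that
// defer would otherwise let execute too early.
$LAB.script("js/widget.js").wait(function () {
  Widget.init();
});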

@getify

It's not all about micro-optimizations... and, YES, the API is important and is generally a good indication of the kind of limitations the underlying tech has, mainly because of said micro (or even macro) optimizations. It's not always just about loading scripts. Complex dependency management, proper sandboxing and actual, real modularity are not things to brush aside just because you don't have any interest in them. Guess what? These are actually the things that people need, and page-load performance can be achieved at a reasonably good level with static script tags.

It finally all boils down to script tag injection not being the proper tool for the task: it actually never was. It's just a very ugly hack. In that sense, you are actually not pushing for a better tech: you're pushing for more of the same with new kinds of caveats none of us can infer yet. Please, think for a mere second and see if it finally clicks.

What's really infuriating is that you refuse to lay out a single argument in favour of script tag injection as opposed to a proper, native, javascript API for script loading. You just ignore the whole thing. I'll save you the trouble though: there is no argument there. But, heh, we can all have some mental masturbation about the ins and outs of defer and async and feel like we're the gods of javascript, right? Or debate about 50ms optimizations as if it was actually helping anyone in this industry.

If you finally decide I'm worthy enough of an intelligent reply (as opposed to yet another LabJS advert), do so on my blog and let's keep this thread alone. Thank you.

@jaubourg --
I read your post in depth last night. I was planning to write a blog post in response, in large part commending and complimenting you on the good thoughts you presented there. Unfortunately, what you're suggesting has already been hashed out AT LENGTH by members of the discussion thread on W3C and WHATWG. You're pretty late to that party.

There were several people who supported a whole new loader API, and there were several important counter-arguments to why that likely wasn't the best way to go. Again, I was planning to write out a response to you in a careful and reasoned blog post, to help explain all that.

Too bad you have to go and be such a dick here. Now it makes me feel like that reasoned blog post will just be a waste of time. You obviously think I'm an idiot and have never considered the things you're trying to bring up. Because I haven't spent the better part of the last year absolutely obsessing about script loader technology and how to get the spec and browsers to make it better. Yeah, I'm an idiot. I clearly haven't ever thought about anything other than the script tag before.


You apparently didn't listen to the 15 times I've said that _this_ thread had the better goal of focusing on the specific questions Paul Irish and Alex Sexton brought up: is defer good enough? is script-concat better than parallel loading?

Those are the more important questions.

Not what underlying "loader" technology is used. There's a different and better forum for discussing what the underlying loader tech is. I get it, you don't like the script tag. Fine. Go spend dozens of hours on the W3C/WHATWG list trying to get Ian and others to listen to you. They'll probably all just yawn and say "we've already hashed that out, go away."

@getify: Creating ridiculous strawman tests isn't going to win you points, buddy. We all know that sequential script tags block the page. We also know that having inline script blocks run before "defer"ed scripts isn't a problem in any way for real sites.

If you test in order to confirm a preconception ... your tests are always going to confirm that preconception. The debate has never been about 20 script tags vs. 20 LABjs script loads. It's about intelligently trimming, concatenating, minifying, gzipping, and loading your JavaScript in as few HTTP requests as possible, and then caching it.

On the one hand, we have a reliable, browser-supported, time-tested approach that performs demonstrably better on real-world pages; on the other hand, we have a hacked-together "technology" that in the past has actually broken every site that used it after a browser update, that performs demonstrably worse on average, and with a far greater variance of slowness.

It's a no-brainer choice.

@jashkenas--

We also know that having inline script blocks run before "defer"ed scripts isn't a problem in any way for real sites.

Uhh... I guess you haven't done view-source on about 98% of all sites on the internet, which do in fact use inline script blocks in the markup to execute/initialize the code they loaded in a prior (blocking) script tag.

If @paulirish suggests that defer is good enough and that script loaders aren't necessary, then I feel it's important to point out why, in fact, defer IS NOT good enough.

YOU may only care about the few niche sites that you control, which you have complete ability to be highly optimized about build processes, etc. I on the other hand care about helping improve performance on the long-tail sites of the internet, the ones with half a dozen script tags (some of them inline script blocks!), where using half a dozen $LAB.script() calls would in fact likely improve the performance. That's what LABjs was always about. Just because it's not what you care about doesn't mean it isn't relevant.

The debate has never been about 20 script tags vs. 20 LABjs script loads.

The debate in this thread is about whether 3-4 script tags (with or without defer) performs worse, the same, or better than 3-4 scripts dynamically loaded using a parallel script loader. My "ridiculous strawman tests" are in fact intended to test exactly that.

In my experience script loaders shave many milliseconds off the page load time. But I think we've all missed the point here. JavaScript has some bigger problems:

  • Lack of import statements makes it difficult to organize your code in a modular way
  • Global variables collide, unless much attention is given to carefully namespacing everything
  • There is no way to clearly see what a script's dependencies are

I don't use RequireJS because it loads faster, although that's a nice side effect. I use it so I can organize my JS app into small modules much like I would in NodeJS. Each module clearly lists its dependencies, and uses the sandbox pattern to keep the global namespace clean. Modules (and their dependencies) can be loaded up front, or loaded on demand (on user click for example), or lazy loaded. You can really fine-tune your performance with these techniques. And RequireJS also comes with a build tool that combines and minifies all the dependencies into a single (or a handful of) gzip-ready file(s) for deployment. Solving these three issues is a huge win for me.
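A minimal AMD module in the RequireJS style (module ids and the code inside are illustrative):

// js/app/profile.js -- dependencies are explicit, nothing leaks globally.
define(["jquery", "app/user"], function ($, User) {
  function init() {
    $("#profile").text(new User("Ada").name);
  }
  return { init: init };
});

// Elsewhere: load the module (and its dependencies) on demand.
require(["app/profile"], function (profile) {
  profile.init();
});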

I can see why people would debate using a script loader that doesn't solve these problems. If performance is the only point, and it's debatable, then sure. But use an AMD module loader like RequireJS and the debate becomes irrelevant. Modules are the future of JavaScript. Dave Herman from Mozilla is working with board members from Apple and Google to add native modules to the language itself. But in the meantime we can get all the benefits by using an AMD module loader. It isn't just about performance.

@getify

You cannot expect people to treat you any differently than you treat others. Patronizing is not a clever way to get a decent reaction (and god, are you patronizing) and, like I said in my blog post, I don't think you're an idiot; I just think you're obsessed (which you say yourself, btw) and that it seriously impairs your judgment. Like I said in my blog post, it's not up to the W3C or the WHATWG to handle this issue but EcmaScript itself: this is not a browser issue, it's a language issue. Now, don't reply to this if you don't want to; it's your prerogative.

Maybe I came across as harsh, but I just defend what I believe in.

I'll unsubscribe from this thread and won't comment on it anymore. Sorry to have derailed stuff @paulirish and @SlexAxton.

@getify

YOU may only care about the few niche sites that you control, over which you
have the complete ability to highly optimize build processes, etc. I on the
other hand care about helping improve performance on the long-tail sites of the
internet, the ones with half a dozen script tags (some of them inline script
blocks!), where using half a dozen $LAB.script() calls would in fact likely
improve the performance. That's what LABjs was always about. Just because
it's not what you care about doesn't mean it isn't relevant.

If LABjs is about helping mediocre sites load slightly less poorly ... that's a noble goal, I guess. But if you're serious about taking a slow-loading website and having it load as fast as possible -- potentially literally seconds faster than LABjs would allow -- then it behooves you to keep an open mind and acknowledge that the easier and less fragile technique is also more performant.

The debate in this thread is about whether 3-4 script tags (with or without defer)
performs worse, the same, or better than 3-4 scripts dynamically loaded using
a parallel script loader. My "ridiculous strawman tests" are in fact intended to
test exactly that.

The debate in this thread is about how to build a web site to load and execute its JavaScript as fast as possible. Selling snake oil to clients, and promoting it to web developers, is a disservice to both.

Latency exists on the internet. Concatenate, minify, and gzip your JS, and load it at the bottom of the page in as few HTTP requests as possible. Nuff said.

@jashkenas--

If LABjs is about helping mediocre sites load slightly less poorly ... that's a noble goal, I guess

There are hundreds of sites that I personally know about from the past 2 years which did nothing but replace their script tags with $LAB.script() calls, and across the board they all saw better performance (some drastically, some only modestly).

There have been articles written (completely independent of and not connected to me) focused on helping sites in various industries (like ecommerce, real estate, etc) get better performance (because better performance means more conversions), where those articles recommended to sites that they replace script tags with $LAB calls, and many people in those comment threads have responded in the affirmative that it helped them out.

Had those articles said "OK, what you need to do to get more performance is hire a server admin who understands gzip and can install ruby or node.js so you can do some automated build processes......." those people reading those articles would have glazed over and left without giving it another thought. But I like to believe that "Hey, replace <script> with script()" was a pretty easy message for them to understand and connect with.

What I wanted for LABjs is a simple solution that someone can easily drop in to replace their script tags without too much thinking. I recognize that if you can personally consult with a site and figure out best optimizations, you can squeeze a lot more performance out of a lot of sites. But I also recognize that this is far beyond my ability as one person to do for the long tail of the internet, and similarly telling all those mom&pop sites "hey, go get an automated build system, and make sure it uses gzip" is like speaking an alien language to them. OTOH, it's been quite successful to say "Hey, take those 3 script tags, and make them 3 script() calls. See how easy that was?"
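To make that "replace <script> with script()" message concrete, here is a hedged sketch of the one-to-one mapping (file names are illustrative):

<!-- before: three blocking script tags -->
<script src="jquery.js"></script>
<script src="plugins.js"></script>
<script src="site.js"></script>

<!-- after: the same three files via LABjs -->
<script src="LAB.min.js"></script>
<script>
$LAB.script("jquery.js").wait() // .wait() gates execution, since the later files depend on jQuery
    .script("plugins.js")       // (add another .wait() if site.js also depends on plugins.js)
    .script("site.js");
</script>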

Bottom line, my approach with LABjs was to hit the low-hanging fruit.

None of that is to suggest that more sophisticated approaches to optimization aren't possible -- they clearly are, and when I get the chance to consult, I definitely explore them. It's just to say that for a lot of the web, it's more involved/complicated than they're willing or able to get. And I'm just trying to help _those_ sites improve in a way that is easier for them to grasp.

@jashkenas--

potentially literally seconds faster than LABjs would allow, then it behooves you to keep an open mind and acknowledge that the easier and less fragile technique is also more performant.

There has never been any established evidence to suggest that LABjs is significantly slowing down any sites. There's LOTS of established evidence that it's helping a lot of sites. So I don't buy this -- what you're speaking of is a false premise assuming facts not in evidence.

@paulirish found a post that points out problems with the defer attribute:
http://hacks.mozilla.org/2009/06/defer/

Coming from a mobile performance perspective -- like @jashkenas said, it's always better to concatenate, gzip, and send it over the wire as one package than to incur multiple HTTP requests, given the latency of 3G network connections.

There's a lot of research being done in utilizing inlining techniques where you base64 encode images into strings then store them as key:value pairs in localStorage just to reduce http requests and leverage 'caching': http://channel9.msdn.com/Events/MIX/MIX11/RES04 is a great presentation by James Mickens from Microsoft Research.
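As a hedged sketch of that localStorage technique (the key name and target element are assumptions for illustration):

// Cache a base64-encoded image as a data URI so repeat views skip the HTTP request.
var key = 'img:spacer';
var dataUri = localStorage.getItem(key);
if (!dataUri) {
  // Placeholder 1x1 transparent GIF; in practice the string would come from a build step or XHR.
  dataUri = 'data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7';
  localStorage.setItem(key, dataUri);
}
document.getElementById('logo').src = dataUri; // assumes an <img id="logo"> in the page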

Here's a pretty good deck on mobile performance with HTTP requests and their effects on user experience: http://davidbcalhoun.com/present/mobile-performance/

I work on RequireJS, and I want to make a clarification of what RequireJS is aiming to do:

1) Show the way for modular code in JS that works well everywhere JS runs.
2) Load scripts.

The "load scripts" part is a necessary part of achieving the first goal. In dev, it is not a good idea to just concatenate all your scripts because it makes debugging harder, the line numbers do not match up. Script loaders also make it easy to use a JS API to load code on demand. For webmail-size apps, this is a necessary part of the performance story. However, concatenating the scripts into one or a small number of requests is usually the best deployment option.

But the goal of requirejs is to be the shim/polyfill/whatever to show how to create and reference modular code units that can be shared with others in a way that discourages globals and encourages explicit dependencies.

It uses the AMD API, which has been worked out with other people making modular script loaders (and includes compliance tests), with the goal of helping to inform any discussions of a module format in JS. This approach -- making real-world implementations and reaching agreement with others on the API -- is the way progress is made.

In particular, given the network nature of JS and its relation to web docs/applications, the loader plugin API is something that should be supportable in some fashion with the ES Harmony modules, and I am doing work on prototyping the ES harmony modules via a requirejs loader plugin, so I can better understand the harmony proposal and give feedback.

For the performance folks:

  • There are a few choices for loaders that support AMD (curl, Dojo 1.7, loadrunner, requirejs), even a very small one that can be used for the "all scripts in one JS file" optimization done for deployment. So it is possible to get great performance while encouraging best coding practices -- easier code sharing by avoiding globals, using explicit dependencies.
  • The requirejs optimizer runs very fast in Node, and it can run in Rhino. It is a command line tool, but the latest master branch code exports it as a module usable in Node, so for example, it can be run via a Node-based http server that can do the build on the fly. So you can always develop in the "one script download always" mode if you prefer, but then choose to leave out one or two modules out of that optimized file, so you can easily debug them.

In the context of this ticket: choosing an AMD-compliant loader (does not have to be requirejs) fits in with the goals of the HTML boilerplate: point the way to best practices, both in code and in performance. However, I appreciate trying to work out an HTML boilerplate is a very difficult thing to do, there are competing interests, some stylistic, so I appreciate not wanting to make a recommendation in this area at this time.

I just want to make it clear that requirejs and loaders that implement the AMD API provide a larger benefit than just loading some scripts that dump globals and force the developer to work out the complete, sometimes implicit, dependency tree. Those goals are achieved with solutions that have solid performance profiles.
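For concreteness, a minimal sketch of driving the optimizer from Node, per the module export mentioned above (paths and module names are assumptions):

var requirejs = require('requirejs');

requirejs.optimize({
  baseUrl: 'js',         // where the source modules live
  name: 'main',          // the entry-point module
  out: 'build/main.js'   // the single optimized, concatenated output file
}, function (buildResponse) {
  console.log(buildResponse); // text summary of what was bundled
});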

To refocus from earlier... comparing the defer test to the LABjs test... (and ignoring the fact that defer doesn't work on inline script blocks), is anyone seeing that the LABjs test is performing worse than the defer test? I've tried it on a bunch of browsers, and even on my mobile device, and still seeing roughly equal numbers.

http://labjs.com/dev/test_suite/test-script-defer-tags.php

http://labjs.com/dev/test_suite/test-LABjs.php

@getify

I have no idea why or how you can optimize this, but I have, on my 3+-year-old MacBook, a consistent ~3000 ms difference between the two, which favors @defer.

I have only tested with Firefox however.

@espadrine-- quite strange. would love to get to the bottom of that. which version of Firefox are you testing with? can you send me a screenshot of the results?

Just concatenate and minify all your JS and CSS and inline it right in your HTML page and be done with it. Single HTTP request FTW! :P

Seriously though, there are so many bigger problems we should be focused on in this community than how your app is going to load. Chances are, the simplest method (script tags at the bottom) is fast enough. Just write great apps and deal with loading performance at the end. Doing anything else is premature optimization.

Is there a general consensus among the folks on this thread that AMD should be the gold standard for JS code organization? I haven't really seen other options, but I agree the Boilerplate would be a great start to setting folks up right in organizing code.

Firefox UX 8.0a1 (2011-08-07) update channel.

defer
LABjs

Again, no idea why, and this is probably very specific. LABjs is probably very good with legacy browsers.

Please don't use @getify's test page for anything more than a laugh. To quote:

<script defer src="http://labjs.xhr.me/dev/test_suite/testscript1.php?_=4911710&delay=5"></script>
<script defer src="http://labjs.xhr.me/dev/test_suite/testscript2.php?_=6146431&delay=3"></script>
<script defer src="http://labjs.xhr.me/dev/test_suite/testscript3.php?_=9499116&delay=1"></script>

@getify, if you want to make a real test, feel free to fork @SlexAxton's AssetRace repo and add a LABjs version ... or make a test page that uses _real_ JavaScript files, with real latencies.

Also, make sure you actually concatenate the JS for a single script tag -- defer or not. The point is that the same content served over 1 HTTP request beats the same content served across 10 HTTP requests.

There has never been any established evidence to suggest that LABjs is significantly
slowing down any sites. There's LOTS of established evidence that it's helping a lot
of sites. So I don't buy this -- what you're speaking of is a false premise assuming
facts not in evidence.

What was demonstrated above is that LABjs is indeed significantly slowing down sites, by having their JS compete across many HTTP requests with their images, CSS, and other assets. @getify: I'd love to see a link to a site that you think benefited greatly from your conversion of it over to LABjs. Perhaps we can download a copy of that, and use it as a test case you'll respect.

For the record, I think it would be wise to get some more images in the AssetRace repo test page. But it's certainly a good baseline right now.

@artzstudio organizing your JS with an AMD loader is indeed the gold standard, at least until Harmony's modules are finished and widely supported. Then there will be a clear migration path from AMD modules to Native modules.

AMD modules being the gold-standard is certainly an opinion (one that I may share). However, there are plenty of smart people (Yehuda Katz and Dan Webb come to mind) who don't like it and offer other solutions.

@danwrong 's loadrunner can kind of do both, if that's your bag too: https://github.com/danwrong/loadrunner

Some pretty good stuff in there. Potentially a little more practical for non-JS folk as well. I like AMD modules for my stuff, but not everyone wants to spend time converting each version of the libraries they use to be modules.

I know @strobecorp is working on their own solution that doesn't require a lot of the extra code that AMD modules require.

While I'd love AMD to be the default, it's probably not wise from a multi-library/newb standpoint, as much as I wish it was.

@jashkenas--

Please don't use @getify's test page for anything more than a laugh.

If you can't be civil, I have no desire to discuss anything further with you. I am acting in good faith. I would appreciate a little common decency.

@getify, if you want to make a real test

I'd sure like you to explain why what I'm doing is so crazy, laughable, and invalid. I took the approach directly from Steve Souders, who (in his great experience and wisdom) suggested in all his tests that you use server timing to control the scripts, reducing the amount of variance in your tests. That's exactly what I'm doing.

A more controlled test is a valid baseline test. That's established scientific practice. That doesn't mean that real-world tests aren't also useful, but it also doesn't mean that you get to snipe at me and say "laugh at him, what an idiot, because he does his tests differently than I think they should be done."

feel free to fork @SlexAxton's AssetRace repo and add a LABjs version

I'll happily do so. But not because I agree that my other tests are invalid. If you have some reasoned, level-headed arguments as to why my test setup is not valid, please do share. But quit being such an ass about it.

@jashkenas--

The point is that the same content served over 1 HTTP request beats the same content served across 10 HTTP requests.

I know you (and others) keep ranting on here about how this discussion should be all about concat vs. not-concat. If you read much earlier in the thread, I conceded that there were two questions that needed to be addressed. The two issues are, as far as I'm concerned, orthogonal. The first is whether script tags in markup can be as good (or better) than dynamic script elements used in a parallel script loader. THAT QUESTION is what I'm still trying to address with my tests.

The second question, which we haven't gotten to yet, is about whether script-concat is always better. I know you're already convinced of it, but I have counter evidence to suggest it's not so simple. That question needs to also be thoroughly tested. But it isn't what I'm trying to work out right now in this thread.

By continuing to insist that your way is the better way, you just make the whole debate less pleasant to be part of. All I'm trying to do is methodically establish some evidence for each of those two main questions, so we can stop guessing and be more informed. Why isn't that something you can assist with, instead of trying to be a jerk to me because you disagree with me?

With respect to the defer test vs. the LABjs test, I just did a quick screencast capture of testing the two head-to-head in IE9, FF8(nightly), and Chrome15(canary).

http://www.screenr.com/icxs

To answer @paulirish's earlier question (https://github.com/paulirish/html5-boilerplate/issues/28#issuecomment-1765361) about defer quirks, look at how "DOMContentLoaded" behaves across IE, Chrome, and Firefox in the defer test.

In IE9 and Chrome15, the DOMContentLoaded event is held up (blocked) and not fired until after the scripts run. In FF, however, the DOMContentLoaded event is not held up, it fires right away, and the scripts start executing after it. That's a giant inconsistency across modern browsers, and one of the reasons why I don't think defer is sufficient.

As far as I can tell from reading the spec, I'm not sure which behavior is correct. But I do know that it's clearly quirky and inconsistent between browsers.
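A minimal repro sketch of the quirk described above (the delaying endpoint is hypothetical):

<script defer src="slow-script.php?delay=5"></script>
<script>
  document.addEventListener('DOMContentLoaded', function () {
    // IE9/Chrome15: logged only after the deferred script has run.
    // FF8 nightly: logged before the deferred script runs.
    console.log('DOMContentLoaded fired');
  }, false);
</script>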

@getify I'm not trying to be a jerk. I sincerely apologize that I've hurt your feelings.

Naturally, what you see as ranting, I see as the point of the discussion ... and what I see as snake oil, you see as a helpful step forward.

The two issues are indeed orthogonal (language that I used in my original post).

The first is if script tags in markup can be as good (or better) than
dynamic script elements used in a parallel script loader.

We are in complete agreement on this issue -- it doesn't matter. Of course parallel loading will be faster than sequential loading for more than one script. And of course, doing it in a non-blocking fashion, either at the end of the <body> tag, or with defer, or with a script loader, will be better than blocking in the <head>.

But this misses the point. Putting in sequential script tags is a strawman to compare against, because no one who cares about the performance of their JavaScript would use that approach. Guess what's also faster than sequential script tags? _Anything_.

The second question, which we haven't gotten to yet, is about whether
script-concat is always better.

We have "gotten to" this question. In fact, it's @paulirish's question at the top of this page. If you're not trying to work it out in this thread, you need to be. It strikes at the heart of all your claims about what LABjs does, not just in this thread, but over the years.

That question needs to also be thoroughly tested.

To repeat myself, here's a (fair) test case. The same 5 real-world scripts, loading on to a medium-sized page with other assets present, one using LABjs best practices to ensure load order, and the other using a single concatenated script:

http://jashkenas.s3.amazonaws.com/misc/snake-oil/labjs.html

http://jashkenas.s3.amazonaws.com/misc/snake-oil/vanilla.html

If you have another test case you'd like to examine, or a real-world LABjs-using website you'd like to experiment with, please share it.

@SlexAxton Thanks. I'd be curious to hear Yehuda's take on it and other strong opinions (other than it's too hard to refactor). I found this but not the talk.

To clarify @geddesign's comment: as of today it looks like AMD modules can be converted fairly easily to harmony modules, but I consider the harmony modules proposal still to be in flux; it could change later. It has not been through rigorous implementation testing yet, but it is starting to get some legs. On the plus side, AMD loaders + loader plugins can give solid feedback when trying out some of the harmony ideas.

To @SlexAxton's comment:

For loadrunner: it is not clear to me the syntax is any better, just different. It supports AMD, so it still works out.

For strobe: I have yet to see code from them on it. They seem fairly inward-focused, although I appreciate the work Yehuda has done to open up that development. Alex, if you have pointers to what they are thinking, I would appreciate getting them.

If the approach is going to allow nested dependencies (which is needed for broad code sharing), you need a syntax that:

  • gives a name to a unit of code
  • a way to specify dependencies
  • a function wrapper around that code to make sure it does not execute until dependencies are ready. Or always mandate a build or XHR access, which is not scalable across the spectrum of JS development.

This is what AMD provides, and the syntax is as slim as it can get. Anything else is just fighting over names and possibly some types of punctuation. At some point something just needs to be chosen, and so far I have not heard from Dan Webb or Yehuda about structural weaknesses that make AMD untenable. Some AMD loaders, like requirejs, can load plain scripts; they do not have to be modules.
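For reference, those three requirements map directly onto the define() signature; a hedged illustration with made-up module names:

define(
  'widgets/menu',              // 1) a name for the unit of code
  ['jquery', 'widgets/base'],  // 2) its dependencies
  function ($, base) {         // 3) a wrapper that only runs once the dependencies are ready
    return { /* module API built from $ and base */ };
  }
);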

It is very easy to think up code syntax, particularly for modules, and I can appreciate everyone has their own personal preferences. However, AMD has a fairly deep history of doing the hard work of getting some kind of agreement, and more importantly real code and deployment to back it up. I feel the onus is on others now to really be very crisp and clear on why AMD is not a good fit (this ticket is not the place for it, feel free to contact me off-list, or use the amd-implement list).

But I appreciate @SlexAxton's view. Standardizing on an AMD approach for HTML boilerplate could be premature, and I am completely fine with that. If the boilerplate project decides it does want to pick one, AMD is a strong choice that fits a wide spectrum of JS development.

@SlexAxton I'm with you. My own code is AMD all the way. While I wish everyone wrote modules instead of scripts, luckily RequireJS can load plain scripts as well as modules.

If you're referring to Yehuda's handlebars.js templating, those work extremely well with RequireJS. Especially if you write a plugin that compiles/caches the template and returns its template function.

// 'tmpl!' here is the hypothetical template-compiling plugin described above;
// 'nav' is the compiled template function, and 'data' is whatever model object you render with.
define(['tmpl!navigation.html'], function(nav){
   $('body').append(nav(data));
});

I disagree with this statement however:

While I'd love AMD to be the default, it's probably not wise from a multi-library/newb standpoint, as much as I wish it was.

Newbs need the clean structure that AMD provides even more than a seasoned developer, as they are more prone to global variable collisions, terrible code organization that leads to huge messy JS files that nobody wants to touch for fear of having to deal with merge conflicts, etc. Libraries benefit from modules enormously, which is why upcoming Dojo 1.7 and Mootools 2.0 are moving to AMD. I hope jQuery gets on board - one of its biggest complaints is that it's "all or nothing". You can't use its excellent DOM manipulation without also loading its animation, ajax, events, etc. onto the page as well. So yeah, AMD is a win-win. If HTML5 Boilerplate wants to point people to best practices, it would be a shame to leave out AMD. It elegantly solves so many of JavaScript's problems.

To be clear. I agree. I wish they used require all the way.

I just don't think they will.

I don't think people realize yet that AMD is a buzzword, a "thing" every serious developer needs to know about. Once they do, they will want to tell their bosses and future interviewers that they know about it and use it.

If we all do our part and say "see, it's easy, and better, and important" and make it a buzzword, the herds will follow for the sake of their careers.

@jashkenas--

The first is if script tags in markup can be as good (or better) than dynamic script elements used in a parallel script loader.

We are in complete agreement on this issue -- it doesn't matter.

Actually, I started my participation in this thread assuming that everyone agreed that dynamic script element loading was going to lead to better performance than script tags. But both @paulirish and @slexaxton have called that assumption into question in _this_ thread.

@paulirish has suggested that defer is a sufficient way to make the plain ol' script tag as good (or better) than the dynamic script element loading alternative. I disagree that defer is sufficient, and I've established now several reasons why.

So, I think it IS valid for us to have examined the first question, and to have explored whether defer is better than script loaders. There may be a few limited cases where you can get away with defer, but in the generalized case, script loaders handle/normalize all the quirks, whereas defer exposes you to those problems.

I'm still not sure that everyone sees or agrees with why defer is not sufficient.

To repeat myself, here's a (fair) test case. The same 5 real-world scripts, loading on to a medium-sized page with other assets present, one using LABjs best practices to ensure load order, and the other using a single concatenated script:

This is your (and others') false testing premise. I never ever ever ever claimed that loading 5 scripts instead of 1 was going to be faster. Never. Ever. Can I be any more clear? The premise has never been 5 vs. 1.

The first test was to test 3 script tags vs 3 script() calls, because that's a fair test. And I think the video and the tests illustrate that script loading, in THAT scenario, is beneficial.

The second, and much more complex to test question, is whether there's any way to improve on the performance of a site that is already loading all its JS in one file. Most people say that it's impossible to improve on that. I disagree.

NOTE: the reason this question is orthogonal is that you can load this single concat file either with a script tag, or by using document.createElement("script")-style dynamic loading. Either way, the question of a single concat file is valid, but separate from whether script tags or dynamic script loading are better.

What you have heard me say several times in this thread, and also in many other contexts (including all my conference speaking on the topic, blog posts, etc), is that I think it's possible that you could improve on the single JS file concat approach by "chunking" (that is splitting the big concat file) into 2 or 3 chunks (at most). If the chunks are of ~equal size, and are loaded in parallel, then it's possible that the page will load faster, even with the extra HTTP overhead, because of connection "Keep-Alive", parallel loading effect, etc.

In fact I was writing about this topic a LONG time ago, way back in Nov 2009, shortly after LABjs' first release: http://blog.getify.com/2009/11/labjs-why-not-just-concat/

In that blog post, and ever since then, I've said that IF you are in a position (not everyone is... in fact, most of the web isn't) to use build-processes to concat, you should do so. Period. Always. Always concat files from 10-20 local files down to much fewer.

BUT, I also say that once you have that single concat file, it might also be beneficial to try and load your single file in 2-3 chunks, loaded in parallel (using a script loader).

Why might this be better? I lined it out in that blog post, but in short:

  1. The parallel loading effect is real -- ask bit-torrent users about this. The HTTP overhead is also real, and acts to counteract (and can eliminate) that benefit. But that doesn't mean it's impossible to benefit. Using connection Keep-Alive, it's possible to get 2 or 3 simultaneous connections (without 2-3 full connection overhead penalties) and load your code in a shorter amount of time. Will it be 1/3 the time (60-70% faster) if you load it in 3 chunks? No. Absolutely not. But it may be 20-30% faster.
  2. Serving all your code in a single file prevents you from using different cache headers for code with different lifetimes. For instance, jquery is very stable and rarely needs to be re-downloaded, but the UX-centric code on your site may be very volatile (you may tweak it once per week or more). Short caching headers on the single concat file are stupid, because they force frequent re-downloads of stable code unnecessarily. Long caching headers on the single concat file are also stupid, because they force you to invalidate the cached file (cache-bust param, etc.) and re-download the entire file when you tweak a single byte of your more volatile code. Chunking your big concat file into 2 chunks -- one for the stable code, one for the volatile code -- lets you set different caching headers for each chunk. This makes more effective use of the cache, and leads to potentially better performance over time as users make repeat visits to your site.
  3. Studies have shown that, on average, a single page-view uses far less than 100% of the JS that gets loaded on the page (some estimates put it around 20-30% of the code). Loading all your code in one shot, all at once, at the beginning of page load congests the line unnecessarily to push the 70-80% of the file that is not needed then (and may "never" be needed). If you have your code in 2 chunks (one with the more critical code, another with the less critical code), and you load the first chunk right away and the second chunk a few seconds after page load, you free up the pipe for the much more important images/CSS and content. In essence, chunking allows you to prioritize your code.

Bottom line... on the topic of concat vs. parallel... I _always_ tell people: both. Not one or the other.
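As a hedged sketch of that "concat, then chunk" approach (chunk names and the init function are illustrative):

$LAB
  .script('stable-chunk.js')    // jQuery + other rarely-changing code: long cache lifetime
  .wait()                       // gates execution order only; both chunks still download in parallel
  .script('volatile-chunk.js')  // site-specific code: short cache lifetime
  .wait(function () {
    initSite(); // runs after both chunks have executed, in order
  });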

@getify well said.

Kyle's LABjs has my support.
As a consultant helping sites improve performance, I have seen LABjs work well many times.
Not only did it improve performance significantly (not just 100 ms, but 1+ sec), but the developers also liked it.
Easy to understand, easy to implement.

And I will take this opportunity to publicly say "Thank you Kyle, for the great support on LABjs. You've exceeded my expectations several times."

Using connection Keep-Alive, it's possible you can get 2 or 3 simultaneous connections (without 2-3 full connection overhead penalties)

HTTP doesn't mux/interleave responses, so you can't have parallel downloads without opening multiple connections first. The ideal case of a persistent, pipelined connection is equal to a contiguous download of a single file (+ a few headers).

@pornel--

I have seen first-hand and validated that browsers can open up multiple connections in parallel to a single server, where with Connection Keep-Alive in play, the overhead for the second and third connections is drastically less than for the first. That is the effect I'm talking about.

@getify Fantastic, I think we've reached some sort of consensus. To refresh your memory:

I can anticipate a counterargument about loading your scripts in bits and pieces ...
but that's entirely orthogonal to the script loading technique, so please, leave it out
of the discussion.

Yes, I agree that loading your volatile scripts in a different JS file than your permanent scripts is great. Loading the script that is only needed for a specific page, only on that specific page, is similarly great.

So if I'm a web developer and I've got a page with a bunch of JavaScripts, what should I do? Use LABjs, or concatenate my permanent scripts into one file, and my volatile scripts into another, and load both at the bottom of the body tag with <script defer="true">?

Why should I subject my app to caching headaches, browser incompatibilities, race-against-the-images-on-the-page, and the rest of the trouble that a script loader brings along?

If the entire premise of using a script loader for performance is that it's easier and simpler than using two script tags ... I've got a bridge in Brooklyn to sell you.

@getify having implemented a web server more than once: keep-alive does not affect concurrent requests in any way and only reduces the costs of subsequent requests. A split body with two subsequent requests with keep-alive is still more expensive than a single request. Having two concurrent requests for the two body parts will probably perform better, but keep in mind that the browser will only open a limited number of concurrent requests (depending on the browser and config something around 5, I think), which is fine if all you do is loading your three js files, but is, as @jashkenas pointed out more than once, an issue if you have other assets, like images or css files.

@jashkenas-

So if I'm a web developer and I've got a page with a bunch of JavaScripts, what should I do? Use LABjs, or concatenate my permanent scripts into one file, and my volatile scripts into another, and load both at the bottom of the body tag with <script defer="true">?

TL;DR: both

Firstly, a lot of sites on the web are assembled by CMS's, which means that having inline script blocks strewn throughout the page is common, and VERY difficult to solve maintenance-wise by just saying "move all that code into one file". So, I think the premise that _most_ sites can get away without having any "inline code" to run after another external script loads and executes is unlikely, at best.

Secondly, I've proven that defer acts differently with respect to DOMContentLoaded in various browsers. In some browsers, the scripts go before DOM-ready, in other browsers, they go after DOM-ready. If you have code in your scripts which relies on happening before or after DOM-ready, using defer can be a problem. It's especially true that it's a sensitive area with a lot of misunderstanding and confusion, so it quickly becomes "this is not a simple straightforward solution". It takes a lot more thought.

Thirdly, I think for a lot of sites, changing their markup to use $LAB.script() instead of &lt;script> is a lot easier than explaining to them how to install some automated (or manual) build process on their server. Especially if that site is on shared hosting (most of the web is) and they don't really control much of their server, asking them to figure out build processes so that their code maintainability is not lost is... well... non-trivial.

Can these things be overcome? Yep. Of course they can. But they take a lot of work. In some cases (like the DOM-ready thing) they may take actually painstakingly adjusting your code. It takes a person with dedicated efforts and lots of expertise and passion in this area to sort it all out.

By contrast, they can get a "quick win" dropping in LABjs instead of the &lt;script> tag. There's little that they have to think about (except document.write()). Most of the time, "it just works". And most of the time, they see an immediate speed increase in page load. For most sites, that's a big win.

So, to answer your question, I'd say, as I said before, do both... First drop in LABjs, see some immediate speed increases. Now, consider strongly the benefits of using a build process to move you from 15 files down to 2 files (1 file chunked in half). When you do that (if you do that, which as I said, most won't), you can ditch LABjs if you really want. But there's no real harm (it's small and caches well, even on mobile). It'll continue to load your two file chunks well, AND it'll do so without the quirks that defer might cause.

Also, having LABjs already there makes it stupidly simple for you to do step 3, which is to start figuring out what code you can "lazy/on-demand load" later. You can't do that without a script loader. Having LABjs already there and familiar means you don't have to worry about how to load that on-demand script at all -- it's already figured out.

@rkh--

I had it demonstrated to me (specifically in Apache, by toggling the Keep-Alive setting) how multiple parallel requests were affected (positively when Keep-Alive was there). I'm no expert in this area, so arguing the exact details of how it works is beyond me. I can say that the timing of request #2 was less than the timing of request #1, when Keep-Alive was there. How the browser and server did that, I can only make partially-informed guesses at.

A split body with two subsequent requests with keep-alive is still more expensive than a single request.

I never argued that the second request is free. I argued that the second request is not as expensive as the first request. So, if we assume that at least one request must be made, having a second request in parallel is NOT the same thing as having two completely independent connections to the same server, in terms of overhead or time costs.

By way of estimate, it seemed like Request #1 was X to service, and #2 in parallel with Keep-Alive present was 0.7X. It was explained to me that the server was able to utilize some of the existing connection overhead in servicing the second request, thereby making it a little cheaper. With Keep-Alive turned off, the second request had no such measurable decrease.


All this discussion is a seriously deep rabbit hole though. I'm no server expert. I don't have to be. I can only explain that I have actually seen (and created) tests around this exact topic: is loading two halves of a 100k file in parallel any measurable amount faster than loading the single 100k file? As I've said, the chunked-in-parallel test came out somewhere between 15-25% faster. How it managed to overcome the awful "OMG HTTP RESPONSE OVERHEAD IS TERRIBLE" effect and still benefit from two parallel loads, I guess I'm not qualified to scientifically prove. But by observation, it definitely did.

Christ, you people type fast. I finish reading, reload the page, and there are like nine more comments.

I need help. I've tried to pinpoint exactly where in this thread we went from discussing _what works best for a boilerplate HTML file_ to discussing _whether script loaders are, in all cases, snake oil_.

@getify, you should certainly defend LABjs and respond to specific criticisms made by others in the thread, but (excepting @jashkenas) I think those who criticize LABjs are doing so in order to demonstrate that it's not the best solution for a boilerplate. You argue that it's easier to convert legacy pages to LABjs than to script[defer], and that might be true, but how does that apply to a boilerplate HTML file (which is, by definition, starting from scratch)?

You say that it's designed for people who don't have fancy build processes, but you also seem to advocate concatenating, splitting into equal-sized chunks, and loading in parallel. Isn't that a task for a build script? Again, it seems like the wrong choice for a boilerplate designed to give the user intelligent defaults. If a user wants that purported 20-30% speed increase, she can choose to upgrade later over what the boilerplate offers, but that's not a trivial task.

Having said all that, if you guys want to carry on with the general topic ("Script Loaders: Valuable Tool or Snake Oil?"), I'll happily hang around and make some popcorn.

@getify: I can agree that 2nd and 3rd connections might be opened faster than the first – the first one waits for DNS and possibly routing the very first packet to the server is a bit slower than routing the rest alongside the same path. In HTTPS SSL session cache helps subsequent connections a lot.

However, I don't see relevance of Keep-Alive in this situation. Subsequent _requests_ on the same connection are started faster with Keep-Alive, but those requests are serial within the connection.

I'm about done here -- I just reached my "mad as hell and not going to take it anymore" moment with respect to script loaders.

That said, I think that this thread, for a flame fest, has actually been quite productive. If LABjs wants to stake out a claim for the hapless and incompetent web sites, and leave people who actually want to have their sites load fast alone, it's a great step forward.

dude, chill

@savetheclocktower--

Fair questions.

I didn't start my participation in this thread strongly advocating for LABjs (or any script loader) to be included in h5bp. I think it's useful (see below), but it wasn't a major concern of mine that I was losing sleep over. Clearly, this thread has morphed into an all out attack on everything that is "script loading". That is, obviously, something I care a bit more about.

You say that it's designed for people who don't have fancy build processes, but you also seem to advocate concatenating, splitting into equal-sized chunks, and loading in parallel. Isn't that a task for a build script?

I advocate first for moving all your dozens of script tags to a parallel script loader like LABjs. This takes nothing more than the ability to adjust your markup. That's a far easier/less intimidating step than telling a mom&pop site to use an automated node.js-based build system, for instance.

And for those who CAN do builds of their files, I advocate that LABjs still has benefit, because it can help you load those chunks in parallel. If you flat out disagree that chunks are in any way useful, then you won't see any reason to use LABjs over defer. But if you can see why chunking _may be_ helpful, it should then follow that a script loader _may also assist_ in that process.

Again, it seems like the wrong choice for a boilerplate designed to give the user intelligent defaults.

The only reason I think a script loader (specifically one which is designed, like LABjs, to have a one-to-one mapping between script tags and script() calls) has a benefit in a boilerplate is that in a boilerplate, you often see one instance of something (like a script tag), and your tendency in building out your page is to just copy-n-paste duplicate it as many times as you need. So, if you have a poorly performing pattern (the script tag) in the boilerplate, people's tendency will be to duplicate that script tag a dozen times. I think, on average, if they instead duplicated the $LAB.script() call a bunch of times, there's a decent chance their performance won't be quite as bad as it would have been.

That's the only reason I started participating in this thread. It's the only reason I took issue with @paulirish's "blind faith" comment WAY above here in the thread.

Sooooooooooo yeah.


I think it's clear this discussion has moved on way past whether a script loader is appropriate for the h5bp project. But that's good, as this topic is worth exploring.


regardless, I'm very interested in reproducible test cases alongside test results.

It also seems the spec for @defer was written to accommodate some of the erratic behavior that browsers deliver along with it. That behavior should be documented. I can help migrate it to the MDC when it's ready.

We need straight up documentation on these behaviors that captures all browsers, different connection types and network effects. I'm not sure if a test rig should use cuzillion or assetrace, but that can be determined.

I've set up a ticket to gather some interest in that https://github.com/paulirish/lazyweb-requests/issues/42

Join me over there if you're into the _superfun_ tasks of webperf research and documenting evidence.

Let's consider this thread closed, gentlemen.

Lazy loading isn't the core benefit of AMD modules, as @jrburke described in his comments. The main reason I choose to use AMD modules as much as I can is that they improve code structure. They keep the source files small and concise -- easier to develop and maintain -- the same way that using css @import during dev and running an automated build to combine stylesheets is also recommended for large projects...

I feel that this post I wrote last year fits the subject: The performance dogma - it's not all about performance; make sure you aren't _wasting your time_ "optimizing" something that doesn't make any _real_ difference...

And I'm with @SlexAxton: I want AMD, but simple script tags are probably enough for most people. Maybe a valid approach would be to add a new setting to pick an AMD project and run the RequireJS optimizer instead of the _concat_ tasks (RequireJS optimizer Ant task); that would be pretty cool and probably not that hard to implement.

Let's consider this thread closed, gentlemen.

@paulirish What about including AMD support? Where should we discuss that?

@benatkin open a new ticket bro.

@paulirish OK, thanks. @jrburke would you please open up a new ticket to continue the discussion you started? I think I'll add a comment, but I don't think I can lay out a case for AMD support as well as you can.

Entertaining and informative. Thanks guys.

I think someone needs to start a new script loader project and called it "Issue28". :)

For widest compatibility, fast performance can be had by putting scripts at the bottom, minified and gzipped, but not deferred. At least not until browser support for defer is consistent for a few years straight.

Bottlenecks can come from ads, too much JavaScript, bloated HTML, too much CSS, too many iframes, too many requests, server latency, and inefficient JavaScript. Applications that use a lot of third-party libs have problems caused by not just too much JavaScript; more than that, they tend to have many other problems as well, mostly bloated HTML, invalid HTML, too much CSS, and inefficient JavaScript. Twitter comes right to mind, with two versions of jQuery and two onscroll handlers that cause a bouncing right column on scroll.

The kicker is that if you know what you're doing, you can avoid those problems. You don't need things like jQuery or underscore, and so your scripts are much smaller. You write clean, simple, valid HTML and CSS. Consequently, your pages load faster, the app is more flexible in terms of change, and SEO improves. And so using a script loader just adds unwarranted complexity and overhead.

https://github.com/BroDotJS/AssetRage

BOOM! I close the clubs and I close the threads.

What a thread ... wow.

Imo, the discussion started in the context of the h5bp, which is intended to be a starting point for web devs.
As such, you can state that the web dev using h5bp will actually have clean HTML, clean CSS, a good .htaccess, etc., and _maybe_ even _not_ suffer from too many images, inefficient JS, lots of crappy third-party JS, etc. You know, because the web dev who chooses the high-performance h5bp is, by that choice, concerned about performance and will pay attention to the non-h5bp stuff that goes onto the page(s).

From the thread, and in this context, I think there is unfortunately not enough evidence to draw a final conclusion.
I am with Paul on getting the research going and documenting what needs to be documented.
Count me in Paul.

Sidenote: I am not very familiar with AMD, and from a first look it seems intimidating to me, or at least not something I can pick up very easily. I think most 'ordinary' web devs will agree.
The stuff you see in the h5bp needs to have a low entry barrier, or it will not be used and uptake of h5bp may be slower than it could be without it.
I doubt something like AMD belongs in the h5bp.
Keep it simple.

And another comment ....
'Putting scripts at the bottom' and 'Concatenating JS files into a single file' have been high up on the Web Perf Best Practices list for many years. So why do >90% of the average sites out there, built by in-house developers and by the top brand agencies, still have multiple script tags in the HEAD? Really, why is that?

And the other 9% have a single, concatenated JS file ... in the HEAD.
Rarely do I see a 'normal' site which is _not_ built by some top web perf dev with one script at the bottom.

Devs keep building sites like they have been for years.
Site owners care most about design and features, so that's what the devs spend their time on.

Changing a way of working, a build system, the code ... it has to be easy, very easy, or else it won't happen.

I have worked on many sites where combining the JS in the HEAD into a single file and loading it at the bottom of BODY broke pages on the site. And then what? In most cases, it's not simply an hour's work to fix that. Serious refactoring needs to take place ... and this does not happen, because of the lack of knowledge and, especially, the lack of time.

(oh right, the thread is closed...)

We're talking about a library built on top of jQuery and Modernizr. Says it all, really. Who uses that? Oh, shit, I forgot: Twitter.com, which uses two jQuerys and also has, in its source code, the following:

Error Line 352, Column 6: End tag div seen, but there were open elements.
Error Line 350, Column 6: Unclosed element ul.
Error Line 330, Column 6: Unclosed element ul.

And the problem with expecting the browser to error correct that is that HTML4 didn't define error correction mechanisms and so you'll end up with a who-knows-what who-knows-where. Sure, HTML5 defines error handling, but it ain't retroactive -- there's still plenty of "old" browsers out there.

And speaking of shit, anyone here had a look at jQuery ES5 shims?

BTW, do you have anything to add to that statement of yours "that the webdev using the h5bp will actually have clean HTML," aaronpeters?

@GarrettS ok, ok, I should have written "will _probably_ have clean HTML"

:-D we can always hope!

Beating a dead horse, I know ... but it turns out that at the same time we were having this scintillating discussion, the current version of LABjs actually had a bug that caused JavaScript to execute in the wrong order in some browsers: https://github.com/getify/LABjs/issues/36

Oh, the irony.

must. resist. posting. totally. [in]appropriate. image. for. previous. statement.... aggggh! AGONY!

My favorite part was when the dude that made dhtmlkitchen.com (currently totally messed up) started talking about markup errors.

That site has been transferred to Paulo Fragomeni. Yes, I made it, and I'm proud of what I wrote there, as here. Go take a screenshot of your weak avatar, jackass.

...and after you're done with that, try to pull your head out of your ass and understand the difference between my old personal website (which is no longer maintained by me) and one that is developed by a team and financed by a profitable, multi-million dollar company (though Twitter may be worth billions AFAIK).

Glad we're keeping this classy, and _on topic_, guys.

jashkenas got the relevant bits of info out early on in this discussion.

But then there was the backlash. No! It must not be! Souders said to do it! And there was the bad advice to use defer, not caring how it fails when it fails.

And then ironically, out of nowhere, there came a claim that h5bp users would be doing things properly. And this is very ironic because this comment came _after_ comments from its supporters who evidently produce invalid markup and use a load of third party abstraction layers (and awful ones). And after the comment about using defer.

And so what does any of this have do with dhtmlkitchen.com being down? Nothing at all, obviously. That was just a weak jab from an h5bp forker who can't stand to hear criticism.

Bros.
Dude.
Bros.

This thread is closed. Remember? You don't have to go home, but you can't flame here.

Hey y'all remember that one time when we made an epic thread where there were multiple debates, personal flame wars, people getting angry all over the place, an obscene image or two, and an all-around good time? Can't believe it was free. We should do that again sometime.
