It would be great if we could migrate to ES6 modules, as three.js did, so that end users can take advantage of Webpack 2/Rollup's tree-shaking feature.
whom do you mean by "we"?
@buzzdecafe I am just suggesting here xD feel free to close this issue if you don't like it
The problem for now is that Ramda is also available as individual modules. You can simply do
const map = require('ramda/src/map');
, soon to be replaced with just:
const map = require('ramda/map');
I don't see any way to maintain that -- which is still important to a number of people -- while also switching to a Rollup-built ES6 module format. If this actually is possible, I'd really like to know!
Out of curiosity, what is the benefit of doing
const map = require('ramda/map');
vs
const map = require('ramda').map;
I can only think of reducing bundle size, but I don't think that's relevant when using it in Node.js like this?
Regarding having the original code as ES6 modules and still being able to have separate commonjs modules I think there are several ways to do that. One way would be to use this function in babel. The idea is to write the modules in ES6 and have babel transform them to commonjs. If the transform is applied on a per file basis I think it should be possible to require the resulting commonjs files separately like today.
I would not at all mind seeing a demonstration of these ideas if you're interested in creating a PR.
I don't know what the benefits are, except that I know a number of people do seem to want this.
I did a small POC in #1977. Only src/add.js has been updated to use ES6-style export. If the concept is accepted I could rewrite everything to this style.
@CrossEye About the benefits of proper build for rollup/webpack 2. You can continue to import ramda like this:
import {map} from 'ramda'
And with rollup/webpack 2 it will work like this (roughly), because of tree-shaking:
import map from 'ramda/src/map'
So you actually bundle only the things that you import (and use).
Here follows some speculation on the need to have commonjs modules that are importable as separate files. If you are doing this in your application:
const map = require('ramda/src/map');
Then you are probably doing it for a reason. The only reason I can think of is that you want a smaller application bundle. So if that is true, given that bundle size probably is not a concern on the server-side, you are building something for the browser. But you are using commonjs-style require which cannot run in the browser. This implies that you are using a bundler that can handle converting this down to es5 (like webpack or jspm). Now since you are already using a bundler that is capable of converting different formats of imports/require you could be doing (as @faergeek pointed out above):
import {map} from 'ramda'
And you would not have to change your tool-chain since your bundler already supports this. And if you are using webpack 2 or rollup, your application bundle would automatically exclude everything except what is needed to run the map function.
So if all my speculation is true, then I don't think there is a high demand for a folder with separate commonjs module files. But as I said this is all speculation since I don't know the reason for wanting separate commonjs files.
BTW, I don't understand what the problem would be with having both.
@faergeek You mean having both a folder with separate cjs files and the original src folder with es6 modules? No problem. My PR currently supports both ways :-).
@jonaskello I didn't look at it yet. But the main concern I see here from @CrossEye is that import map from 'ramda/src/map' should still work as before. And I just don't see any reason it won't :-)
Aha, yes that would of course work. I thought @CrossEye wanted the old style commonjs require to work as before. As in const map = require('ramda/src/map'). To that end my PR creates a cjs folder with commonjs format files for each es6 module format file in the src folder.
@jonaskello so you just need to move the existing src folder to es6 or something like that, then run babel es6 --out-dir src. Then do things inside es6 using rollup or whatever :-) Paths would be preserved as they are now, I think; that's the concern.
Exactly, so what my PR does is this: babel src --out-dir cjs --copy-files. Then someone wanting to use the old style cjs require could do this: const map = require('ramda/cjs/map').
EDIT: So actually I do what you said but in reverse. I update the code in the original src folder to use ES6 modules and create a new folder for cjs, rather than creating a separate one for es6. Of course it would be less work to keep the code-base as-is and generate an es6 folder, but in the long run I think it would be better to actually update the code-base to ES6 modules, since they are the future.
@jonaskello People already use const map = require('ramda/src/map'), but you want them to rewrite it to const map = require('ramda/cjs/map') in the future; that's not good :-). Everything else is ok I think
Agreed, breaking compatibility is not good. OTOH if you look at @CrossEye's comment above:
, soon to be replaced with just:
const map = require('ramda/map');
So I think the idea is to remove directly importing from ramda/src/* anyway. Could it be done in a way so we don't need to have the cjs files directly under ramda?
@CrossEye Could you explain how you want to transition users to short paths?
My thought -- before we started talking about changing module formats -- was that in the preparations for an NPM publish we would simply copy the relevant information from our main folders to a temporary publish folder. The src folder would copy directly to the root, though. Then we'd publish from there and delete this temporary folder. For a transitional period we might also publish the src folder as well, but that's a minor concern.
Makes sense, and that approach would work with the PR too; only instead the cjs folder would be generated and then its content copied to the root of the publish folder.
Hi guys,
Just to be sure: tree shaking is not possible with CommonJS modules, since it relies on static analysis, which only ES6 modules provide (by the spec). Only dead code elimination can be applied to CommonJS modules, and it's far from ideal.
A good example on how to set this up with a minimal effort is [d3].
On a side note, if we only want to import required parts, currently we have to do:
import map from 'ramda/src/map'
import forEach from 'ramda/src/forEach'
If es6 modules are supported this will correctly leverage tree shaking (which it does not currently):
import { map, forEach } from 'ramda'
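The static-analysis point above can be sketched in isolation. This is a hypothetical illustration (the module paths are only examples) of why CommonJS requires defeat up-front dependency analysis while ES module imports do not:

```javascript
// CommonJS: the specifier can be any runtime expression, so a bundler
// cannot know the full dependency graph without executing the code.
var name = Math.random() > 0.5 ? 'map' : 'filter';
// var fn = require('ramda/src/' + name); // legal CJS, statically undecidable

// ES modules only allow static string specifiers at the top level, e.g.:
//   import map from 'ramda/src/map'
// so the dependency graph is fully known before any code runs, which is
// what makes tree-shaking possible at all.
```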
fyi - since ramda uses currying everywhere, rollup has a hard time treeshaking efficiently.
Here's a small example using module'ified ramda code that shows F.js will be bundled whether or not it's imported.
The fool-proof bundling approach seems to be how babel-plugin-ramda goes about it e.g.
// this
import { add, map } from 'ramda';
map(add(1), [1, 2, 3]);
// into
import add from 'ramda/src/add';
import map from 'ramda/src/map';
map(add(1), [1, 2, 3]);
Note this approach is not limited to being implemented as a babel plugin.
@olsonpm for me, when I remove export { default as F } from './F'; from ramda.modules.js, it does not get bundled.
I think you're misunderstanding the example. ./F.js should be exported. The problem is that because it calls a curried function, rollup detects it as possibly causing a side-effect and thus doesn't tree-shake it off.
The whole point is to show that tree-shaking sounds great in theory, but is not sufficient in practice which is why Rich Harris is trying to find a better way to detect unused code.
Here's a simpler example to show the same concept using non-ramda code
And finally here's that same example without the factory, showing the tree-shaking working as expected
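The curried-call problem can also be shown without any Ramda code. Below, `curry2` is a simplified stand-in for Ramda's internal `_curry2`, not the real implementation:

```javascript
// Simplified 2-arity curry, standing in for Ramda's _curry2.
function curry2(fn) {
  return function (a, b) {
    return arguments.length < 2
      ? function (b2) { return fn(a, b2); }
      : fn(a, b);
  };
}

// Top-level CALL: runs at module-evaluation time. Rollup cannot prove
// curry2(...) is side-effect free, so `add` survives tree-shaking even
// when nothing imports it.
var add = curry2(function (a, b) { return a + b; });

// Plain DECLARATION: declaring a function has no observable effect, so
// an unused `subtract` is trivially removable.
function subtract(a, b) { return a - b; }
```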
right! yes, it makes sense (sorry I didn't catch it the first time, it's already late here). Luckily, ramda is super structured so we would have options.
Not a problem - I'm not the greatest at explaining things in general -.-
And yeah if you want to see the bundle sizes in practice, you can clone my ramda branch "es6-modules" to see what a single
import { add } from 'ramda'
does after rolling it up. It's not pretty.
Since tree shaking is not possible atm because of currying, the only hope would be if annotations landed: https://github.com/rollup/rollup/issues/1293 https://github.com/mishoo/UglifyJS2/pull/1448
Annotations would allow it, yes. Personally I don't think it's a great cost/benefit considering a plugin can achieve the same result.
I was happy to read jdalton's comment
Btw, I'm taking Lodash in a different direction for v5 (just started working on it). In v5, the traditional lodash package won't ship with a monolithic build. Instead it'll enforce the cherry-picked form, require('lodash/forOwn'), as the default
Seems like the most reasonable route given the current tooling available.
I made a quick script hack that converts Ramda to support Rollup and UglifyJS2 __PURE__ annotations: ramda-rollup-hack. Works fine in my test project and reduces bundle size (roughly half of Ramda is dead code eliminated after running the script). You could also adapt the script to Ramda's build to publish Ramda with Rollup support. Note that the __PURE__ annotations help UglifyJS2 to do dead code elimination.
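For a feel of what the annotated code looks like, here is a sketch (not the exact output of the hack, and `_curry2` below is a simplified stand-in for Ramda's internal helper):

```javascript
// Simplified stand-in for Ramda's internal _curry2.
function _curry2(fn) {
  return function (a, b) {
    return arguments.length < 2
      ? function (b2) { return fn(a, b2); }
      : fn(a, b);
  };
}

// The annotation asserts that this call is side-effect free, so UglifyJS2
// may drop the whole expression if `add` turns out to be unused.
var add = /*#__PURE__*/ _curry2(function (a, b) { return a + b; });
```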
Thank you very much.
I look forward to investigating this. Hopefully this evening.
I still find it unfortunate that future javascript code may be littered with comment annotations - but if it serves a good purpose then I guess I'm okay with it haha.
Thanks much for putting that together.
@olsonpm Yes, I totally agree that having to use such annotations is unfortunate. However, I find that the combination of Rollup and UglifyJS2 with /*#__PURE__*/ annotations allows library authors (like myself) to author "rich" libraries that are easy (well, as easy as using Rollup) to consume without having to pay for unused features. At least for me it would be even more unfortunate to have to continue to be extremely conservative about adding useful helpers to libraries.
Yeah it's easier, but the alternative route lodash went is only slightly less convenient.
I haven't looked in detail at what lodash does, but the way Rollup eliminates the module overhead completely is really important. It doesn't just help eliminate unused stuff; it also eliminates indirections (modules disappear), and then tools like UglifyJS can do a better job. If lodash just allows you to separately import CommonJS modules, it will likely lead to bigger bundles and worse performance due to the additional indirections.
Note that the hack is also available from NPM: see here.
lodash's method is in coordination with a bundler, so I think you're misunderstanding. Not a big deal though - this annotation is certainly providing sought-after functionality.
@CrossEye I'm wondering if you've gotten a chance to work on this; or if progress is made somewhere publicly (I dont really see a PR or a branch).
@dashed I did a small POC in #1977 but I don't know of any other work going on.
So the whole point of the tree shaking is to be able to do something like
import { add } from 'ramda';
and for a tree-shaker to eliminate all the other Ramda utilities. I haven't played around with the __PURE__ annotations yet, but it's possible even those don't meet the above goal.
Until the above goal is met, I don't see any benefit to es modules.
@dashed: I'm afraid I've done nothing. A PR would be welcome.
So I finally took the time to learn more about this /* #__PURE__ */ annotation and have a working ramda repo with all the necessary annotations.
If you want a feel for how it differs from just ES modules without the annotations, here's my es-module branch
And finally an example repo reaping the benefits of the annotations. Note it takes multiple uglifyjs compression passes to achieve the fully minified results.
I still don't think it's as nice a solution as the plugin - but feel free to fork and tinker to your liking. It works as expected.
Just trying to clarify if my understanding is correct.
There are at least two issues related to creating a minimal app bundle, i.e., one that only includes code that is actually used by the app.
1. Leverage ES2015 modules (_i.e._ import/export) that, through static analysis, enable a bundler to include only those dependencies that are actually used (aka tree-shaking). This is what plugins such as rollup-plugin-lambda and https://github.com/megawac/babel-plugin-ramda help achieve (it's worth mentioning that this is possible because the Ramda code base is so beautifully, completely modular).
2. Due to the very dynamic nature of JavaScript, tree-shaking ES2015 modules may still include large swathes of unused code consisting of function calls that can't be proven to be side-effect free. When using Ramda, we typically create lots of small functions by calling other functions. So we need a way to specify when a function call is side-effect free. This is where the uglifyjs2 /*#__PURE__*/ annotations come into play.
@olsonpm if I understand correctly, the lodash approach you mentioned above only seems to address point 1.
OTOH @polytypic's very cool hack above seems to address both point 1. and point 2. in the way that's completely transparent / compatible with the current release of Ramda.
Am I missing something?
Has anyone considered prepack in addition to tools like webpack and uglify2 (or babili)?
@dashed - To me that library looks like it will hit a point of complexity just like tree-shaking did. Time will tell though.
@teehemkay
Take for instance my es'ified ramda [index.js file](https://github.com/olsonpm/ramda/blob/es6-modules-with-pure/index.js). Its job is to re-export all the ramda modules (lodash-es has [the same concept](https://github.com/lodash/lodash/blob/es/lodash.js)). This means any imports that are supposed to be tree shaken e.g. `import { add } from 'ramda'` now have to dig through all the code in index.js and determine whether any of it produces side effects. Because tree shaking is hard, a ton of code is included. So the transformer plugins say "that obviously doesn't work, so let's forego analysing 'index.js' and just modify `import { add } from 'ramda'` to `import add from 'ramda/src/add'`." Now rollup only analyses the add module which gives us what we want.
The /*#__PURE__*/ annotation allows for an alternative to the transformer plugins. It says "Bring on the import { add } from 'ramda'. We can just declare which function calls are pure; that way any unused variables can be safely removed" (remember imports are turned into variables via rollup).

Hopefully that clears some things up - though honestly the best way to understand would be to play around with some code.
@olsonpm Thanks for the clarification.
Uglifyjs2's /* #__PURE__ */ annotation allows for an alternative to the transformer plugins
I came to this thread from rollup/rollup#1354, which led me to conflate two different problems, one of which has nothing to do with the current thread. I'm sorry for the added confusion.
The two problems I conflated are:
1. Eliminating the unused parts of Ramda itself (the topic of this thread).
2. A shared module common.js that both module a.js and module b.js depend on, but for different bits of common.js. Without the uglify /*#__PURE__*/ annotation, all the functions/variables exported by common.js that were created by calling Ramda's side-effect-free functions will be included in the bundles for both a.js and b.js, even if they are not actually used there.

Point 1 can be solved either by the plugins or by the uglify annotation. The plugin option seems cleaner (and probably faster). But it only works because -- as already mentioned -- the Ramda code base is beautifully modular. And the plugins are only concerned with minimizing the Ramda code.
But the annotation also works for any code (even code that is not as modular as Ramda). And it is particularly handy for the typical, side-effect free code that Ramda allows us to write.
@olsonpm sorry for the noise; I wanted to suggest it as a potential alternative to the /*#__PURE__*/ annotation. Practically, prepack still has some rough edges and probably won't be "safe" to use for a year or so (unless you like to live on the bleeding edge).
@teehemkay
But the annotation also works for any code (even code that is not as modular as Ramda)
Kinda. The problem is that you will either
I'm biased toward 1. I can only see 2 being useful in existing code bases where refactoring to be more modular is difficult and not worth the effort. If you can think of a scenario where annotations are useful otherwise I'd be interested in hearing. Also - maybe we can continue this on your original issue.
@teehemkay @olsonpm The Babel plugin approach only works if you basically write one function (correction: export) per module. That is not how I want to structure my libraries and applications. I'm also not particularly fond of the idea that libraries would come with an associated Babel plugin. YMMV.
Note that a Babel plugin that transforms import {foo} from 'bar' into import foo from 'bar/src/foo' is not generally safe: import 'bar' may have side-effects. So, basically, a Babel plugin that performs such a transform essentially does the same kind of hack (assumes no side-effects and performs unsafe optimization) as the /*#__PURE__*/ annotations are doing.
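The hazard described above can be sketched by simulating the two module files as plain functions (hypothetical names, for illustration only):

```javascript
var log = [];

// Simulates evaluating bar/index.js, which has a top-level side effect.
function evaluateBarIndex() {
  log.push('bar initialized'); // side effect that runs on import
  return { foo: function () { return 42; } };
}

// Simulates evaluating bar/src/foo.js, which has no side effect.
function evaluateBarFoo() {
  return { default: function () { return 42; } };
}

// `import { foo } from 'bar'` evaluates index.js, so the side effect runs:
var foo1 = evaluateBarIndex().foo;

// The plugin-rewritten `import foo from 'bar/src/foo'` never touches
// index.js, so the side effect silently disappears -- observable
// behaviour has changed, which is why the transform is unsafe in general.
var foo2 = evaluateBarFoo().default;
```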
(Speaking of structuring code. For historical reasons most of my JS libraries are currently single file, which I don't consider as ideal either. In the future I may structure libs as multiple modules because a bundler like Rollup can eliminate the module overhead conveniently.)
Also, splitting code into multiple modules so that there is just one function per module does not make the code fundamentally more modular. It just means that you have a lot more source files to manage and most tools will be less effective on your project. For example, it becomes a bit more difficult to see which definitions are unused (with a single file, a simple tool like eslint can tell you immediately when a top-level function is unused).
@polytypic
The Babel plugin approach only works if you basically write one function per module
My understanding is the approach works if all the code associated with a module is included in that module. That's hardly limited to one function.
@olsonpm If you have a module that exports more than one thing and you might only use a subset of those things then plugin approach will not eliminate the unused things.
At any rate, my only interest is to have a future Ramda that can be properly dead code eliminated. The ramda-rollup-hack I made works for me now, but I do consider it as a temporary hack. I don't particularly care whether Ramda uses the one export per module approach with a Babel plugin or something else. Though... I don't particularly like the idea that in the future every library would come with a Babel plugin (shudder). The Rollup + /*#__PURE__*/ works well enough for me. (It would be nice if one could conveniently mark functions as pure rather than have to mark function applications as pure. It might even allow for some tools to give more useful warnings.)
@polytypic - you're suggesting a tree-shaker that has not been created yet and probably never will due to complexity. This is the whole reason the pure annotation exists, to allow developers a manual means of declaring side effect free code.
To give an example, let's say a module math exposes two functions 'add' and 'subtract'. Let's say the 'math' module is exposed by ramda.
// math.js
export function add(left, right) { return left + right; }
export function subtract(left, right) { return left - right; }
// ramda.js
export * as math from './math';
As a consumer I'm expecting a magical tree shaker to turn this
import { math } from 'ramda';
console.log(math.add(1, 2));
into this
function add(left, right) { return left + right; }
console.log(add(1, 2));
The problem is that approach doesn't work in practical code bases as proven by the rollup and webpack2 issues related to tree shaking.
It would be nice if one could conveniently mark functions as pure rather than have to mark function applications as pure
This can't be done without being able to track function references. Uglifyjs2 has an option pure_funcs which allows you to mark certain function names as pure. I can't think of a great use-case for it, but I'm sure there is one given it exists.
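For illustration, here is a hedged sketch of how `pure_funcs` might be passed to UglifyJS (UglifyJS 3-style API; the function names in the list are assumptions, not a recommended configuration):

```javascript
// Hypothetical build step: minify with certain calls declared pure.
var UglifyJS = require('uglify-js');

var code = "var add = _curry2(function (a, b) { return a + b; });";

var result = UglifyJS.minify(code, {
  compress: {
    passes: 3,                                    // extra passes catch code exposed by earlier ones
    pure_funcs: ['_curry1', '_curry2', '_curry3'] // calls to these names are treated as side-effect free
  }
});
// result.code holds the minified source; with pure_funcs set, unused
// results of those calls become candidates for elimination.
```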
@olsonpm This can't be done without being able to track function references.
Yes and no. First "no": It would already be useful to have tools be able to positively determine where an export like
export const curry = fn => ...
is used and I believe that is one of the intentions behind the new module system, because a const export like that cannot be mutated (AFAIU — I haven't read the JS spec in detail). Marking curry as "pure" could then allow such a tool to further determine that a definition of add like
import { curry } from 'curry'
export const add = curry(...)
can be safely eliminated as long as add is not referenced anywhere. This kind of "tracking" can be done quite easily given the guarantees that JS modules now give you (again AFAIU as I still haven't read the spec in detail and I don't plan to do so in the near future).
Now "yes": Being able to further effectively track aliases of curry (in order to more effectively apply dead code elimination and other optimizations) requires more sophisticated algorithms, but even those are well established techniques in optimizing compilers.
(BTW, I have actually written a couple of compilers, including a simple optimizing compiler (based on CPS transform and optimizations in CPS form) for a simple ML-style programming language.)
@polytypic Just FYI, even a mutable variable like export let curry = fn => ... can't be mutated by other modules, it can only be mutated in the module where it is defined.
So it's possible for a tool to detect when a variable isn't mutated and treat it as const
Babel automatically does this analysis, so it's very easy to detect whether a variable is mutated or not.
@polytypic - you keep mentioning very contrived examples which of course work. The problem is library authors are having a very difficult time making these methods work with practical codebases that developers want the functionality for.
@olsonpm I'm not sure what you mean. I'm both a developer and a library author myself and the Rollup + /*#__PURE__*/ annotation would have worked nicely in the previous production project I worked on. That project was originally bundled with Browserify, but I later made a PR to convert it to Rollup. It reduced bundle size significantly and also had the desired dead code elimination properties. Unfortunately for that project it was too late to make that switch into production. That project has been handed over and they don't want to make such changes at this point (the bundle size is not really a problem in that project).
Note that perfect dead code elimination is undecidable (it is easy to reduce to the halting problem). This means that any approach to dead code elimination is going to be an approximation and will miss some cases. That doesn't make simple and fairly easily understood methods I've outlined above useless.
I also disagree that my examples would be contrived. The curried add example is pretty much precisely what the Ramda codebase is doing everywhere. Having an effective solution for that would be very useful.
At any rate, I don't see this discussion going anywhere. I believe I have said what I wanted to say: Rollup + /*#__PURE__*/ is an effective approach. I'm using it in my libraries and have tried it with a production project where it also proved effective. I also don't particularly care how Ramda will ultimately allow for proper dead code elimination, but I hope that it will soon do so. If you must attempt to prove me incorrect, please do so without notifying me.
I think it's worth mentioning that webpack is working on handling pure-module in package.json.
That still won't do right by everyone, as it's webpack-specific for now; other bundlers will probably adopt it over time though.
@kedashoe
I understand this is not a priority for you (ramda) right now as you are focused on pushing v1 forward. It shouldn't be too hard to get an outside contribution for this issue, as it's kind of a hot topic in the community; I can offer such help too ;) .
I think the only thing blocking this is establishing what should be done. I've lately investigated and gotten to know this topic quite well while working on various libs, and IMHO the best that can be done to satisfy everyone without much maintenance burden is:
- keep the source in ES module format and point `module` in package.json at it
- build a CommonJS version (pointed at by `main` in package.json) as a prepublish step and publish both directories to npm
- keep cherry-pickable paths like ramda/map - this could still be used by many, with a babel plugin to ease cherry-picking
- annotate internal calls (like curry in ramda) with the /*#__PURE__*/ magic comment
- provide a UMD build (dist/ramda.min.js) with rollup; its use of scope hoisting and all makes it the best choice

I think @std/esm may be relevant in this discussion too. Seems lodash is going that way. I haven't had any time to try it myself yet though.
Hm, don't think it's that relevant in this discussion. From what I understand it somehow acts as an alternative module loader in node; it will certainly allow people to consume ES modules, but let's face it - we (the community) need to support both styles for the foreseeable future, and it doesn't really matter HOW ES modules get loaded (i.e. by @std/esm).
What matters is that we have so many different tools in the ecosystem and they won't be unified any time soon, and for the best developer experience we should do our best to support both styles at the moment - because we can't expect everyone to just switch to some tool like @std/esm - also, it's a Node-specific solution.
Nevertheless @std/esm is great! :)
I agree with the above approach if something is to be changed today. Have ESM as base and then build a CJS version and publish both to npm, indicating in package.json the index.js of each. Possibly add a UMD package. This is what I currently do.
However I think things will change, and I'm just thinking out loud here, so please feel free to correct any false assumptions. The two targets you can build for are the browser or node.
If you build for the browser you cannot use the CJS version unless you are using a bundler like webpack. However most (all?) bundlers support ESM natively today, so you can just use that instead. So CJS is not needed for the browser target. (A global build may be needed; not sure if anyone uses that anymore?)
If you build for node, then you can use the CJS version directly. However, going forward, with std/esm you can use the ESM version directly instead. So in the near future a CJS version is not really needed here either.
So I guess what I'm thinking is that the audience for a CJS version is becoming smaller for each month and the importance of having an ESM version is becoming greater.
Also I'm under the impression that the library author could have only an ESM version, require std/esm in the library, and it would be transparent to the consumer (node/webpack) of the library that there is no CJS version but maybe that is not true.
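For what it's worth, the bridging pattern documented for @std/esm at the time looked roughly like this; treat the exact details as assumptions rather than a verified recipe:

```javascript
// index.js -- the CommonJS entry point published to npm.
// Enable the @std/esm loader for this module, then re-export the
// ES module source so CJS consumers see no difference.
require = require('@std/esm')(module);
module.exports = require('./main.mjs').default;
```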
@jdalton posted an article on planning to use some new @std/esm thing to redo ES builds on Lodash. I haven't found a related thread over at Lodash on this, but it sounds like he knows what he's doing there. Maybe also relevant for @Andarist's current attempt to tackle this at #2254.
Yes, @std/esm would be really nice 👍 . That way we could get both ESM and CJS support without using rollup or babel. However I'm not sure @std/esm is ready for production just yet. Also if global build is needed then something like rollup is still needed. Having the original modules in ESM format is a requirement for using @std/esm. So I think the work at #2254 is useful as a first step that can be released today, and then later it can be simplified once @std/esm is ready for production.
So it seems that we have many people interested in this here, following this thread, if you want to help out with testing you can check this out:
npm install ramda@es-rc
@Andarist how do you import individual functions?
import {prop} from 'ramda'
// SyntaxError: Module '/try-ramda-es/node_modules/ramda/src/index.js'
// does not provide an export named 'prop'
when going through @std/esm or rollup
@bahmutov I'm not getting that error using rollup (haven't tried any @std/esm stuff), however it is not actually tree shaking for me. This path: /try-ramda-es/node_modules/ramda/src/index.js seems incorrect, it should be looking in es not src I believe. Did you check node_modules to make sure you've got the correct version of ramda?
If anyone has got it working could they post a sample rollup.config.js, package.json and input file?
@kedashoe
you didn't get a tree-shaken output bundle because #__PURE__ comments are uglifyjs-specific.
https://github.com/rollup/rollup/issues/1354#issuecomment-301292353
https://github.com/rollup/rollup/issues/1293
I've created a simple example bundle:
import { add } from 'ramda'
export var getTwo = function () {
return add(1, 1);
}
I then ran it through rollup-plugin-uglify. Results are way better - many things are getting tree-shaken, but still some don't. I've investigated the whole output and written down a 'dependency graph'. In general what I've noticed is that none of these could be tree-shaken: _Set, XAll, XAny, XAperture, XFilter, XMap, XWrap, XReduceBy, XTake, XTakeWhile, XTap, XDrop, XDropLast, XDropLastWhile, XDropRepeatsWith, XDropWhile, XFind, XFindIndex, XFindLast, XFindLastIndex
Because they use a constructor pattern, for example:
function XAny(f, xf) {
this.xf = xf;
this.f = f;
this.any = false;
}
XAny.prototype['@@transducer/init'] = _xfBase.init;
XAny.prototype['@@transducer/result'] = function (result) {
if (!this.any) {
result = this.xf['@@transducer/step'](result, false);
}
return this.xf['@@transducer/result'](result);
};
XAny.prototype['@@transducer/step'] = function (result, input) {
if (this.f(input)) {
this.any = true;
result = _reduced(this.xf['@@transducer/step'](result, true));
}
return result;
};
I believe that those are not tree-shakeable right now because of the assignments to the prototype - the same thing happens to es6 classes transpiled by babel. Those are somewhat easier to tree-shake, as babel-generated classes are contained in IIFEs, but the IIFE is not marked as pure at the moment. Related - https://github.com/babel/babel/issues/5632
Those mentioned exports of course cause other things to be left in the bundle, since they reference them, i.e. this cascade:
_Set uses the internal hasOrAdd, which references _contains, which in turn references _indexOf.
We can fix this by wrapping those usages of prototype in IIFEs and marking them as pure. I've confirmed that when I wrapped them quickly like this, my bundle got reduced a LOT.
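To make the wrapping concrete, here is a sketch of that quick fix (simplified: only one prototype method is shown, and the `_xfBase`/`_reduced` helpers are omitted):

```javascript
// Wrapping the constructor plus its prototype assignments in an IIFE and
// marking the call pure lets the minifier drop the whole definition when
// XAny is unused, because nothing "escapes" the annotated expression.
var XAny = /*#__PURE__*/ (function () {
  function XAny(f, xf) {
    this.xf = xf;
    this.f = f;
    this.any = false;
  }
  XAny.prototype['@@transducer/result'] = function (result) {
    return this.xf['@@transducer/result'](result);
  };
  return XAny;
})();
```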
This is my final bundle:
!function(global, factory) {
"object" == typeof exports && "undefined" != typeof module ? factory(exports) : "function" == typeof define && define.amd ? define([ "exports" ], factory) : factory(global.MyBundle = {});
}(this, function(exports) {
function _isPlaceholder(a) {
return null != a && "object" == typeof a && !0 === a["@@functional/placeholder"];
}
function _curry1(fn) {
return function f1(a) {
return 0 === arguments.length || _isPlaceholder(a) ? f1 : fn.apply(this, arguments);
};
}
function _curry2(fn) {
return function f2(a, b) {
switch (arguments.length) {
case 0:
return f2;
case 1:
return _isPlaceholder(a) ? f2 : _curry1(function(_b) {
return fn(a, _b);
});
default:
return _isPlaceholder(a) && _isPlaceholder(b) ? f2 : _isPlaceholder(a) ? _curry1(function(_a) {
return fn(_a, b);
}) : _isPlaceholder(b) ? _curry1(function(_b) {
return fn(a, _b);
}) : fn(a, b);
}
};
}
var add = _curry2(function(a, b) {
return +a + +b;
}), pad = ("undefined" != typeof Symbol && Symbol, function(n) {
return (10 > n ? "0" : "") + n;
});
Date;
exports.getTwo = function() {
return add(1, 1);
}, Object.defineProperty(exports, "__esModule", {
value: !0
});
});
The only weird things left in the final bundle which are not supposed to be there:
var pad, Date, constructor. They seem to be leftovers of _toISOString, but I don't know why they remain in the bundle like this. It is a minor thing though.
It is my understanding, from following @polytypic , that a second pass through uglify-js can yield additional tree-shaking.
@kurtmilam I believe so too. I would like to understand the logic behind DCE a lot better though. Somehow I feel a little bit anxious when applying tools I do not understand to my projects, fearing subtle bugs or something ;)
New release candidate published, same command to install:
npm install ramda@es-rc
@olsonpm Your links to Rollup examples don't seem to be working anymore?
I'm very interested in understanding why currying precludes tree-shaking in Ramda. Seems to me a clear case of having a good dependency tree?
It's not that currying precludes it - it's every function call at the top level of the file that does. The minification process cannot guarantee that a function call produces no side effects, so it cannot be sure it is safe to remove without changing the code's behaviour. I hope my other comment explains it in more detail. If you have any other questions - please ask.
You can try to install ramda@es-rc and report back if it tree shakes (when using rollup + uglifyjs or webpack2+ + uglifyjs) correctly now.
I think I understand, but I'll look more into it. So, if there were only a way of currying without calling a function.... I guess currying would simply have to be applied inside each of the exported functions, wouldn't it?
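The idea could look something like this (a hypothetical sketch, not Ramda's actual source): instead of producing the export with a top-level call such as `module.exports = _curry2(fn)`, the curry logic runs inside the function body on each invocation, leaving the module body free of top-level calls.

```javascript
// Hypothetical helper, modelled on Ramda's placeholder convention.
function _isPlaceholder(a) {
  return a != null && typeof a === 'object' &&
    a['@@functional/placeholder'] === true;
}

// In the real module this would be the (default) export. Note there is no
// call at module load time: currying is decided when `add` is invoked.
function add(a, b) {
  switch (arguments.length) {
    case 0:
      return add;
    case 1:
      return _isPlaceholder(a)
        ? add
        : function (_b) { return Number(a) + Number(_b); };
    default:
      return Number(a) + Number(b);
  }
}
```

The trade-off is a small per-call branch instead of a one-time transformation cost at load time.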
I can't get it to tree shake as of today, but I also think the /*#__PURE__*/ hack isn't worth it. Compiler comments are a slippery path.
I can't get it to tree shake as of today
what do u mean?
I simply mean that whichever way I include a Ramda module (except with 'ramda/src/<module>'), the bundle size stays the same (I believe ~30 kB).
Have you installed ramda@es-rc instead of the latest from npm?
Could u share ur repository?
@drexkdal - the root issue is
@Andarist - by wrapping all top level functions in a pure directive, you're not tree-shaking anything. you could produce the same result using es5 imports.
@Andarist Yes, I have. Sorry, big closed source project :/
@olsonpm I understand that tree shaking is hard, I just think that a library of small utility functions should be the ideal case for it.
I'd like to hear your comments in regard to applying currying as a helper inside each exported top-level method rather than transforming the method by passing it to a separate function at module load time.
by wrapping all top level functions in a pure directive, you're not tree-shaking anything.
It's not tree-shaking the library itself, but rather allowing tree shaking for consumers of the library.
you could produce the same result using es5 imports.
Ain't sure I follow. Did you mean cherry-picking paths instead of accessing exports from R itself?
Sorry, big closed source project :/
Could u try to reproduce your issue on separate repository which you could share with us?
Andarist - No, I mean you could use es5 imports, run browserify to bundle the app (which doesn't tree shake) then run uglifyjs for DCE, and the pure directives will allow all the same code to be removed. You would end up with the exact same small bundle - I could show you an example if that would help.
@Andarist I'd like to help, but I just can't find the time for something like that. I'd also like to understand your strategy for enabling tree shaking better first ;)
applying currying as a helper inside each exported top-level method rather that transforming the method via passing it to a separate function at module load time.
Not sure what you mean here. A code example would help me understand.
I have a feeling that on a general level, all calls that are blocking tree shaking could be somehow "inverted" to support it. Just one of those things.... Feels right from a pseudo-computation theoretical perspective.
Andarist - No, I mean you could use es5 imports, run browserify to bundle the app (which doesn't tree shake) then run uglifyjs for DCE, and the pure directives will allow all the same code to be removed. You would end up with the exact same small bundle - I could show you an example if that would help.
That would certainly help!
i'll tag you when it's ready.
@olsonpm I still don't get it. It doesn't seem fair to say "they already tried that"; the way I read his post, there are specifics that get in the way of a much more general pattern, which otherwise could be successful.
From what I gather, tree shaking fails if a module executes a function on load time, correct?
It appears to me that function execution could, in theory, be eliminated by only exporting static objects or functions; the application of higher order functions could be inverted by applying them inside the lower order function which can then be exported safely.
In practice, though, wouldn't it be enough to simply put the higher order functions in their own leaf nodes of the dependency tree and make sure that the tree is kept proper?
From what I gather, tree shaking fails if a module executes a function on load time, correct?
Among many other things - yes. Assignments prevent tree shaking too, i.e.:
function MyClass() {}
MyClass.prototype.myMethod = function(){}; // this prevents tree shaking
It appears to me that function execution could, in theory, be eliminated by only exporting static objects or functions
I believe exporting objects is also preventing tree shaking in some sense:
export var obj = {
used() {},
unused() {},
}
I haven't checked it, but I'm pretty confident that unused won't get tree-shaken.
the application of higher order functions could be inverted by applying them inside the lower order function which can then be exported safely.
Possibly, however in my opinion such a transformation should still be performed by babel. Code authors shouldn't have to worry about restructuring their source so that other tools can process it better (for example, by making it tree-shakeable) - it's a superior solution imho to create a code-transformation plugin, which could be applied to everybody's code with confidence to provide the mentioned benefits. And while this would require a babel plugin to be written, I don't believe it's that much superior to a babel plugin which adds #__PURE__ annotations. However, I do see a gain from such a solution in reduced startup time.
In practice, though, wouldn't it be enough to simply put the higher order functions in their own leaf nodes of the dependency tree and make sure that the tree is kept proper?
Could u elaborate? I'm not sure I know what you mean. An example would be great.
Assignments prevent tree shaking (...) I believe exporting objects is also preventing tree shaking in some sense
Both examples involve properties/methods, which we already know will never be tree shaken, simply because they might be accessed programmatically at runtime by string lookup (Closure Compiler is supposedly able to remove them through much more advanced static analysis).
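The runtime string lookup in question can be shown with a tiny (illustrative) example:

```javascript
// Why property removal is unsafe without whole-program analysis: the
// "unused" member may be reached through a key computed at runtime.
var obj = {
  used: function () { return 1; },
  unused: function () { return 2; }
};

var key = 'un' + 'used';   // no static reference to `unused` appears anywhere
var result = obj[key]();   // ...yet it is called here
```

A bundler that deleted `unused` based on static references alone would break this program, which is why properties survive tree shaking.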
Sure, it would be great to be able to do automatic transformation of code to allow tree shaking, but it seems clear that this would either be a highly nontrivial problem or depend on certain assumptions that might fail in edge cases or need annotations.
Could u elaborate?
About the dependency tree: Modules are locally scoped, so if each and every function had its own module and the dependency graph was kept as a proper tree, my thought would be that unused code could be safely removed through static analysis, which is basically what tree shaking means, right?
There must be something I'm not getting here...
@Andarist My last comment about the dependency tree is probably related to your report here: https://github.com/webpack/webpack/issues/5654
so my point was that you have rich and jdalton trying to solve a problem that you're perceiving as fairly simple: "couldn't we just change the code to be tree-shakeable?". my point was that rich concluded it was not simple at all. i mean, if lodash could have done it, it would be done by now. you not only have rollup, but now webpack that tree shakes as part of its bundling process. both libraries have many issues regarding tree shaking and both resort to wontfix resolutions because it is either not possible statically or requires logic too complex to warrant adding to the codebase.
so i think it's fair to say they already tried that, and if you can solve this problem then your contribution would be extremely valuable.
I'm reaching the realization that this issue is not about Ramda being an inherently hard case for tree shaking, but about wanting to achieve it without touching the structure of the code base at all.
I won't say that this approach is lazy, since Ramda isn't the smallest of libraries, but I would say that a rewrite definitely ought to be preferred over compiler comments targeting a specific bundler....
the comments target a minifier, not a bundler. and it is a very new addition to uglifyjs.
Also, such comments are not really tool-specific - any tool could leverage them, but as it's UglifyJS's invention, it's leveraged only by them (for now, at least). It indeed feels a little sad that we have to provide hints to tools to reach certain outcomes, but at least for me, I prefer such a solution to restructuring the code in an unnatural way. That's ofc just my personal opinion on this.
@Andarist - had the time tonight to look into my es5 comment. I should have realized that although the pure comments are unrelated to tree shaking, they are dependent on scope hoisting. In producing a minimal example to show that, I ran into a bug in rollup when turning tree shaking off. I also couldn't find a way to turn tree shaking off with webpack.
The reason it relies on scope hoisting is that uglifyjs can't remove unused properties from an object as that would be a very unsafe transformation - and module.exports is always an object. However with hoisted scopes, everything is top level which uglifyjs can then remove via the toplevel minify option.
Is there a reason to have browserify in package.json when rollup is used?
Possibly not, could get removed.
I started using Ramda and I stumbled on a problem. After installing through npm, it works fine in the REPL:
R = require('ramda')
But it fails in the source code:
import R from 'ramda';
with the following error:
export 'default' (imported as 'R') was not found in 'ramda'
My workaround is to use the old require() syntax in the source files too, just for Ramda. Am I missing something? Is this a known bug?
@tobia, try import * as R from 'ramda';.
What are you using to run/compile the import syntax? I'm using node --experimental-modules *.mjs in Node 8.7.0 and it works. Maybe you can get that same syntax working by tweaking your loader.
@davidchambers Thank you, import * as R from 'ramda'; works splendidly. It probably does the same thing as require(), but at the very least it's more consistent with all the other imports.
I'm still a relative newbie to the modern JS world, so I wouldn't have figured that out!
@nickmccurdy I just used create-react-app and worked my way from there, so I'm not sure what node parameters I'm using! Ramda is the latest (and hopefully last) chapter of a long search of mine for a good JS functional library.
@tobia you can inspect the output of your import syntax here and ramda's entry point after transpilation here (ofc ramda's has more functions exported, but that's not the point) to get the gist of what's going on behind the scenes.
When using bare require, the interop step is omitted, but import R from 'ramda' expects to import what's under the default property of the imported exports - and no such property exists.
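A minimal mock of that interop difference (the exports object below is a stand-in, not Ramda's real one):

```javascript
// Stand-in for ramda's CommonJS exports object: named exports only,
// no `default` property.
var ramdaExports = {
  map: function (f, xs) { return xs.map(f); },
  identity: function (x) { return x; }
};

// `import * as R from 'ramda'` binds the whole namespace object:
var R = ramdaExports;
var ok = typeof R.map;                // 'function' - this works

// `import R from 'ramda'` reads the `default` binding, which does not
// exist - hence "export 'default' (imported as 'R') was not found":
var Rdefault = ramdaExports.default;  // undefined
```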
Is using the destructuring import { filter, keys } from 'ramda' syntax working with tree shaking atm? or do we have to import each function individually to reduce dead code in our bundles?
@marcosfede the library itself (ramda) is well suited for tree shaking now. However, it all depends on the tree-shaking algorithm used, and those differ between tools.
You will get the best results with rollup; webpack@4 in production mode should give you acceptable (maybe even ideal nowadays) results; you can get good results with ModuleConcatenationPlugin in webpack@3, but without it you will get poor results.
You can read more here
Does tree shaking work with import * as R from 'ramda' or import R from 'ramda'?
import * as R from 'ramda' is the appropriate form, ramda has no default export - it's a namespace
But does it have tree shaking though?
@dolanmiu not in webpack 3 (haven't tried v4)
Tree shaking is not something a library has - it's an algorithm applied by other tools when bundling. See my comment above - https://github.com/ramda/ramda/issues/1968#issuecomment-380700190
@Andarist Sorry, I phrased the question wrong
Do tree-shaking algorithms (such as rollup's or webpack@4's) work with import * as R from 'ramda', or do I have to import it like:
import {identity} from 'ramda'
import identity from 'ramda/es/identity';
Got this from: https://github.com/ramda/ramda/issues/2355#issuecomment-341513380
I would prefer to use import * as R from 'ramda', which is why I am asking
Unless u pass R as an argument to a function call (be cautious when doing such a thing), this:
import * as R from 'ramda'
R.identity(1)
should behave the same as import { identity } from 'ramda'.
You can read a little bit more in my comment above.
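A toy model of that distinction, with a local object standing in for the imported namespace:

```javascript
// Stand-in for the `R` namespace object of `import * as R from 'ramda'`.
const R = { identity: x => x, always: x => () => x };

// Static member access: a bundler can see that only `identity` is used,
// so the other members of the namespace can be dropped.
const a = R.identity(1);

// Passing the namespace object into a function: the bundler can no longer
// prove which members `use` touches, so everything must be retained.
function use(lib) { return lib.identity(2); }
const b = use(R);
```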
Most helpful comment
Hi guys,
Just to be sure, tree shaking is not possible with
commonjs modules, as it uses static analysis which only es6 modules provide (by the spec). Only dead code elimination can be applied to commonjs modules, but it's far from ideal. A good example of how to set this up with minimal effort is [d3].
On a side note, if we only want to import required parts, currently we have to do:
If es6 modules are supported, this will correctly leverage tree shaking (which it does not currently):