The issues occur when you run multiple builds concurrently or even sequentially in the same process.
For the first issue, it seems that module resolution is only run once for both builds, which creates a problem if, for example, you're building a browser bundle and a server bundle.
The second issue is that Parcel keeps the process open even after completing its work when building with two bundlers.
See this simple reproduction repository: https://github.com/gdborton/parcel-concurrent-build
Building a bundle for both server and browser should include the correct modules for each (the `browser` vs. `main` field in package.json).
Parcel seems to use the same module resolution across bundlers, despite one targeting node and the other targeting browsers.
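For context, the `browser` field in a package's package.json lets bundlers substitute a browser-specific entry for the Node one. A simplified version of what the `debug` package declares (illustrative, not its full package.json):

```json
{
  "name": "debug",
  "main": "./src/index.js",
  "browser": "./src/browser.js"
}
```

A browser-targeted bundle should resolve `debug` to `./src/browser.js`, while a Node-targeted bundle should resolve it to `./src/index.js`.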
For the first problem, scope module resolution to the bundler; hopefully this is fairly straightforward.
For the second, maybe add a singleton wrapper around the worker farm API that can track open/closed workers.
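A rough sketch of the singleton idea: a reference-counted wrapper that starts the farm on the first `acquire` and only shuts it down once every bundler that started it has released it. `startFarm`/`stopFarm` are hypothetical stand-ins for Parcel's real worker farm start/stop calls, not its actual API.

```javascript
// Hypothetical sketch: refcounted singleton around a worker farm.
// startFarm/stopFarm stand in for the real WorkerFarm lifecycle calls.
class FarmTracker {
  constructor(startFarm, stopFarm) {
    this.startFarm = startFarm;
    this.stopFarm = stopFarm;
    this.refs = 0;   // how many bundlers currently hold the farm
    this.farm = null;
  }

  // First caller actually starts the farm; later callers share it.
  acquire() {
    if (this.refs++ === 0) {
      this.farm = this.startFarm();
    }
    return this.farm;
  }

  // Only the last release actually shuts the farm down, so the
  // process can exit once every bundler is finished.
  release() {
    if (--this.refs === 0) {
      this.stopFarm(this.farm);
      this.farm = null;
    }
  }
}
```

With this, two concurrent bundlers would share one farm, and the process would still shut down cleanly after the second `release()`.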
I think this is a related issue, though I'm not sure it's a 100% duplicate: https://github.com/parcel-bundler/parcel/issues/1771
| Software         | Version(s) |
| ---------------- | ---------- |
| Parcel           |            |
| Node             |            |
| npm/Yarn         |            |
| Operating System |            |
Changing your bundle `Promise.all` to this async IIFE should solve your issue, until we figure out how to improve the worker farm to handle this.
```js
// clientBundler and serverBundler are the two Bundler instances
// from the original reproduction script.
const assert = require('assert');

(async function bundleEverything() {
  console.log('Start bundling...');
  let clientBundle = await clientBundler.bundle();
  console.log('Client bundled...');
  let serverBundle = await serverBundler.bundle();
  console.log('Server bundled...');
  // debug/src/browser is the "browser" entry for the debug package
  const debugBrowserLocation = require.resolve('debug/src/browser');
  const containsBrowserFile = !!Array.from(clientBundle.assets).find(
    item => item.name === debugBrowserLocation
  );
  assert(containsBrowserFile);
  console.log('Bundling finished!');
})();
```
EDIT: This script cleanly starts up and shuts down an entire worker farm per bundle; that way the options don't get overwritten and the process shuts down cleanly.
This should not be an issue as long as you run production builds without watching. The moment you start watching, you have no option but to run them in parallel, so this bug definitely needs to get resolved.
So the issue is that `getShared` gets called, which, if a shared farm already exists, overwrites the options of the entire worker farm: all workers are blocked while their options are updated and re-enabled once they're up to date with the latest options (so browser => Node).
If you flipped the order inside the `Promise.all`, you would get the opposite result.
We could solve this by assigning an ID to each worker farm based on its options and entry point, or by making the worker farm part of the bundler object so we don't overwrite options. I'm not sure; both solutions seem a bit too complex to me.
I'm not entirely sure I'm following your solution, but I think ideally we'd still use the same set of workers with differing configs, to avoid creating too many threads.
This could be done by passing the config with each request (potentially slow due to IPC, largely determined by the size of the message), or by initializing workers with a config and then passing a reference to that config with each message (the reference could be a hash of the config object).
`Bundler.bundle` starts and stops the entire worker farm, so it doesn't overwrite the configs at all. (This will not work in watch mode.)
About the solution, I guess adding a config hash would solve the issue.
Note: Parcel 2 will be able to solve this by running a single instance of Parcel with two entry points to the same file with different configs.
@jamiebuilds do you have a roadmap/timeline for parcel v2?