So this is odd -- I've got a project where I'm generating text PDFs, nothing too fancy. I've hit some sort of threshold, though -- I went from making a ~20-page document to making a ~40-page document, and the PDF-generating time spiked from ~3 seconds to ~2 _minutes_.
I've got a repository up (with obfuscated text files, whee NDAs) here. You can click to generate the "slow" or "fast" documents, with basic instrumenting telling you how long each one takes.
Have I hit some sort of known performance landmine here? Does pdfMake just take a lot longer once you get above a particular input-object size? Either way, is there a way to optimize around this?
All that said, it looks like it may be choking on the number of items in the "content" array. I tried charting performance time against the number of lines in the (prettified) doc object, and I got a pretty good linear correlation. Is pdfMake just not a viable solution once you get above a certain number of text objects?
Hmm -- on a similar pdfMake run, I tried running it on the entire document, and then tried splitting the document into three chunks and running pdfMake on each:
```
Total:   make-report: 132281.000ms
Part #1: make-report: 1178.000ms
Part #2: make-report: 4923.000ms
Part #3: make-report: 3994.000ms
```
... so that doesn't quite look linear...
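A quick back-of-the-envelope check on those timings (a hypothetical script, just restating the numbers above):

```js
// Timings from the run above, in milliseconds.
const total = 132281;
const parts = [1178, 4923, 3994];

const sumOfParts = parts.reduce((a, b) => a + b, 0);
// If generation cost were linear in document size, the three parts
// should sum to roughly the whole-document time; instead the whole
// document costs about 13x more than the parts combined.
console.log(sumOfParts, (total / sumOfParts).toFixed(1));
```

A ~13x gap is well beyond the ~3x that an even quadratic cost would predict for a three-way split, which fits the "doesn't quite look linear" observation.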
For what it's worth, it looks like the slowdown happens after the doc.end(); in _createDoc -- everything prior to that is pretty snappy. On the "slow" pdf, the 'data' events get emitted very, very slowly. Not sure what pdfMake is doing at that point in the process....
(I guess something's gotten snarled up in the node.js stream.Readable?)
Huh -- I did manage to knock it down quite a bit by making this adjustment around line 25,900 in pdfmake.js:
```js
function flow(stream) {
  var state = stream._readableState;
  debug('flow', state.flowing);
  if (state.flowing) {
    do {
      var chunk = stream.read(Number.MAX_SAFE_INTEGER); // grab MAX_SAFE_INTEGER bytes
    } while (null !== chunk && state.flowing);
  }
}
```
I knocked it down further by killing the call to addPageBreaksIfNecessary, which doesn't seem to do anything in this particular case....
If addPageBreaksIfNecessary is an issue, it should work now, as long as you don't specify a pageBreakBefore function.
From the description you give, it sounds to me like this could be a GC issue. Could you double-check that?
Thanks for following up on this! I assume "GC" = "Google Chrome"? I tried this in Firefox and got more sane results -- the 'fast' PDF showed up in 52 seconds, whereas the 'slow' one took 1:47. (Also re-confirmed that I was getting the same slowdown from Chrome with all my extensions turned off.)
No, I mean GC = garbage collection.
If your Chrome takes up too much memory, it'll try harder to release it, using more CPU to find disposable JavaScript objects.
Ah, gotcha. Not 100% sure how to check on whether it's garbage collection. I did check in the Chrome task manager, and saw that the slow case is using more memory (120MB versus 65MB) and on my machine pegs the CPU at 33% for a long while. If you can point me to the right sort of heap-profiling to do in this case, I can investigate further...
Just do a CPU profile - it will show the time spent in garbage collection: https://developer.chrome.com/devtools/docs/cpu-profiling
Awesome -- I'll give that a shot tomorrow
Ugh. Just my luck, the profiler appears to hang on the machine that's demonstrating the slowdown. It works fine on a machine where both files take about 45s:

But that's probably no good. Do we just chalk this one up to "not reproducible?"
Circling back to this one, I have no good way to profile this particular slowdown. Looks like "don't call addPageBreaksIfNecessary when you don't need to" has been taken care of. The only other obvious performance glitch this one uncovered was to replace the one-chunk-at-a-time "flow" with returning the whole file at once (as per https://github.com/bpampuch/pdfmake/issues/280#issuecomment-98944706), which cut the processing time by more than half. Maybe we could spin that off to a separate discussion and close this one?
That's a good suggestion. Wanna try a pull request?
Lord knows I'd like to, but the problem here is that I'm making that change to the readable-stream module, forcing its flow(stream) function to return the entire pdf object at once (which, again, gives us a massive performance gain on large files).
It's not obvious to me what change I could make to the _pdfMake_ source to effect that same change to the readable-stream behavior.
Do you want to chat at some point so we can figure out if there's a good path to a solution for this issue?
Sorry for the considerable delay -- yeah, I'm happy to chat about this if that would help. I'll send along an email.
Hi,
Is there any news about this issue? I have the same one and don't know what to do. Is there a way to automatically split the PDF generation into smaller chunks and merge them at the end?
Thanks.
ps: Awesome work, thanks a lot.
Hello,
Is there any way I could help with this issue?
Thanks.
Hi there,
I just ran into a big slowdown once my PDF hit a certain size. It won't load at all, in fact.
Looks like the profiler is telling me it's the drainQueue function that is taking a long time.

This was my issue https://github.com/bpampuch/pdfmake/issues/246
@hujhax Did you find a workaround? Do you have some code that you would be willing to share?
@kraf The most helpful change I made was the one listed in this comment: https://github.com/bpampuch/pdfmake/issues/280#issuecomment-98944706
Thanks! I didn't realize this was the change to Readable you mentioned; apologies.
+1
Applying both updates (flow + addPageBreaksIfNecessary) reduced generation time from 60s to 6s!
Thanks a lot. I still have some issues with larger files but this is really a great improvement.
I hope this will be merged soon.
Thanks for the improvement :) from 2min -> less than 10 sec!! THANK YOU VERY MUCH
+1 (flow + addPageBreaksIfNecessary) :) :+1:
Well this is v. heartwarming -- glad I could help, and yeah, hopefully this can be incorporated into the codebase down the road.
Wow, looks great. Thanks a lot!!! I'll merge it tonight and release a new package.
Did you release it yet? Otherwise, where can I apply this change?
Look at line 20122 in pdfmake.js and add Number.MAX_SAFE_INTEGER, or 9007199254740991 (compatible with IE), to the stream.read() call:
```js
function flow(stream) {
  var state = stream._readableState;
  debug('flow', state.flowing);
  if (state.flowing) {
    do {
      var chunk = stream.read(Number.MAX_SAFE_INTEGER); // add this here in stream.read(xxx);
    } while (null !== chunk && state.flowing);
  }
}
```
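If IE support matters, a guarded constant avoids hard-coding the literal at the read site (a small sketch; `MAX_READ` is just an illustrative name):

```js
// Number.MAX_SAFE_INTEGER is undefined in IE, so fall back to its
// literal value, 2^53 - 1.
var MAX_READ = Number.MAX_SAFE_INTEGER || 9007199254740991;

// The patched line then becomes: stream.read(MAX_READ);
console.log(MAX_READ);
```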
If you want to fix it in the minified version without rebuilding: at line 11, character 393, change the read to `var n=t.read(Number.MAX_SAFE_INTEGER);`
Thanks for the reply, but I need the equivalent location for Node.js.
For me the problem is resolved in Firefox, but still persists in Chrome
Same issue for me; the workaround works in all browsers I tested for my print output (large tables).
I was experiencing significant performance problems (1.5 minutes to generate a PDF) and slow-script warnings generating a 26-page PDF. I patched in the change that hujhax and ximex posted about regarding read(Number.MAX_SAFE_INTEGER); the warnings went away and generation now takes about 4 seconds.
The client-side solution works very well for me.
But what about the server side? I have the same problem generating a PDF with a large number of rows.
The fix applying Number.MAX_SAFE_INTEGER in the read goes straight into the built js file and gets overwritten during new builds. I'm not sure where this function comes from... but it appears in many of the node modules... How can we fix this in a way that survives a rebuild, other than a string replace?
Thanks to you all, my pdf is now instantly made!
For me, adding the Number.MAX_SAFE_INTEGER made things much faster; however, it could not create a PDF of more than 4-5 pages (the table has 14 columns). There was no error message. 4-5 seconds after the initial creation, the browser title shows "loading" for a half second, then doesn't show anything... blank page.
Have you tried .download instead of .open? There is another bug in which large PDFs can't be opened with the data url.
Patrick Detlefsen
OMG you are my savior!! Thank you so much Patrick!!!
😄 you are welcome!
The correction proposed by ximex (on October 15) works perfectly for me.
Hope it will be integrated soon!
Thank you
You guys could probably just use concat-stream to grab all of the chunks instead of dribbling them in one by one. You could also manually drain the stream by reading it with .read() instead of .pipe(), or you could raise the highWaterMark, which should also increase the flow rate.
Thanks, calvinmetcalf. I'll give those a shot, and if anything seems to do the trick, I'll report back here.
Has anyone tried to generate a big 30-40 page document on iOS (iPad)? It crashes in Safari/Chrome/Firefox, while the same code works on Windows or Android! See my issue: https://github.com/bpampuch/pdfmake/issues/661
Thanks - Bruno.
I hate to bring up an old topic, but I'm seeing exactly the same thing as actionahn a few posts up: large data file, delay on the new tab, loading icon, then nothing. What did you mean by trying .download instead of .open, patrickdet? Where is this change made? I've been tinkering around with the script but haven't fixed the issue yet.
I'm using pdfmake within ui-grid, if that makes any difference.
Thanks!
Hi mjortman, yes, just change the .open to .download.
@mjortman FWIW for pdfmake.js + ui-grid users: the fix for stream.read in pdfmake.js, plus the change in the ui-grid.js pdfExport function, worked for us to get ~40-page reports with ~10 columns generating in about 4 seconds.
```js
// ui-grid.js ~ line 17603 - change .open() to .download()
pdfMake.createPdf(docDefinition).download()
```

```js
// pdfmake.js
function flow(stream) {
  var state = stream._readableState;
  debug('flow', state.flowing);
  if (state.flowing) {
    do {
      var chunk = stream.read(Number.MAX_SAFE_INTEGER); // add this here in stream.read(xxx);
    } while (null !== chunk && state.flowing);
  }
}
```
Also worked for me.
Is there a fix for this when rendering PDFs on the server side (e.g. Node.js via Express)?
It's extremely slow...
+1
Is it at all possible to fix this without modding the compiled js file? The line in question is found in https://github.com/nodejs/node/blob/master/lib/_stream_readable.js. Is it possible to pass a flag or something that would change how it behaves?
Also, I have noticed that while the above hack greatly improves performance in Chrome and even IE11, performance in Firefox is absolutely terrible. A PDF which generates in 1-2 seconds in Chrome and 4-5 in IE11 takes well over 30 seconds on Firefox, throwing several "long-running script" warnings in the process.
Wow, I'm glad I found this issue.
I'm generating pretty detailed forms and potentially there is a need for hundreds of them to be generated.
Using the Number.MAX_SAFE_INTEGER fix above, a 98-page test went from 9 minutes to around 10 seconds. Absolutely stunning.
As we only have chrome where it will be used, I'm very satisfied with this. Definitely going to help out the development where I can.
Thank you.
Just wow. I can't agree more, the Number.MAX_SAFE_INTEGER trick is amazing. Now I can generate a 742 page report (18.4 MB) in just under 30 seconds. I'm so happy right now!
I join the party. About three months ago I implemented the Number.MAX_SAFE_INTEGER fix at our company. To this day PDF generation works flawlessly and is very fast. :+1:
PS: additionally we're working with new pdfmake repo (https://github.com/pdfmake/pdfmake), which has several fixes.
Thank you all :smile:
I made this commit https://github.com/bpampuch/pdfmake/commit/e755a3da26bac7547866c38ba485d213c417aa27 which should solve the performance problem (as described above).
And I made a test build https://github.com/bpampuch/pdfmake/commit/214ec161c11fadb8f02c08f2e3bea0576ac4c9fb for testing.
Can anyone test and verify that this fix is correct?
This is the same hack we have been using locally. Looks good, but perhaps using Number.MAX_SAFE_INTEGER is better?
Christopher Svanefalk
@csvan Yes, using Number.MAX_SAFE_INTEGER would be nicer, but it isn't supported in Internet Explorer.
(source: https://developer.mozilla.org/cs/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER)
Did this solve the problem?
In issue https://github.com/bpampuch/pdfmake/issues/798 there is again a problem with performance. Any ideas?
This issue concerned a problem with reading from the stream. It was fixed in version 0.1.22, and the fix was rewritten into core pdfmake in version 0.1.25.
Hi, it's been three years since this issue was solved.
But I still have this problem: downloading the PDF takes more than a minute.
I use pdfmake v0.1.32. I want to apply the solution given above, but the code mentioned seems different from my pdfmake.js.
I attached the file because I can't seem to find line 25,900. Can anyone help with this? Thank you.
pdfmake.txt