When I use rasterizeHTML to render the page as a canvas, then use jsPDF's addHTML, the image quality is variable.
If I don't use the option "pagesplit", it renders one PDF page in good quality. If I set pagesplit to true, then it renders the entire page but the image quality on each page is dramatically reduced.
Also, I found that the default code was stretching or compressing the images. I made some changes to the code. This is the code from line 1779.
var crop = function() {
    var cy = 0;
    while (1) {
        var canvas = document.createElement('canvas');
        canvas.width = Math.min(W * K, obj.width);
        canvas.height = Math.min(H * K, obj.height - cy);
        var ctx = canvas.getContext('2d'),
            scaledHeight = obj.height * canvas.width / obj.width;
        ctx.drawImage(obj, 0, cy, obj.width, obj.height, 0, 0, canvas.width, scaledHeight);
        var args = [canvas, x, cy ? 0 : y, canvas.width / K, canvas.height / K, format, null];
        this.addImage.apply(this, args);
        cy += (obj.width * (canvas.height / canvas.width));
        if (cy >= obj.height) break;
        this.addPage();
    }
    callback(w, cy, null, args);
}.bind(this);
Basically, I added 'scaledHeight' so the drawImage height is normal, then I changed the 'cy' amount.
I also checked: bypassing the part where it takes the canvas and does .toDataURL('image/png') does not improve the image quality.
I found out why the image quality is so bad and made myself a workaround.
Basically, the image quality suffers every time you use the canvas method "drawImage", especially if you change the size of the image (it doesn't matter whether you increase or reduce it).
In the .addHTML() code, the logic was as follows:
If you don't enable the _pagesplit_ option, it will only render one PDF page. It will take the image/canvas you provide and pass it straight through to the pdf.addImage() method. No other canvases at scaled sizes are created.
If you do have that option enabled, it will create a canvas at whatever scale matches the PDF size, then draw the scaled image to that canvas, and pass that data through to pdf.addImage(). Creating the additional canvas at a scaled size with a scaled image dramatically reduces the image quality.
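To make that scaling concrete, the relationship can be written as a small helper (a sketch; the name mirrors the htmlScaleFactor variable used in the workaround code later in the thread):

```javascript
// Ratio between the rendered canvas width and the PDF page width (in canvas
// pixels per PDF-page pixel). A value above 1 means the canvas has to be
// downscaled to fit the page, and that resampling is where quality is lost.
function htmlScaleFactor(canvasWidth, pdfPageWidth, pdfScaleFactor) {
    return canvasWidth / (pdfPageWidth * pdfScaleFactor);
}
```

For example, a 1200px-wide canvas targeting a 600px-wide page at scale factor 1 gives a factor of 2, i.e. every page slice is downscaled 2:1 before it reaches addImage.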
**The workaround**
This solution isn't for everyone, but it does not involve modifying any jspdf code which is good.
Here's the pseudo-code:
While you still have some image to add:
- create a new canvas that's the same width as the original canvas, but whose height is Min(scaledPdfPageHeight, imageHeight)
- draw the image to that canvas, applying a vertical image shift
- add the canvas to the PDF using the .addImage() API
Here's the actual code (slightly modified but should still work):
var canvasShiftImage = function(oldCanvas, shiftAmt, realPdfPageHeight) {
    shiftAmt = parseInt(shiftAmt, 10) || 0;
    if (shiftAmt <= 0) { return oldCanvas; }
    var newCanvas = document.createElement('canvas');
    newCanvas.height = Math.min(oldCanvas.height - shiftAmt, realPdfPageHeight);
    newCanvas.width = oldCanvas.width;
    var ctx = newCanvas.getContext('2d');
    var img = new Image();
    img.src = oldCanvas.toDataURL();
    ctx.drawImage(img, 0, shiftAmt, img.width, img.height, 0, 0, img.width, img.height);
    return newCanvas;
};
var html2canvasSuccess = function(canvas) {
    var pdf = new jsPDF('l', 'px'),
        pdfInternals = pdf.internal,
        pdfPageSize = pdfInternals.pageSize,
        pdfScaleFactor = pdfInternals.scaleFactor,
        pdfPageWidth = pdfPageSize.width,
        pdfPageHeight = pdfPageSize.height,
        totalPdfHeight = 0,
        htmlPageHeight = canvas.height,
        htmlScaleFactor = canvas.width / (pdfPageWidth * pdfScaleFactor);
    while (totalPdfHeight < htmlPageHeight) {
        var newCanvas = canvasShiftImage(canvas, totalPdfHeight, pdfPageHeight * pdfScaleFactor);
        pdf.addImage(newCanvas, 'png', 0, 0, pdfPageWidth, 0, null, 'NONE'); // note: the format doesn't seem to do anything... I had it at 'pdf' and it didn't care
        totalPdfHeight += (pdfPageHeight * pdfScaleFactor * htmlScaleFactor);
        if (totalPdfHeight < htmlPageHeight) { pdf.addPage(); }
    }
    pdf.save('test.pdf');
};
html2canvas($('someSelector')[0], {
    onrendered: function(canvas) {
        html2canvasSuccess(canvas);
    }
});
Could you show some exemplary usage of your functions? Some JSFiddle or something like that? I am trying to use them, but they are not working for me.
Thanks
Here you go @bhaal275
Here's a test page I whipped up quickly: http://run.plnkr.co/ZGk13nBcOi3XFW6n/
You can edit it here: http://plnkr.co/edit/nNSvHL8MZcT6nNKg9CG9
Just a couple side notes:
Export 1: uses addHTML(). In my actual implementation it looks a lot more blurry than in this demonstration for whatever reason. Maybe because I modified some code to maintain the aspect ratio of the page. Anyway, that seems to be the least of the problems. Not all of the background colors or styles are staying.
Export 2: This is my workaround. It seems to work a lot better than export #1. For whatever reason, it is also showing a few style issues. Those aren't showing up in my actual implementation.
Hi,
Thanks for your answer. It looks it solves my problem as well, so I tried incorporating this approach. Unfortunately, for some reason addImage method work reeealllllyyy slow on IE 8 with Chrome Frame 23 (corporate requirements). Do you have any idea what might be the issue?
@bhaal275 hmmm that seems pretty weird. I might have some time later to tinker with it but I don't think it'll be a quick fix. Hopefully I'll be able to come up with something worthy of a pull request and get it implemented in jsPDF so we don't need these workarounds.
@bpmckee It would be great if you would be able to work on that. I was investigating the issue further, and from what I saw the problem is here: https://github.com/MrRio/jsPDF/blob/master/jspdf.plugin.addimage.js#L539 on line 539. I might be wrong here, but I think that it is just too memory intensive for the browser to append PNG like that. Maybe the images are too big.
Another idea is that maybe the parameters you pass to "addImage" make it so slow. Have a look here: https://github.com/MrRio/jsPDF/blob/master/jspdf.plugin.addhtml.js#L89
especially at lines:
var args = [obj, x,y,w,h, format,alias,'SLOW'];
this.addImage.apply(this, args);
I am wondering what format or 'SLOW' is for. Maybe those params enable better memory handling.
I'm thinking it might have something to do with that. Those parameters are for compression. I set it to "NONE" because theoretically it should give you the best image quality, but I haven't seen any of that code so I couldn't say that for sure.
I tested it with new lines:
var alias = Math.random().toString(35);
pdf.addImage(newCanvas, 0, 0, pdfPageWidth, 0, 'png', alias, 'SLOW');
And it started working faster, now without any major browser freezes or crashes. It seems to be the solution.
Now my last problem is that this configuration works great in newest chrome, but it has errors in Chrome 23 (corporate support). In this version, with multi-page pdf's it prints only one page.
The problems seems to be somewhere at the line:
var newCanvas = canvasShiftImage(canvas, totalPdfHeight);
Where on the first page it generates proper PDF page, but on the second run it generates an empty PDF. Any ideas what might be the issue? Something looks like a bug here?
The problem occurs here:
ctx.drawImage(img, 0, shiftAmt, img.width, img.height, 0, 0, img.width, img.height);
For some reason when this line is invoked on the second page of PDF (second page of PDF is drawn on canvas), the output canvas has correct dimensions, but is totally blank. For some reason no content is placed there in chrome 23, no idea why...
I finally managed to fix it. I got rid of the canvasToImage function as it was unnecessary, and instead I added the Pixastic library (http://www.pixastic.com/lib/download/) and used its crop function to cut out the part of the image that should go on the next page. Finally I added the cropped part to the PDF.
var canvasShiftImage = function(oldCanvas, shiftAmt) {
    shiftAmt = parseInt(shiftAmt, 10) || 0;
    if (!shiftAmt) { return oldCanvas; }
    var newCanvas = document.createElement('canvas');
    newCanvas.height = oldCanvas.height - shiftAmt;
    newCanvas.width = oldCanvas.width;
    var ctx = newCanvas.getContext('2d');
    Pixastic.process(oldCanvas, "crop", {rect: {left: 0, top: shiftAmt, width: oldCanvas.width, height: (oldCanvas.height - shiftAmt)}}, function (croppedCanvas) {
        ctx.drawImage(croppedCanvas, 0, 0, croppedCanvas.width, croppedCanvas.height, 0, 0, croppedCanvas.width, croppedCanvas.height);
    });
    return newCanvas;
};
That's awesome! Sorry I didn't get to it yesterday, had a lot to do at work. That sounds like a good solution, because that's exactly what canvasShiftImage does. I'm surprised the pseudo-canvas drawImage crop method didn't work in Chrome 23 though. I'm curious, so I'll look into that and see if there's a way to get it working without Pixastic. But glad you found a solution!
If you would find something, post it here, thanks.
Also, I created an angularJS module for printing webpages to PDF ngPrint available at: https://github.com/bhaal275/ng-print
I used your code as the print service cooperating with jsPDF, I hope you don't mind?
Haha awesome! I don't mind at all, that's what open source is all about!
Great, if you have any ideas how to improve it, just leave a note.
Interesting reading/audit, thanks guys :-)
for some reason addImage method work reeealllllyyy slow
Well, playing with the _plnkr.co_ test page I noticed that while the former addHTML Export 1 produces a 500KB PDF, Export 2 generates an 18MB one... I guess that's why it's slow?
... the second page of PDF (second page of PDF is drawn on canvas), the output canvas has correct dimensions, but is totally blank. For some reason no content is placed there in chrome 23
It isn't something isolated to older Chrome versions; that's what I got opening the PDFs using up-to-date Adobe Reader:

( Left: Export 1, Right: Export 2 )
Hence, I think it's obvious these two issues indicate a problem with the handling of images/canvases.
Regardless, Export 2 generates much sharper text, no blurriness. So, if you are able to fix these issues I'd suggest you consider submitting your approach to the html2canvas repository; I guess they'll love it... well, as long as that doesn't involve a MBs-magnitude canvas ;-)
Btw, if you are concerned about that black background, you can override it by passing a "background" option to the html2canvas library; otherwise it'll use "transparent".
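As a sketch, that option could be passed like this (the white value is an assumption, and exact option support depends on the html2canvas version):

```javascript
// html2canvas options with an explicit background color; without it,
// html2canvas uses 'transparent', which some PDF viewers render as black.
var renderOptions = {
    background: '#ffffff',
    onrendered: function (canvas) {
        // hand the canvas off to the PDF code as before
    }
};
// html2canvas(element, renderOptions);
```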
I am wondering what format or 'SLOW' is for.
format tells addImage what format the image you're passing is in, for raw data or when canvas.toDataURL() usage is needed. 'SLOW' indicates a compression method, which is ignored unless the format is PNG.
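To illustrate how those arguments line up, here is a tiny helper that assembles an addImage argument list with a fresh alias and an explicit compression level. The helper itself is an assumption for illustration, not part of jsPDF, and note that the position of the format argument has varied between jsPDF versions:

```javascript
// Builds an argument list for pdf.addImage.apply(pdf, ...). A unique alias
// keeps jsPDF from deduplicating distinct images; the last argument picks the
// PNG compression level ('NONE', 'FAST', 'MEDIUM' or 'SLOW').
function buildAddImageArgs(image, x, y, w, h, compression) {
    var alias = Math.random().toString(36).slice(2);
    return [image, 'PNG', x, y, w, h, alias, compression || 'SLOW'];
}
// pdf.addImage.apply(pdf, buildAddImageArgs(canvas, 0, 0, pdfPageWidth, 0));
```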
Cheers!
Thanks for the reply @diegocr :)
I was playing around with a fork of that plnkr again to try and fix all of the other issues (super slow processing time, large file size, only exporting the first page in firefox). I also updated the logic to make a lot more sense. Sadly, firefox is being a huge pain. It seems to be hit and miss for when the drawImage canvas method wants to work.
There are super weird things like this:
// this isn't working in firefox??
var test = 0;
canvas.drawImage(img, 0, test, img.height, img.width);
// but this is????
canvas.drawImage(img, 0, 0, img.height, img.width);
Things like that which don't make any sense at all. Maybe it just doesn't like plnkr.co because it's in an iframe, but that's just a wild guess.
I'll play with it some more but I don't know how realistic it will be to get it to work with the weird issues I've seen so far.
@bhaal275
I forked my plnkr with some new code which fixes a bunch of issues. However, I don't have Chrome 23, so could you test to see if it works for you?
http://plnkr.co/edit/iY9KmzL4Fmx0cZdPXO7c
If everything checks out I'll try to cleanup a little bit of stuff and submit a pull request.
Hi Guys,
Sorry for a late response, I was out on holiday. I will try the new plunkr on Monday when I will be back at work.
Though I wouldn't worry so much about my issues, I do have a workaround implemented there, and it is just a corporate requirement to support an old Chrome Frame 23, which I hope will be abandoned by the end of the year.
Anyhow, I will let you know my results on Monday.
Cheers.
Hi,
Sorry for the late response. I just tried to check your example on Chrome Frame 23, but noticed that plunkr does not support Chrome Frame, so I can't test it without writing a separate example.
As I said above, in such a case I would just lose support for Chrome Frame altogether, as the product is officially no longer supported by Google.
No worries! I have been swamped at work lately and haven't had enough time to do a good implementation. Hopefully I can find some time this week.
Hmmm for whatever reason, when I tried my plnkr again, the text appears to be blurry. :(
I tried it as well just now, and got the same effect. Maybe the page is somehow extrapolated to a bigger image?
What browser do you use?
I am currently using the latest version of Chrome (37) to do most of my testing.
The solution I'm going to end up using (until I have a lot of time to dig through the code and see why it's not working well anymore) is this:
(NOTE: converting the canvas to a Blob, then from a blob to an image cuts my file size down from 10+ MB to around 600kb with the same quality. Unfortunately it requires the canvas-toBlob.js polyfill).
Here's the new solution I have. (Note: there will no longer be PDF pages, it renders all as one image at whatever size image comes out).
function exportToPdf(html){
    html2canvas(html, {
        onrendered: function(canvas){
            // Note: instead of canvas.toBlob, you could do var imageUrl = canvas.toDataURL('image/png');
            // then you wouldn't need to include the polyfill. However, your file size will be massive.
            canvas.toBlob(function(blob){
                var urlCreator = window.URL || window.webkitURL;
                var imageUrl = urlCreator.createObjectURL(blob);
                var img = new Image();
                img.src = imageUrl;
                img.onload = function(){
                    var pdf = new jsPDF('p', 'px', [img.height, img.width]);
                    pdf.addImage(img, 0, 0, img.width, img.height);
                    pdf.save('myPdf.pdf');
                };
            });
        }
    });
}
But doesn't that defeat the purpose of exporting the HTML to PDF, if we get one big picture?
I agree. It's not ideal. I definitely want to take another crack at it. It seems that canvas.toDataURL() is making the image blurry and the file size huge. canvas.toBlob() makes a great image that is teeny tiny. The only problem is, canvas.toBlob() isn't supported in many web browsers. You have to use the canvas_to_blob polyfill. I doubt jsPDF will want yet another 3rd party library buried in their code.
It's only a polyfill, so it won't even be noticeable in some browsers, and at some point in the future it could be removed completely. If users can save multiple MBs on the image size, I guess they'd be fine with downloading an additional several KBs for a polyfill.
I'll try to put something together in the next few days. Now that I know better what the problems are, I should be able to come up with something nice.
Here is the newest plnkr before I actually merge it into jsPDF.
There are 2 new buttons. The first one does not split the PDF into multiple pages; it instead sets the PDF to the size of the rendered image. Not exactly what you want. The second actually does split the PDF into multiple pages and respects the PDF page size.
Let me know if you have any issues with it
@bpmckee did you have any chance to merge it into jsPDF? We've run into exactly the same issues and it seems your code fixes it!
Shoot I never did merge it back in. Thanks for reminding me! I'll get the latest version and try to merge it all in tonight.
Alright pull request open! I also managed to fix the blurry issue without having to pull in that canvas to blob plugin. No blobs needed! :)
Thx! Very much appreciated!
Hi. I just stumbled onto jsPDF today. I currently have what amounts to an interactive photo sheet generator on a web page. It is a bunch of divs that each have a photo and an editable caption. Photos can be different sizes. Currently, after the user is done editing and repositioning the photos and captions, I use TCPDF to print the PDF server side. I would LOVE to do it client side. So here are my questions with jsPDF:
1. Do I need to use canvas to put the images into the PDF?
2. If you had, let's say, 50 images, would you simply loop through the divs and use addImage one by one until they were all in there? Is there a better way? (Especially noting that I will always be more than one page in the PDF.)
Output quality of the images is a huge concern. Thanks in advance for any feedback.
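As a sketch of the loop in question 2 (photosToPdf and the array of pre-rendered canvases are assumptions; the jsPDF instance is passed in, which also makes the paging logic easy to test):

```javascript
// Adds one image per PDF page. 'doc' is a jsPDF instance (or anything
// exposing addPage/addImage); 'canvases' is an array of rendered photos.
function photosToPdf(doc, canvases) {
    canvases.forEach(function (c, i) {
        if (i > 0) doc.addPage();  // new page before every image but the first
        doc.addImage(c, 'PNG', 0, 0, c.width, c.height);
    });
    return doc;
}
// photosToPdf(new jsPDF('p', 'px'), renderedCanvases).save('photos.pdf');
```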
Has this been merged into jsPDF? I'm using addHTML on a div of tables, and it's still producing blurry PDFs.
Nope, last I heard they completely rewrote some logic and that this change isn't necessary anymore.
Am I doing something wrong perhaps? Here's what I'm calling (also using JSZip):
var pdf = new jsPDF('l', 'in', 'letter');
pdf.addHTML(exportbuffer, 0, 0, {pagesplit: true}, function(){
    var pdfDataURL = pdf.output('datauristring');
    var base64 = pdfDataURL.replace(/^data:(image|application)\/(png|jpg|pdf);base64,/, "");
    exportFolder.file(exportdate + "_table.pdf", base64, {base64: true});
});
Hmm, I'm unfamiliar with some of the functions you are calling, so someone else may have to help you.
It looks like you're taking the output the PDF produces, grabbing the base64 data and exporting that using a different export tool. Why not save using jsPDF's API? Although it doesn't make too much sense why that'd affect the blurriness either way.
The original issue is that the math to convert everything to a canvas was wrong. It would either stretch or squish the content very slightly. I'd recommend resizing the window to really wide and really narrow and seeing if it looks stretched or squished. If it does, then the issue is still there.
Also, I have some plnkr links above. Try to grab the version of jsPDF that I use there and see if that helps.
OK, will give your plnkrs a try. If it's the ones I tried before, the one with "new hotness" wasn't working for me.
I'm outputting some other pdfs in a folder that I need to deliver zipped up, but like you said, I don't see why that would affect the blurriness of the pdf.
_update_
full screen browser makes it unreadable, didn't even try that before.
https://www.dropbox.com/s/oaujyuq4vphavv1/Screenshot%202015-03-26%2009.40.09.png?dl=0
super narrow is much more acceptable
https://www.dropbox.com/s/sq7vkaqjx5n6ry1/Screenshot%202015-03-26%2009.41.45.png?dl=0
Trying it with your jspdf.mychanges.js (the one with JSPDF2) yielded the same results.
Per the comment by @diegocr about issuing a PR to html2canvas, there are a number of issues there reporting blurry output:
https://github.com/niklasvh/html2canvas/issues/340
https://github.com/niklasvh/html2canvas/issues/158
https://github.com/niklasvh/html2canvas/issues/312
https://github.com/niklasvh/html2canvas/issues/206
it would be great if the work here could be contributed to that repo! :+1:
I could really use a bug fix. Thanks!
Hi
I found an issue when generating a PDF using jsPDF: the background color comes out black.
Is there any solution to remove the background color?
Thanks
Nitin
If I don't use the option "pagesplit", it renders one PDF page in good quality. If I set pagesplit to true, then it renders the entire page but the image quality on each page is dramatically reduced.
Is this issue solved? If so, can anybody please provide the code? I have been having this issue for a while and I couldn't figure out how to solve it. Thanks in advance.
The issue with "pagesplit" still exists, I think; I just tried it in my web application with multiple scalable tables and the result is still squished and blurry.
@nitin0708
use
pdf.addHTML(document.documentElement, function() {
    pdf.save('test.pdf');
});
Basically you have to use "document.documentElement" instead of "document.body" to avoid the black background with the most recent html2canvas plugin.
@subashboss29
A method to avoid the poor quality problem is to not use 'pagesplit'; you can use the code below.
function printbypage(pdf, k){
    if (k >= $('.my_show').length) {
        pdf.output('dataurlnewwindow');
        return; // done: all sections have been added
    }
    pdf.addHTML($('.my_show')[k], function(){
        if (k < $('.my_show').length - 1) {
            pdf.addPage();
        }
        printbypage(pdf, k + 1);
    });
}
function print_myshow() {
    var pdf = new jsPDF('p', 'mm', 'a4');
    printbypage(pdf, 0);
}
nvm, my issue has been fixed. Thanks
I have close to 500 images which are generated dynamically inside a foreach loop. I tried adding these images to the PDF using addImage and it generated a PDF of 15MB. Using the 'SLOW' param I tried to reduce the size, and it worked, but unfortunately instead of 500 different images it shows only one image 500 times. Could anyone help me with this issue?
Thanks in advance!
Great contribution to the problem, but the one drawback is that the result depends on the size of the screen it is displayed on. Is there some way to fix that?
Great! But how can I use an id? Is that possible? I don't have a main selector.
"If I don't use the option "pagesplit", it renders one PDF page in good quality. If I set pagesplit to true, then it renders the entire page but the image quality on each page is dramatically reduced. "
Could someone tell me how to fix this? Thanks so much.
@bpmckee
Hi, I tried your export2 code, that's cool.
But I still have a problem after using the export2 code: the page is now split (in the PDF), but only on page 1 can I see contents; pages 2 and 3 are blank. Could you please give me some help?
I tested the export2 code with Firefox, IE10+ and Chrome. On IE10+ and Chrome all is fine, the PDF is properly split, but on Firefox the output PDF only has contents on page 1; pages 2 and 3 are blank. Could someone shed some light on how to fix it? I've been caught by this issue for quite a long time and don't know what to do.
@drizzt00s I haven't looked at this issue in a long time, but I remember firefox being a huge pain.
I mentioned in one of the comments that there were errors that did not make any sense at all with firefox.
For example:
// this isn't working in firefox??
var test = 0;
canvas.drawImage(img, 0, test, img.height, img.width);
// but this is????
canvas.drawImage(img, 0, 0, img.height, img.width);
Things were very strange and all seemed to happen around canvas.drawImage and canvas.toDataURL.
Unfortunately I don't have much time to look into the issue, but if it's anything like it was before, it's something with one of those two functions that Firefox sometimes doesn't like.
I think I know what's wrong with Firefox. In your method:
function canvasToImage(canvas){
    var img = new Image();
    var dataURL = canvas.toDataURL('image/png');
    img.src = dataURL;
    return img;
}
The problem with this method is: when you set img.src = dataURL, the load is asynchronous, so on the next line when the method returns img, in Firefox the image has not fully loaded yet.
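A sketch of an asynchronous variant that only hands the image over once it has finished decoding (the names are assumptions):

```javascript
// Like canvasToImage, but waits for the decode to finish before calling back,
// which avoids the race described above.
function canvasToImageAsync(canvas, done) {
    var img = new Image();
    img.onload = function () { done(img); }; // fires once the data URL is decoded
    img.src = canvas.toDataURL('image/png');
}
// canvasToImageAsync(oldCanvas, function (img) {
//     ctx.drawImage(img, 0, shiftAmt, img.width, img.height - shiftAmt,
//                   0, 0, img.width, img.height - shiftAmt);
// });
```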
@bpmckee
I think this is why the bug happens, but I am not 100% sure
@bpmckee
hi, ignore my previous two comments.
in the method canvasShiftImage
change this line:
ctx.drawImage(imgPreload,0, shiftAmt, imgPreload.width, imgPreload.height, 0, 0, imgPreload.width, imgPreload.height);
to:
ctx.drawImage(imgPreload,0,shiftAmt,imgPreload.width,imgPreload.height-shiftAmt,0,0,imgPreload.width,imgPreload.height-shiftAmt);
It will fix the bug on firefox. All browsers are fine now.
Does anyone know where the pagesplit quality issue was fixed? I use 1.0.272 and there it is bad; I can't use the newest version because of other issues. OK, nvm.
EDIT:
OK, bpmckee's code with Pixastic works well, but I would appreciate it if this bug were fixed soon ;)
OK, I found a solution:
Use @bpmckee's change for the crop, plus change the scale factor, it is too low:
var I = this.internal, K = I.scaleFactor, W = I.pageSize.width, H = I.pageSize.height;
Set K to 7 or 8, for example.
If you have to scale your content, do it in the DOM before "sending" it to the PDF plugin; I do it with a clone of the element.
I have found a workaround as well.
// change the font size of everything in the area you want to print to a much
// larger size (mine by default was 12px, so I'm changing it to 22px)
$("#iframe").contents().find("*").css("font-size", "22px");
// clone it
var printClone = $("#iframe").clone();
// get the width and height of all of the content
var contentHeight = $("#iframe")[0].scrollHeight;
var contentWidth = $("#iframe")[0].scrollWidth;
// create a new div to stuff the content into
$("#printOutput").append("<div class='pdfDiv' id='pdfDiv'></div>");
// set the new div height and width to the ones calculated above
$(".pdfDiv").height(contentHeight);
$(".pdfDiv").width(contentWidth);
// add the clone to the div
printClone.appendTo(".pdfDiv");
// add the html
pdf.addHTML($(".pdfDiv")[0], function() {
    // save it
    pdf.save('pdf.pdf');
    // reset the font size
    $("#iframe").contents().find("*").css("font-size", "");
    // remove the div
    $("#pdfDiv").remove();
});
So has anyone come up with one definitive answer as to how best to use addHTML, output a PDF, and not have it blurry?
I think mine is good enough.
Nope, 0% improvement in my single-page PDF using your above method. Gonna switch to a new library and try out some server-side solutions, I think.
I had a problem with retina displays and blurry fonts; I solved it with this workaround for A4:
var w = 794;
var h = 1145;
var canvas = document.createElement('canvas');
canvas.width = w * 2;
canvas.height = h * 2;
canvas.style.width = w + 'px';
canvas.style.height = h + 'px';
var context = canvas.getContext('2d');
context.scale(2, 2);
// scale end
pdf.addHTML(document.getElementById(that.createId("iframe")).contentWindow.document.body.firstChild, -2, -6, { canvas: canvas }, function() {
    // ... save the PDF in this callback
});
I solved it with the following steps:
var ctx= canvas.getContext('2d');
ctx.mozImageSmoothingEnabled = false;
ctx.webkitImageSmoothingEnabled = false;
ctx.msImageSmoothingEnabled = false;
ctx.imageSmoothingEnabled = false;
This will resolve the blurry output in most cases. This hack was refused as a fix since it hasn't worked for some people.
(used versions jspdf 1.0.272 and html2canvas 0.4.1)
Edit: You cannot get around the canvas size limits in IE (8192px) and Firefox (16k px); you can end up with a canvas that sits exactly at the width or height limit, resulting in another stretched canvas.
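Those limits can at least be checked up front. A sketch (the 8192/16384 values are taken from the comment above; real limits vary by browser and version):

```javascript
// Pre-flight check before creating a large capture canvas. Defaults to the
// conservative IE limit; pass 16384 if you only need to support Firefox.
function fitsCanvasLimit(width, height, maxDim) {
    maxDim = maxDim || 8192;
    return width <= maxDim && height <= maxDim;
}
```

For example, fitsCanvasLimit(794 * 2, 1145 * 2) is true for the doubled A4 canvas above, while a very long captured page can easily fail the check.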
Hi Billy and Dominik,
Thanks for sharing this method to convert the HTML page to PDF. The above approach worked fine for me :) except for one problem I am currently going through.
I have implemented this on a button click (similar to you) but want to exclude the button from the generated PDF (as is possible via the elementHandler in jsPDF's fromHTML method). Could you please guide me on how I can exclude the button from the PDF using the above approach?
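This approach has no element handler, but one way to get the same effect (a sketch; withHidden is a made-up helper) is to hide the button for the duration of the capture and restore it afterwards:

```javascript
// Temporarily hides an element while 'render' runs, then restores it.
// 'render' stands in for the html2canvas/addHTML call and must invoke its
// callback when the capture is done.
function withHidden(el, render, done) {
    var previous = el.style.display;
    el.style.display = 'none';          // excluded from the captured output
    render(function (result) {
        el.style.display = previous;    // bring the button back
        done(result);
    });
}
// withHidden(button, function (cb) { html2canvas(page, { onrendered: cb }); },
//            function (canvas) { /* add to PDF */ });
```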
Hi Bill and Dominik,
I am fetching images from a CMS (Contentful) into my HTML and then trying to generate a PDF of that HTML.
But in the generated PDF, the images are missing (seems to be a cross-origin issue).
Please suggest how I can get rid of this issue.
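A hedged suggestion, not taken from the code in this thread: html2canvas has options aimed at cross-origin images; whether they work depends on the html2canvas version, and useCORS additionally requires the image server to send Access-Control-Allow-Origin headers. The proxy URL below is hypothetical.

```javascript
// Options that commonly address missing cross-origin images: useCORS asks the
// browser to fetch images with CORS instead of tainting the canvas; a proxy
// endpoint is the alternative when the image host doesn't allow CORS.
var captureOptions = {
    useCORS: true,
    // proxy: '/html2canvas-proxy', // hypothetical proxy endpoint, if configured
    onrendered: function (canvas) {
        // canvasToImageSuccess(canvas);
    }
};
// html2canvas(document.body, captureOptions);
```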
Below is my code:
function downloadPDF() {
    var canvasToImage = function(canvas){
        var img = new Image();
        var dataURL = canvas.toDataURL('image/png');
        img.crossOrigin = "Anonymous";
        img.src = dataURL;
        return img;
    };
    var canvasShiftImage = function(oldCanvas, shiftAmt){
        shiftAmt = parseInt(shiftAmt) || 0;
        if(!shiftAmt){ return oldCanvas; }
        var newCanvas = document.createElement('canvas');
        newCanvas.height = oldCanvas.height - shiftAmt;
        newCanvas.width = oldCanvas.width;
        var ctx = newCanvas.getContext('2d');
        ctx.mozImageSmoothingEnabled = false;
        ctx.webkitImageSmoothingEnabled = false;
        ctx.msImageSmoothingEnabled = false;
        ctx.imageSmoothingEnabled = false;
        var img = canvasToImage(oldCanvas);
        ctx.drawImage(img, 0, shiftAmt, img.width, img.height, 0, 0, img.width, img.height);
        return newCanvas;
    };
    var canvasToImageSuccess = function(canvas){
        var pdf = new jsPDF('l', 'px'),
            pdfInternals = pdf.internal,
            pdfPageSize = pdfInternals.pageSize,
            pdfScaleFactor = pdfInternals.scaleFactor,
            pdfPageWidth = pdfPageSize.width,
            pdfPageHeight = pdfPageSize.height,
            totalPdfHeight = 0,
            htmlPageHeight = canvas.height,
            htmlScaleFactor = canvas.width / (pdfPageWidth * pdfScaleFactor),
            safetyNet = 0;
        while(totalPdfHeight < htmlPageHeight && safetyNet < 15){
            var newCanvas = canvasShiftImage(canvas, totalPdfHeight);
            pdf.addImage(newCanvas, 'png', 0, 0, pdfPageWidth, 0, null, 'NONE');
            // var alias = Math.random().toString(35);
            // pdf.addImage(newCanvas, 0, 0, pdfPageWidth, 0, 'png', alias, 'NONE');
            totalPdfHeight += (pdfPageHeight * pdfScaleFactor * htmlScaleFactor);
            if(totalPdfHeight < htmlPageHeight){
                pdf.addPage();
            }
            safetyNet++;
        }
        var pageName = document.location.pathname.match(/[^\/]+$/)[0];
        pdf.save(pageName + '.pdf');
    };
    html2canvas($('body')[0], {
        onrendered: function(canvas){
            canvasToImageSuccess(canvas);
        }
    });
}
How can one compress the output file?
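One lever (a sketch; the 0.7 quality value is an assumption): encode the captured canvas as JPEG instead of PNG before handing it to addImage, trading some fidelity for a much smaller embedded image.

```javascript
// Re-encodes a canvas as a JPEG data URL with the given quality (0..1).
// Photographic content compresses far better as JPEG than as PNG.
function canvasToJpeg(canvas, quality) {
    return canvas.toDataURL('image/jpeg', quality || 0.7);
}
// pdf.addImage(canvasToJpeg(canvas), 'JPEG', 0, 0, pageWidth, pageHeight);
```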
`$(document).ready(function () {
$("#btnPrint").on('click', exportTwo);
function exportTwo() {
var canvasToImage = function (canvas) {
var img = new Image();
var dataURL = canvas.toDataURL('image/png');
img.src = dataURL;
return img;
};
var canvasShiftImage = function (oldCanvas, shiftAmt) {
shiftAmt = parseInt(shiftAmt) || 0;
if (!shiftAmt) { return oldCanvas; }
var newCanvas = document.createElement('canvas');
newCanvas.height = oldCanvas.height - shiftAmt;
newCanvas.width = oldCanvas.width;
var ctx = newCanvas.getContext('2d');
var img = canvasToImage(oldCanvas);
ctx.drawImage(img, 0, shiftAmt, img.width, img.height, 0, 0, img.width, img.height);
return newCanvas;
};
var canvasToImageSuccess = function (canvas) {
var pdf = new jsPDF('l', 'px'),
pdfInternals = pdf.internal,
pdfPageSize = pdfInternals.pageSize,
pdfScaleFactor = pdfInternals.scaleFactor,
pdfPageWidth = pdfPageSize.width,
pdfPageHeight = pdfPageSize.height,
totalPdfHeight = 0,
htmlPageHeight = canvas.height,
htmlScaleFactor = canvas.width / (pdfPageWidth * pdfScaleFactor),
safetyNet = 0;
while (totalPdfHeight < htmlPageHeight && safetyNet < 15) {
var newCanvas = canvasShiftImage(canvas, totalPdfHeight);
pdf.addImage(newCanvas, 'png', 0, 0, pdfPageWidth, 0, null, 'NONE');
totalPdfHeight += (pdfPageHeight * pdfScaleFactor * htmlScaleFactor);
if (totalPdfHeight < htmlPageHeight) {
pdf.addPage();
}
safetyNet++;
}
pdf.save('test.pdf');
};
html2canvas($('.pgePreview')[0], {
onrendered: function (canvas) {
canvasToImageSuccess(canvas);
}
});
}
});`
Exception occurred: Uncaught (in promise) Invalid unit: px -
I referred the this link http://plnkr.co/edit/nNSvHL8MZcT6nNKg9CG9?p=preview
So did anyone find the best solution for rendering html to pdf with good quality? :) I've been stuck with this for a month already.
We are closing this issue because we will no longer support fromHTML and addHTML.
Explanation:
We are working on a new html2pdf plugin, which will be based on html2canvas and our context2d plugin. This should lead to more reliable results for your projects, and it will give us the time to focus on the core functionality of PDF generation, because we will not spend our energy writing/supporting/extending two HTML plugins. If you still want to use addHTML or fromHTML, you can still use jsPDF 1.4.1.
Best Regards