ImageSharp: PNG encoder produces oversized files, with different sizes on Linux and Windows

Created on 15 Oct 2019  ·  10 Comments  ·  Source: SixLabors/ImageSharp

Prerequisites

  • [X] I have written a descriptive issue title
  • [X] I have verified that I am running the latest version of ImageSharp
  • [X] I have verified that the problem exists in both DEBUG and RELEASE mode
  • [X] I have searched open and closed issues to ensure it has not already been reported

Description

When saving an image to PNG, I end up with a file that is more than 5x larger than what I would obtain using System.Drawing (or Paint.NET).
Attached are the images I generated using ImageSharp, and the same image opened and re-saved with Paint.NET.
I've tried to tweak the encoder's CompressionLevel and FilterMethod (not so sure what this one is about)
properties, but this changes nearly nothing in the output file size:

  • Compression levels 1 to 5 give me the same size of 310 KB in both my Windows and WSL tests
  • Compression levels 6 to 9 give me a size of 289 KB on Windows but 102 KB in WSL
  • With FilterMethod set to Adaptive, I obtain slightly better results, though still far from my expected 55 KB

I can see there's a related issue (#702); however, regardless of the different output sizes on Linux vs Windows, I still think my issue stands as a bug, as I'd very much like to obtain a ~50 KB image instead of 300 KB.

Steps to Reproduce

Here is the code I used to exhibit this behavior:

using System.IO;
using System.Runtime.InteropServices;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Png;
using SixLabors.ImageSharp.PixelFormats;

namespace ImageSharpCompressionTest
{
    internal sealed class Program
    {
        private static void Main(string[] args)
        {
            var encoder = new PngEncoder
            {
                BitDepth = PngBitDepth.Bit8,
                ColorType = PngColorType.RgbWithAlpha
            };

            // encoder.FilterMethod = PngFilterMethod.Adaptive;

            var w = 3500;
            var h = 3500;
            var rgbaBytes = GetImageBytes(w, h);

            var rootdir = RuntimeInformation.IsOSPlatform(OSPlatform.Windows) ? @"c:\temp\win" : "/mnt/c/temp/lnx/";
            if (!Directory.Exists(rootdir)) Directory.CreateDirectory(rootdir);

            using (var image = Image.LoadPixelData<Rgba32>(rgbaBytes, w, h))
            {
                for (var level = 1; level <= 9; level++)
                {
                    encoder.CompressionLevel = level;

                    var filename = Path.Combine(rootdir, $"compress-{level}.png");
                    using (var stream = File.Create(filename))
                        image.SaveAsPng(stream, encoder);
                }
            }
        }

        private static byte[] GetImageBytes(int width, int height)
        {
            var bytes = new byte[width * height * 4];
            for (var y = 0; y < height; y++)
            {
                for (var x = 0; x < width * 4; x += 4)
                {
                    bytes[4 * y * width + x] = (byte)((x + y) % 256); // R
                    bytes[4 * y * width + x + 1] = 0; // G
                    bytes[4 * y * width + x + 2] = 0; // B
                    bytes[4 * y * width + x + 3] = 255; // A
                }
            }

            return bytes;
        }
    }
}

Attachments:

  • Running in WSL: lnx.zip
  • Running on Windows: win.zip
  • Opened/Saved by Paint.NET: compress-pdn

System Configuration

  • ImageSharp version: 1.0.0-beta0007
  • Other ImageSharp packages and versions: none explicitly added
  • Environments:

    • Windows 10 Pro 1709 (French)

    • Ubuntu 18.04 WSL

  • .NET Framework versions:

    • Windows .NET Core 3.0.100 (but still running a netcoreapp2.2)

    • Linux .NET Core 2.2.204

  • Additional information:

All 10 comments

OK, at least I now understand why compression levels 1 to 5 and 6 to 9 give the same results, and I now suspect the issue is indeed inside .NET's implementation of DeflateStream:

https://github.com/SixLabors/ImageSharp/blob/master/src/ImageSharp/Formats/Png/Zlib/ZlibDeflateStream.cs#L94

By the way, for levels 6 to 9, my repro app gives the same results as Linux when targeting net472 on Windows. The so-called optimization of DeflateStream in .NET Core on Windows has the side effect of doubling the size of the output...
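The 1-5 / 6-9 bucketing is what you'd expect when a zlib-style 0-9 level has to be squeezed into the three values of the BCL's System.IO.Compression.CompressionLevel enum. The exact thresholds below are hypothetical (the linked ZlibDeflateStream source has the actual mapping); this sketch only illustrates the mechanism, i.e. why at most two distinct compressed sizes can come out of levels 1-9:

```csharp
using System.IO.Compression;

static class ZlibLevelMapping
{
    // Hypothetical mapping of zlib levels (0-9) onto the three values the
    // BCL exposes. Any mapping of this shape yields exactly two distinct
    // output sizes across levels 1-9, matching the observed 1-5 / 6-9 buckets.
    public static CompressionLevel Map(int zlibLevel)
    {
        if (zlibLevel <= 0) return CompressionLevel.NoCompression;
        if (zlibLevel <= 5) return CompressionLevel.Fastest; // levels 1-5 collapse
        return CompressionLevel.Optimal;                     // levels 6-9 collapse
    }
}
```

Because DeflateStream only distinguishes Fastest and Optimal (plus NoCompression), every level inside a bucket produces identical output, and the remaining Windows/Linux size difference comes from the different deflate implementations sitting behind the same enum value.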

@odalet Thanks for raising this and adding so much detail. 👍

I spoke to a member of the MS team last night and he said we should raise an issue against CoreFX. If you could do that and reference this issue hopefully we can figure out why the difference is so dramatic for the zlib instances and get them to provide a fix. Maybe we can convince them to expose the proper compression levels also!

Edit... I see you've already added to the existing issue. Great!!

Thanks for the prompt reaction!

Maybe we can convince them to expose the proper compression levels also!

That'd be so great; this CompressionLevel enum does not make much sense to me either!

And by the way, thanks for the great work: the little part of the API I've played with is a pleasure to code against.

@odalet

I've been experimenting with a managed DeflateStream port that is yielding promising results.

53.9KB using Adaptive filtering at Compression level 9. Results will be identical cross platform.

compress-9

It's still very much a work in progress, though, as I have a lot of optimization work to do on the code, but I think I can get reasonable performance out of it.

Thanks for working on this subject! There is no urgency on my side for this to be implemented quickly; and even if there was, I would of course never dare put any pressure on you; I'm simply pleased to notice improvements are on their way!

@odalet My pleasure!

I think my PR is about as good as I can get it now. If anyone else fancies a look though please do.

I just had a quick look at the PR and it seems that moving to this implementation of zlib has the additional benefit of really using the compression level. Great!

Sorry in advance for chatting under closed issues; I'm reusing this discussion to avoid TL;DR noise under the related dotnet BCL issue.

@odalet can you post some details about your use-case (where do your real-life images come from)? I want to use it as an argument, when posting an issue against Intel-Zlib.

You'll find attached one of the images generated once with .NET Core 2.2 and another time with .NET 4.7.2 (both times on Windows). The generation app depends on ImageSharp 1.0.0-beta0007.
Both images are identical at the pixel level, but the .NET Core version is more than 3x the size of the .NET Framework-generated one.

net472
netcore22

Here is some insight on our use case:
Our images are generated from sensor data. We have some electronics monitoring a process that is inherently 2D. Hence the sensors give us - over a period of time - several data points and a plane coordinate. Using what the sensors detect, we build an image that is computed by severely aggregating the input data so as to get a 3500x3500 image in the end. Each pixel in the resulting image is the result of averaging/re-scaling a huge number of input data points.

I hope this helps you understand a bit what we are doing. I'm afraid I can't share here anything too specific about our process.
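For what it's worth, the aggregation step described above can be sketched as a simple box average over the raw samples (the names and the averaging scheme here are hypothetical illustrations, not our actual pipeline):

```csharp
using System;

static class SensorAggregation
{
    // Hypothetical down-aggregation: average all raw 2D samples falling
    // into each cell of a targetSize x targetSize grid. Cells that receive
    // no samples stay at 0 (background).
    public static double[,] Aggregate(double[,] samples, int targetSize)
    {
        int srcH = samples.GetLength(0), srcW = samples.GetLength(1);
        var sums = new double[targetSize, targetSize];
        var counts = new int[targetSize, targetSize];

        for (int y = 0; y < srcH; y++)
        for (int x = 0; x < srcW; x++)
        {
            // Map each source coordinate to its destination cell.
            int ty = y * targetSize / srcH;
            int tx = x * targetSize / srcW;
            sums[ty, tx] += samples[y, x];
            counts[ty, tx]++;
        }

        for (int y = 0; y < targetSize; y++)
        for (int x = 0; x < targetSize; x++)
            if (counts[y, x] > 0) sums[y, x] /= counts[y, x];

        return sums;
    }
}
```

In the real pipeline the grid would be 3500x3500 and the re-scaled averages would then be mapped to false colors before encoding.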

NB: none of these images are really real-world cases ;) because we are still experimenting with our generation process, but they model - at least in part - the images we'll generate once in production. That is:

  • Most of the pixels are background (here, black)
  • The non-background pixels will be made of different colors, potentially spanning the entire available 24 bits - think false colors in medical/astronomical images - although in the attached example there is only one non-black value, due to it being generated from fake data.

The main takeaway is that we are not handling the usual holiday photograph, but rather 2D sensor data represented as an image. These images will most likely not be as sparse as the ones in the example, but they will still exhibit more background pixels than foreground ones. In terms of complexity, think particle collision images. E.g. this one .

Hope this helps

@odalet thank you, this helps a lot!
