Three.js: Support of Compressed Textures in core

Created on 20 Apr 2018 · 33 comments · Source: mrdoob/three.js

Compressed Textures

Compressed textures as a first-class citizen, along with tools for on-line compression

Motivation

Compressed textures offer a great amount of extra detail while requiring only a little space. For applications with large textures and/or a large number of textures, this draws the line between an interactive frame rate and a slide-show. The point becomes even more relevant for lower-end GPUs, as they tend to have less RAM: being able to draw 2048 compressed textures instead of 512 uncompressed ones is extremely important, as both take up potentially the same amount of GPU RAM. Compressed textures also take less time to load and put less stress on the browser, since decompression is not done by default (unlike PNG).

Born as a result of this discussion: #13807

Suggestion

All 33 comments

I agree with you on the importance of compressed textures, but I think the suggested step of putting them in src/* might be the wrong place to start. For the subset of three.js developers who know the following...

  1. what compressed textures do
  2. how to create compressed textures
  3. how to put compressed textures in a FBX/GLTF/DAE/... model
  4. how to choose the appropriate version for a given device

... the extra step of grabbing THREE.DDSLoader from examples/js/loaders is pretty trivial. Just putting this into core is probably not going to affect overall usage of the feature.
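For context, that extra step looks roughly like this today (a sketch based on the DDS example in the repo; the texture path is a placeholder):

```js
// Loading a DXT-compressed DDS texture with the loader from examples/js/loaders.
var loader = new THREE.DDSLoader();
var map = loader.load( 'textures/compressed/disturb_dxt1_nomip.dds' );
map.minFilter = map.magFilter = THREE.LinearFilter; // this file ships no mipmaps

var material = new THREE.MeshBasicMaterial( { map: map } );
```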

I assume that's also what you're getting at with "tools for on-line compression", and I think something in that vein is a better first step... I assume you mean an online service, not compressing images at runtime? Were there tools you had in mind, or would particularly like to see? This might also be a great area for better documentation, tutorials, etc.

(Or, if I've totally missed the mark and there are real pain points with using compressed textures _because_ they're not in the core library, that would also be great feedback...)

@donmccurdy
A lot of points!
Let's say I have a texture cat.png and I want to use it, except I want to enable texture compression to have a smaller GPU footprint. Suppose I'm okay with uploading the raw bitmap to the GPU in the first instance and waiting for a little while for the conversion to DDS to happen in, say, a WebWorker. That would be one use case.
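To make that concrete, a rough sketch of the idea - note that dds-encoder-worker.js and its message contract are hypothetical, no such encoder ships with three.js:

```js
// Hypothetical pipeline: read back the pixels, hand them to a worker that
// encodes to DDS off the main thread, and resolve with the encoded buffer.
var worker = new Worker( 'dds-encoder-worker.js' ); // assumed script

function compressInBackground( image ) {

  return new Promise( function ( resolve ) {

    var canvas = document.createElement( 'canvas' );
    canvas.width = image.width;
    canvas.height = image.height;

    var ctx = canvas.getContext( '2d' );
    ctx.drawImage( image, 0, 0 );
    var pixels = ctx.getImageData( 0, 0, image.width, image.height );

    worker.onmessage = function ( event ) { resolve( event.data ); }; // DDS ArrayBuffer
    worker.postMessage( pixels, [ pixels.data.buffer ] );

  } );

}
```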

Suppose I have a model, and the only textures I have for it are already compressed. I don't know much about all these fancy formats and what makes a model - I just want to load it in and see it, à la loader.load('my_fancy_model.glb').then(model => scene.add(model)). Note that the format is not what matters here; the point is that compressed textures just work out of the box. That's use case number two.
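For what it's worth, the current examples API is already close to that ideal, just callback-based rather than promise-based - a sketch using GLTFLoader:

```js
// Compressed textures embedded in the asset only "just work" if the
// matching texture loader is wired up behind the scenes.
var loader = new THREE.GLTFLoader();
loader.load( 'my_fancy_model.glb', function ( gltf ) {

  scene.add( gltf.scene );

} );
```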

Lastly, and that's the point you have alluded to - say I have uncompressed textures and I want to compress them, and I don't know about tools like crunch; I only know that texture compression exists and it's a good thing to have for my needs. Offering a simple-to-use tool would be pretty awesome, e.g. your online conversion tool. It could use the same code as what's written for use case 1 and run entirely in the browser; the main point is convenience.

Suppose I'm okay with uploading the raw bitmap to the GPU in the first instance and waiting for a little while for the conversion to DDS to happen in, say, a WebWorker. That would be one use case.

I don't think you would want every visitor generating DDS textures at load time in the browser; better to do this once, offline, in advance. I'm not aware of good in-browser tools to do the conversion.

For the other two cases, I think the last ...

I only know that texture compression exists and it's a good thing to have for my needs. Offering a simple-to-use tool would be pretty awesome.

... is the best place to start. If people can create models with compressed textures easily and loading them becomes the pain point, that will be a good problem to have. Make this as easy as we can with tools and tutorials, but only move it into the core library when it's more common practice.

PlayCanvas solved this problem in a neat way via one-click texture compression and automated texture format selection for a specific device:

https://blog.playcanvas.com/webgl-texture-compression-made-easy/

Unfortunately, the direct comparison between three.js and PlayCanvas is not valid since three.js does not provide any form of backend services.

@Mugen87
I don't think that a backend service is required to provide this functionality. A lot of modern engines do real-time texture compression, especially for things like procedurally generated textures. One major example is Virtual Textures (or Mega Textures). Sure, JavaScript is no C++, but it is doable, and with the added benefit of lower bandwidth, since you can download a PNG and transcode it to whatever your device supports.

Compressed textures offer benefits beyond saving memory - well, indirectly. One major benefit is faster sampling, as it involves less memory bandwidth for the same number of samples.

@Usnul it would be amazing to have this functionality in three.js (at least depending on how fast the compression can be done at runtime), but it would make more sense as an example. There would need to be a good chunk of logic around deciding which format the current device supports, and then there would need to be compression utilities for each of these formats.

From the PlayCanvas article it looks like DXT, PVR and ETC1 would cover all devices, but that still means three texture compressors. That's not going to be small. Or, I suspect, trivial to write in JS.

That said, an easy-to-use plugin that did this at run time would be an amazing addition to the library. But you will get a lot more support for this idea if you push it as an example first. Once the usefulness has been demonstrated, then you can push for adding it to the core.

From the PlayCanvas article it looks like DXT, PVR and ETC1 would cover all devices, but that still means three texture compressors.

It will be even more complex since the most promising standard, ASTC, should also be supported. I guess PlayCanvas will add this texture format in the future, too.

In general, I support @donmccurdy's earlier post. I think it's preferable to do the conversion just once with existing tools and not for each user on the client side via JavaScript.

Yeah, that probably makes more sense. In that case, two things are lacking:

  • an easy-to-use conversion tool that takes PNG or JPG and outputs the required DXT, PVR, ETC1 and ASTC files, and any other formats we decide to support
  • some way of detecting and loading the correct format based on the current device. This is definitely part of the application layer; however, it would be useful to add an example demonstrating how to do this (see the sketch below).
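As a starting point for that second bullet, here is a minimal sketch of the detection part. The WebGL extension names are real; the file-naming scheme and selection order are just illustrative assumptions:

```js
// Pick a texture URL based on what the current GPU supports.
// renderer is a THREE.WebGLRenderer; the file suffixes are placeholders.
function pickTextureUrl( renderer, basename ) {

  var ext = renderer.extensions;

  if ( ext.get( 'WEBGL_compressed_texture_astc' ) ) return basename + '.astc.ktx';
  if ( ext.get( 'WEBGL_compressed_texture_s3tc' ) ) return basename + '.dxt.ktx';
  if ( ext.get( 'WEBGL_compressed_texture_pvrtc' ) ) return basename + '.pvr.ktx';
  if ( ext.get( 'WEBGL_compressed_texture_etc1' ) ) return basename + '.etc1.ktx';

  return basename + '.png'; // uncompressed fallback

}
```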

an easy-to-use conversion tool that takes PNG or JPG and outputs the required DXT, PVR, ETC1 and ASTC files, and any other formats we decide to support

I'd like to highlight that it's not a good idea to develop a conversion tool from scratch. For example, the encoding process for ASTC is very sophisticated, so you definitely want to use the existing tool from ARM: ASTC encoder.

BTW: Since the encoding process for high-quality ASTC textures can take very long, it's not a good idea to do this on the client side.

So the idea might be a wrapper Node script that calls existing CLI tools. However, since these tools are normally written in C/C++, you need to deliver different binaries for multi-platform support.
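A minimal sketch of such a wrapper, assuming astcenc's classic CLI syntax and hypothetical bundled binary paths:

```js
// Node.js wrapper sketch: shell out to a platform-specific encoder binary.
// The binary locations and flags here are illustrative, not a finished tool.
const { execFile } = require( 'child_process' );

const binary = process.platform === 'win32' ? 'bin/win/astcenc.exe' : 'bin/unix/astcenc';

// astcenc -c <input> <output> <blocksize> <quality preset>
execFile( binary, [ '-c', 'cat.png', 'cat.astc', '8x8', '-medium' ], ( err ) => {

  if ( err ) throw err;
  console.log( 'wrote cat.astc' );

} );
```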

A few places to start:

  • crunch has a Windows CLI, or can be compiled with Emscripten.
  • Compressonator provides a GUI, CLI, and SDK for compressed texture creation and (OBJ or glTF) mesh optimization.

I'd like to highlight that it's not a good idea to develop a conversion tool from scratch

Agreed, I was in no way suggesting that we do so! 😅
Either we create a wrapper for third party binaries, or leave it up to users to do this themselves.

Seems like Compressonator can do everything except for PVR. It does ASTC, ATC, ATInN, BCn, ETCn, DXTn and swizzled DXTn formats. So we can generally recommend that users use it.

There's also PVRTexTool, which compresses to PVRTC, ETC and DXT and has GUIs for Windows, macOS and Linux.

Have you guys played any recent games, from the past 10 years or so, with large open terrain spaces? If you did, you more than likely witnessed virtual textures in action. Virtual textures require building a texture at run time; I won't get into why right now - it's a much larger topic altogether. They also require you to do texture compression on the fly at run time, if you wish to benefit from that tech. So guess what: most engines that implement virtual textures also implement texture compression.

I see a lot of arguments here along the lines of "runtime compression is not possible" or "runtime compression is not useful". I think in WebGL it's especially useful, since we don't have common ground on texture compression formats and bandwidth is a consideration.

I am not claiming that it's easy; I'm making an argument for its usefulness and its feasibility.

You use a compressed texture and save, say, 4x the space. This means that your sampling runs up to 4x faster, since the main bottleneck is memory bandwidth. It also means that if you use a compressed texture for shadows, you can effectively double the resolution and get exactly the same sampling performance as with an uncompressed texture of half that size. Does that sound useful? How about a 4x higher number of samples for PCF? Or any other multi-sample technique.
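Back-of-the-envelope numbers for that claim (DXT5/BC3 gives the 4:1 case against RGBA8; DXT1 is smaller still):

```js
// Memory for a single 2048x2048 texture level.
var texels = 2048 * 2048;
var rgba8 = texels * 4;                       // 32 bits/texel  = 16 MB
var dxt5  = ( 2048 / 4 ) * ( 2048 / 4 ) * 16; // 16-byte 4x4 blocks = 4 MB (4:1)
var dxt1  = ( 2048 / 4 ) * ( 2048 / 4 ) * 8;  //  8-byte 4x4 blocks = 2 MB (8:1)
```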

I do not get the argument.

I'm not opposed to doing both, either. 😁 The crunch library is only 150kb when compiled to JS, so shipping that with a web experience seems totally reasonable. Someone want to give this a try?

I made a library some time ago that does what @Mugen87 describes above; it is a wrapper Node script that abstracts away the specific CLI flags for you and makes it easy to produce ASTC, ETC, PVRTC and S3TC in KTX containers. It is released under the MIT licence here: https://github.com/TimvanScherpenzeel/texture-compressor.

Of course this is not the final solution to the problem, but it does the trick for me for now. A major advantage is that you only need a single compressed-texture loader for decoding: KTXLoader.
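For decoding, the loader side then looks much like the other compressed-texture loaders - a sketch, with a placeholder file name, which of course must be a format the device actually supports:

```js
// KTXLoader from examples/js/loaders reads the KTX container produced above.
// Loading e.g. an ASTC-encoded file only works on a GPU with ASTC support.
var loader = new THREE.KTXLoader();
var map = loader.load( 'textures/example-astc-8x8.ktx' );

var material = new THREE.MeshBasicMaterial( { map: map } );
```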

A much better universal alternative is being developed by Binomial called Basis: http://www.binomial.info/ and is going to be contributed to the GLTF project: https://www.khronos.org/assets/uploads/developers/library/2017-gdc-webgl-webvr-gltf-meetup/6%20-%20Universal%20Texture%20Compression%20Format_Mar17.pdf.

Interesting. I've never heard of Basis...

Basis is neither open source nor free... 😞

Curious how that's going to work with glTF?

EDIT: looks like they're going to open source it. Looks like a promising option for the future.

@TimvanScherpenzeel I've checked out your tool and I'm wondering if it's possible to pass in ASTC-specific parameters? For example there are -normal_psnr and -normal_percep which optimize the texture compression for normal maps. The result can be quite impressive:

[Image: side-by-side comparison of three ASTC-compressed normal maps]

The leftmost normal map is compressed with default settings, the second uses -normal_psnr and the normal map on the right uses -normal_percep.

Source: https://developer.arm.com/-/media/Files/pdf/graphics-and-multimedia/Stacy_ASTC_white%20paper.pdf

The whitepaper from ARM actually recommends using these presets when compressing normal maps.

Hey @Mugen87,

As documented, using the -f flag you can pass tool-specific commands to each tool.

This should work when running it from the root of the repo: node ./bin/texture-compressor -i ./docs/example/example.png -o ./docs/example/example-astc-8x8.ktx -t astc -c astc -b 8x8 -f "normal_percep". You can pass multiple flags to it as well: -f "flag1" "flag2".

If you have any other questions regarding my tool, I think it is better to ask them in my repo (feel free to open an issue); that keeps the conversation here more to the point.

@TimvanScherpenzeel Thanks for the feedback.

I just wanted to address this topic right here since it's important that automated conversion tools actually make use of format-specific optimization features if possible. As mentioned above, the quality difference can be very noticeable.

Hi @Mugen87,

After a second thought and testing it out again, it appears that this in fact won't work.
This is not because of the flag command but rather because I'm using ASTC through PVRTexTool, and PVRTexTool only accepts a subset of the ASTC implementation. I thought it was working because no errors were being thrown, but that is because PVRTexTool ignores unknown commands. Perhaps there is a way to pass flags through PVRTexTool to the ASTC encoder; I haven't really looked into it yet.

I've opted for using ASTC through PVRTexTool in order to add mipmapping support and Y-flipping of textures, and to avoid introducing more dependencies and the issues I was having with file reading/writing callbacks.

Now that Basis textures are free and well supported the title of this issue should be changed to:

Move BasisTextureLoader into the Core

Is that something we want to do? If not, this issue can probably be closed.

@Usnul does the BasisTextureLoader cover what you mean by "compressed texture support"?

I wouldn't move BasisTextureLoader into core right now, for a few reasons:

  1. It depends on the WASM transcoder, which can't be bundled into core anyway.
  2. Not fully clear whether the new Basis UASTC format (higher quality, higher filesize) should be considered a separate format with separate loader, or bundled into BasisTextureLoader.
  3. KTX2Loader will also be available before too long, as another way of loading Basis textures. This is the version GLTFLoader will require.

I think having these loaders in the examples is absolutely sufficient. In general, the idea of "moving stuff into the core" has lost importance anyway since all example files are now available as modules. They can easily be imported via ES6 imports.

@looeee

does the BasisTextureLoader cover what you mean by "compressed texture support"?

It would be a helpful step. It covers the "first class compressed texture support", but online compression is still not covered.

If you load a normal PNG or JPG, being able to convert it to a compressed format inside the library would be beneficial for a lot of users, I believe. For me personally, I want to be able to compress textures that were generated in code - things like texture atlases, noise maps, etc.

If we had a PNG -> basis encoder, then the whole chain would be covered.

@donmccurdy

It depends on the WASM transcoder, which can't be bundled into core anyway

I agree, sadly. It seems that supporting compressed textures on a wide range of devices without bloating the library is some ways off right now.

online compression is still not covered.

I'm not sure that three.js is the right place for a basic compression tool. We should focus on asset consumption and let other people deal with asset creation. Is there a browser-based texture compression tool we could recommend rather than creating one here?

Aside from that, is compressing to basis fast enough to do client-side? Or is it better to do this as a build step?

My impression is that GPU texture compression is slow to do client-side, at runtime, for nontrivial texture sizes. And if your texture sizes are trivial, you are unlikely to benefit from GPU compression. It also tends to require some visual inspection or trial and error; Basis Universal in particular needs tuning.

I think this is _mostly_ a job for a good "offline" tool. Or something like https://squoosh.app/, but for Basis. That said, a WASM Basis encoder would almost certainly still have its uses, even if it is slow.

In the interest of taking action on this issue, how about this:

  • we recommend using Basis compressed textures
  • the basis loader will not be added to the core since it relies on the WASM transcoder
  • we recommend doing texture compression using an offline tool such as Basis Universal
  • we don't intend to add a Basis compression tool to this repo, since it's likely to be too slow for client-side use at runtime.

We can revisit these recommendations later if needed. For now, I think we can close this issue.

For me personally, I want to be able to compress textures that were generated in code - things like texture atlases, noise maps, etc.

This is addressed by the final point above. Unless someone can demonstrate a way to compress texture data on the fly that is reliable, fast, and doesn't require visual inspection to verify that it looks OK, I don't think there's anything to be done here.

@looeee That sounds all good to me!

I like the idea of having a robust set of texture compression tools and good support for compressed textures. It is something that would raise the performance bar for a lot of use cases.

As far as online (or runtime) compression on the client side goes, this is something that has existed for a long time in the AAA industry. I remember reading a paper on runtime texture compression from around 2005; the point of the paper was that doing a really poor job at texture compression was still beneficial, as you could do it fast and still gain in overall quality through the increased affordable resolution.

There are a lot of techniques that require dynamic textures; the most prominent one is the "virtual texture", where the actual texture that is submitted to the GPU is basically a texture atlas, made up of tiles of a virtual texture.

I don't think that performance is necessarily a bottleneck. I for one would be fine with having an "optimization" pipeline, where a worker thread could be used to perform texture compression, and while you wait you can use the uncompressed texture. In many ways it's similar to what we already do: assets are loaded progressively, you load a model and you start to see the geometry before the textures finish loading. Waiting for compression is not so dissimilar in my view.
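As a sketch of that pipeline, reusing the hypothetical compressInBackground() helper from the worker sketch earlier in the thread:

```js
// Progressive pipeline: render with the uncompressed texture immediately,
// then swap in the compressed version once the worker finishes.
material.map = uncompressedTexture;

compressInBackground( uncompressedTexture.image ).then( function ( compressedTexture ) {

  material.map = compressedTexture;
  material.needsUpdate = true;
  uncompressedTexture.dispose(); // free the uncompressed copy

} );
```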

I like the idea of having a robust set of texture compression tools and good support for compressed textures. It is something that would raise the performance bar for a lot of use cases.

Probably that would involve taking an existing tool and converting it to WASM (maybe basisu)? However, any such tool would work with any 3D web framework, and creating it would be a large project in itself. That's why I don't think this repo is the right place for such a tool - it doesn't need to be tied to three.js.

Aside from that, is compressing to basis fast enough to do client-side?

No, the last time I used it it took 30 seconds to compress a 4k texture on my machine.

I don't think we should support "compressed textures" in core because every GPU supports a different type.

However, we may want to support some sort of TranscodableTexture. BasisTextureLoader transcodes to a compressed texture depending on what the GPU supports. This means that, if we wanted to serialize that texture, we should serialize the .basis file and not the transcoded data.
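For reference, the transcode-on-load flow with the current examples loader looks like this (paths are placeholders):

```js
// BasisTextureLoader transcodes .basis data to whatever compressed format
// the GPU supports; detectSupport() must inspect the renderer first.
var basisLoader = new THREE.BasisTextureLoader();
basisLoader.setTranscoderPath( 'examples/js/libs/basis/' );
basisLoader.detectSupport( renderer );

basisLoader.load( 'textures/map.basis', function ( texture ) {

  material.map = texture;
  material.needsUpdate = true;

} );
```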
