Is there a way to disable anti-aliasing (I think that's what it's called) on fonts?
If I render fonts at smaller sizes, they become blurry. I want to make them sharp, and from investigation it appears the best way to do that is to disable AA?
It's not an easy problem to solve; there is no magical "disable AA" switch.
1) Make sure you read the font documentation and experiment with oversampling.
2) Dear ImGui uses stb_truetype.h to rasterize fonts (with optional oversampling).
This technique and implementation are not ideal for fonts rendered at _small sizes_, which may appear a little blurry. There is an implementation of the ImFontAtlas builder using FreeType that you can use:
https://github.com/ocornut/imgui_club
FreeType supports auto-hinting which tends to improve the readability of small fonts.
Note that this code currently creates textures that are larger than they need to be (could be fixed with some work).
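To make points 1) and 2) concrete, here is a minimal sketch of what experimenting with oversampling looks like with the stock stb_truetype path. The font path, size and oversampling values are placeholders, not recommendations:

```cpp
// Illustrative font loading with oversampling and pixel snapping. The values
// here are common starting points, not recommendations for every font.
ImGuiIO& io = ImGui::GetIO();
ImFontConfig config;
config.OversampleH = 3;    // horizontal oversampling helps sub-pixel positioning
config.OversampleV = 1;    // vertical oversampling is rarely worth the extra texture space
config.PixelSnapH = true;  // snap glyph advances to whole pixels to reduce horizontal blur
io.Fonts->AddFontFromFileTTF("myfont.ttf", 13.0f, &config);  // path and size are placeholders
```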
@ocornut stb_truetype.h (if updated; I think FreeType, or at least a fork of it, supports SDF generation as well) has support for single-channel SDF bitmap generation. It's not as great looking as 2-channel SDF generation, but with it someone can generate a single SDF font image that would look great at all sizes (it would look better with 2 channels, though stb is open to PRs!). There is no anti-aliasing with it, everything is perfectly crisp and clear, and you can do effects like outlining, shadows and more, effectively for free. Perhaps it would be useful to add support for SDF font rendering?
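For reference, recent versions of stb_truetype.h expose this single-channel SDF generation as stbtt_GetCodepointSDF(). A rough sketch, with font loading simplified and the SDF parameters picked arbitrarily for illustration:

```cpp
#include "stb_truetype.h"

// Generate a single-channel SDF bitmap for one codepoint.
// 'ttf_data' is assumed to point at the raw contents of a .ttf file already in memory.
unsigned char* MakeGlyphSDF(const unsigned char* ttf_data, int codepoint, int* w, int* h)
{
    stbtt_fontinfo font;
    if (!stbtt_InitFont(&font, ttf_data, stbtt_GetFontOffsetForIndex(ttf_data, 0)))
        return nullptr;

    float scale = stbtt_ScaleForPixelHeight(&font, 32.0f); // rasterize at 32px, scale at draw time
    int xoff, yoff;
    // 4 texels of padding, 128 = value on the glyph outline, 32.0f = distance falloff per texel.
    // These numbers are illustration values, not tuned recommendations.
    unsigned char* sdf = stbtt_GetCodepointSDF(&font, scale, codepoint,
                                               4, 128, 32.0f, w, h, &xoff, &yoff);
    return sdf; // caller releases it with stbtt_FreeSDF(sdf, NULL)
}
```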
Supporting SDF font rendering would be great, sure!
My guess is that there are other top priorities for the moment, but SDF font rendering may become an important one next year?
Some notes
Also linking to
https://github.com/ocornut/imgui/issues/858
(Though I think that experiment was rather misguided by using the pixelated ProggyFont; it should have focused on a font that we'd actually want to see scaled.)
I personally don't expect this is something I'll dig into next year; I think making the atlas dynamic would solve more issues than using SDF (such as the problems with localization). But if you want to experiment with it, please do! And if you can find a way to make it a possibility in imgui without affecting all users, why not.
SDF isn't magic; can you actually get quality rendering for a non-zoomed ~14px font? That is what we should care about the most. SDF is great for enlarged display in 3D space, but it is not how you get desktop-quality rendering as far as I know (maybe I'm wrong).
I use 2-channel SDFs in my projects and they look fantastic at all resolutions, from a tiny 8px up to the size of the screen, clear and readable at every step. Single-channel is indeed a bit wonky at times, but it still works the great majority of the time and is a good starting point, since the only change you need to make when going from 1 to 2 channels is the shader.
Heavier pixel shading, which will affect drawing all our filled contents/surfaces - that, or changing shader programs during imgui rendering, which is potentially worse. If you start adding options such as outlines into the mix, how do you deal with the variety of shaders, or do you push the cost into all rendering?
You should only need to change the shading program once for that, such as one shader to render the GUI pass then switch to another for the text pass, not much cost there (assuming clipping bounds are well set and so forth, which you can even bake into the shader quite easily).
Options like outlines and so forth are usually all part of the same shader (at least mine is) and they are controlled via uniforms passed in (I need to clean it up a bit), where basically if a uniform is 0 there is no outline, same with shadow, same with italic, and a couple of other things I have baked in. It is all the same shader running the same code path with no branches.
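For illustration, a single-channel SDF fragment shader in that spirit could look roughly like this, stored as a C string the way the stock OpenGL3 backend stores its shaders. The uniform names and thresholds are made up for the example, not taken from any existing project:

```cpp
// Minimal single-channel SDF fragment shader with a branchless outline option.
// Illustrative only: uniform names and thresholds are invented for this sketch.
static const char* sdf_fragment_shader_glsl = R"GLSL(
    #version 130
    uniform sampler2D Texture;       // single-channel SDF atlas
    uniform vec4  OutlineColor;
    uniform float OutlineWidth;      // 0.0 = no outline, same shader either way
    in vec2 Frag_UV;
    in vec4 Frag_Color;
    out vec4 Out_Color;
    void main()
    {
        float dist = texture(Texture, Frag_UV).r;             // 0.5 ~ glyph edge
        float aa   = fwidth(dist);                            // screen-space smoothing width
        float fill = smoothstep(0.5 - aa, 0.5 + aa, dist);
        float edge = smoothstep(0.5 - OutlineWidth - aa, 0.5 - OutlineWidth + aa, dist);
        vec4 color = mix(OutlineColor, Frag_Color, fill);     // outline band blends toward text color
        Out_Color  = vec4(color.rgb, color.a * edge);
    }
)GLSL";
```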
Harder to display normal images in your pipeline; you may need to redefine ImTextureId to be a higher-level construct. (Though one could imagine the renderer backend being hardcoded to know that only the font atlas is SDF and the rest is by default RGBA.)
Not really I'd think? An SDF is just a simple image/texture like any other, just black/white for 1-channel and red/green for 2-channel (I use blue and alpha for extra information in my renderer, but just red/green is traditional).
Harder to include colored icons in the font atlas.
This is something I've not really explored yet. In my old renderer I can put images inline with text so I've not needed to create colored glyphs yet. However I could see a couple of fairly easy ways to do it that could be tested.
Harder to integrate in new projects.
Not really sure I see that. The only thing I did to add SDF to my renderer (from scratch) was change the font texture and write a different shader than the one I was using for text (I already had a text-specific shader to try to get text to look better before I switched to SDF a couple of years ago).
(Though I think that experiment was rather misguided by using the pixelated ProggyFont; it should have focused on a font that we'd actually want to see scaled.)
I initially experimented with Valve's SDF font; it is quite well made, though there are better ones out there now.
And if you can find a way to make it a possibility in imgui without affecting all users, why not.
Honestly I'd just consider it 'just another font'. A font could have properties describing how it should be rendered and such, then the user's renderer can order it how they wish and render it with whatever shaders they want, just like any other font.
Thanks for your feedback.
one shader to render the GUI pass then switch to another for the text pass
So we'd need imgui to separate the text output from the non-text, which would break cases where we rely on overlapping shapes. It will require a fair bit of work/fixing.
Not really I'd think? An SDF is just a simple image/texture like any other,
Yes, but the shader isn't the same. Most backend implementations are designed to transport "texture" information in ImTextureId rather than "full material information" (shaders/programs/textures, etc.). So if you want to submit regular images within your render you'd need to interleave shader changes.
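To make that trade-off concrete, interleaving would mean something like the sketch below in the backend's command loop. BindSDFProgram()/BindDefaultProgram() are hypothetical stand-ins for whatever state changes a given backend actually performs:

```cpp
#include "imgui.h"

// Hypothetical backend hooks, standing in for the real state changes
// (glUseProgram, pipeline binds, etc.) a given backend would perform.
void BindSDFProgram();
void BindDefaultProgram();

// Per draw command: pick the program based on whether the command samples the
// font atlas, then set the scissor and issue the indexed draw as usual.
static void SelectProgramForDrawCmd(const ImDrawCmd* pcmd)
{
    ImTextureID font_tex = ImGui::GetIO().Fonts->TexID;
    if (pcmd->TextureId == font_tex)
        BindSDFProgram();      // text (and anything else packed into the font atlas)
    else
        BindDefaultProgram();  // regular user images submitted through ImGui::Image()
}
```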
However I could see a couple of fairly easy ways to do it that could be tested.
FYI there is now an undocumented but public API for that:
ImFontAtlas::AddCustomRectRegular()
ImFontAtlas::AddCustomRectFontGlyph()
ImFontAtlas::GetCustomRectByIndex()
etc.
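A rough sketch of how that API can be used to reserve a rectangle and fill it with custom RGBA pixels after the atlas is built. Exact signatures and type names have shifted a little between imgui versions, so treat this as illustrative rather than copy-paste ready:

```cpp
#include "imgui.h"

// Reserve a 16x16 rect in the font atlas and fill it with our own RGBA pixels
// once the atlas has been built. Signatures vary slightly across imgui versions.
static void AddPlaceholderIconToAtlas()
{
    ImFontAtlas* atlas = ImGui::GetIO().Fonts;
    int rect_id = atlas->AddCustomRectRegular(16, 16);

    unsigned char* tex_pixels = nullptr;
    int tex_w, tex_h;
    atlas->GetTexDataAsRGBA32(&tex_pixels, &tex_w, &tex_h);   // builds the atlas if needed

    if (const ImFontAtlasCustomRect* r = atlas->GetCustomRectByIndex(rect_id))
        for (int y = 0; y < r->Height; y++)
        {
            ImU32* p = (ImU32*)tex_pixels + (r->Y + y) * tex_w + r->X;
            for (int x = 0; x < r->Width; x++)
                *p++ = IM_COL32(255, 128, 0, 255);            // placeholder: solid orange pixels
        }
}
```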
Harder to integrate in new projects.
Not really sure I see that. The only thing I did to add SDF to my renderer (from scratch) was change the font texture and write a different shader than the one I was using for text (I already had a text-specific shader to try to get text to look better before I switched to SDF a couple of years ago).
Half of dear imgui users have no idea what they are doing with shaders and textures, and some of the high-level libraries don't provide the same mechanism for custom shaders. I'm not saying this fact alone should discourage us from trying that technique, especially since we're experimenting there, but it's something to keep in mind. I think being able to have both would be ideal.
So we'd need imgui to separate the text output from the non-text, which would break cases where we rely on overlapping shapes. It will require a fair bit of work/fixing.
Honestly this should already be done; there are a lot of areas where specialized shaders could be of huge benefit (imagine a wispy cloud effect in the background of windows, for example). This should all be tied together as a shader/texture combination on something, though. :-)
Yes, but the shader isn't the same. Most backend implementations are designed to transport "texture" information in ImTextureId rather than "full material information" (shaders/programs/textures, etc.). So if you want to submit regular images within your render you'd need to interleave shader changes.
The renderer would indeed need another shader for it, but they probably already have such a shader depending on the engine they use, and it's not like they aren't using dozens to hundreds of other shaders anyway.
And I still wouldn't interleave shaders; I'd probably just use z-culling well, bake certain functions into the shaders, and then batch everything by 'type', as most renderers already do anyway.
Half of dear imgui users have no idea what they are doing with shaders and textures, and some of the high-level libraries don't provide the same mechanism for custom shaders. I'm not saying this fact alone should discourage us from trying that technique, especially since we're experimenting there, but it's something to keep in mind. I think being able to have both would be ideal.
The users, sure, but the users don't need to know about all that, only the person who integrates it into the engine itself, and I'd certainly hope they have at least an inkling of what a shader is, since that is required to render, well, anything on any half-modern card of the past 10-20 years. ^.^;
But yeah, handling both would be fairly easy, especially if the rendering was batched properly based on shader/texture/parameter combinations, and you could even bake clipping into shaders optionally to allow for doing that much faster as well. :-)
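On the "bake clipping into shaders" aside, that could just mean passing ImDrawCmd::ClipRect as a uniform and discarding fragments outside it, instead of issuing a scissor call per command. A purely illustrative GLSL snippet (again stored as a C string, not taken from any existing backend):

```cpp
// "Baking clipping into the shader": pass ImDrawCmd::ClipRect as a uniform and
// discard fragments outside it instead of changing scissor state per command.
// Purely illustrative; no shipping backend does this.
static const char* clip_in_shader_glsl = R"GLSL(
    // (min_x, min_y, max_x, max_y), pre-converted to framebuffer coordinates
    // the same way the GL backends already convert clip rects for glScissor.
    uniform vec4 ClipRect;
    void ApplyClip()
    {
        if (gl_FragCoord.x < ClipRect.x || gl_FragCoord.y < ClipRect.y ||
            gl_FragCoord.x > ClipRect.z || gl_FragCoord.y > ClipRect.w)
            discard;
    }
)GLSL";
```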
SDF is not cross platform.
@richardlalancette Signed Distance Fields is a technique; there's nothing platform-specific about it. As long as you have something shader-like for texture handling, you can perform the technique.
Still not cross platform.
So I would avoid that solution, unless you have a way to abstract it so there is an elegant fallback.
We deal with devices that do not have shaders! Or even a GPU!
Still not cross platform.
How so?
We deal with devices that do not have shaders! Or even a GPU!
Then that's extremely trivial to perform. Running shader-like code on the CPU is just a normal part of software rendering; whatever is doing the rendering can perform it (and to boot, it's extremely fast and able to be run in parallel, unlike a lot of other rendering code).
About the only thing it wouldn't work properly on is OpenGL 1. OpenGL 2 or OpenGL 1.1 + extensions will work fine (both of which are well over 20 years old; I doubt anything exists that doesn't support at least GL2, even on embedded accelerated hardware, and of course you can always fall back to pre-generating it on the CPU on demand, just like normal on-demand font rendering).
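As a sketch of what "shader-like code on the CPU" means here: turning a single-channel SDF bitmap into 8-bit coverage is just a per-pixel ramp, one independent scanline at a time. The names and the smoothing constant below are made up for the example:

```cpp
#include <algorithm>

// Convert one scanline of a single-channel SDF bitmap (where 128 ~ the glyph edge)
// into 8-bit coverage, i.e. the same math an SDF fragment shader runs per pixel.
// Each scanline is independent, so rows can be processed in parallel.
static void SdfScanlineToCoverage(const unsigned char* sdf_row, unsigned char* out_row,
                                  int width, float smoothing = 8.0f)
{
    for (int x = 0; x < width; x++)
    {
        float d = (sdf_row[x] - 128.0f) / smoothing;                  // signed distance, in "texels"
        float coverage = std::min(1.0f, std::max(0.0f, d + 0.5f));    // linear ramp across the edge
        out_row[x] = (unsigned char)(coverage * 255.0f + 0.5f);
    }
}
```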
Well, the current solution works everywhere.
SDF would require work and time we don't have.
I guess IMGUI is perfect as is, because it works without more work.