A lot of engines work in a way where update() is called at a fixed rate, while render() happens at an independent (usually as-fast-as-possible) rate.
How does ImGui fit into this sort of usage pattern? Right now it appears the ImGui calls should live in render(), but then a change in frame rate would also change the logic update rate, which seems wrong.
You can draw imgui multiple times in a row (with the same visuals) without going through the NewFrame/Render cycle again, by using the GetDrawData() function.
That lets you keep the imgui calls inside the update call.
so
update -> ImGui calls/logic
render -> other stuff that isn't ImGui?
You set up imgui without an io.RenderDrawListsFn function.
update does the ImGui::NewFrame(); ... ImGui::Render();
render does drawImgui(ImGui::GetDrawData());
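A minimal sketch of that split, assuming drawImgui() is your own backend function that renders ImGui draw lists:

```cpp
// update(): runs at a fixed rate; all ImGui widget calls live here.
void update()
{
    ImGui::NewFrame();
    ImGui::Begin("Debug");
    ImGui::Text("Built at the fixed update rate");
    ImGui::End();
    ImGui::Render();   // finalizes draw data; does not draw anything itself
}

// render(): runs as fast as possible, possibly more often than update().
void render()
{
    // GetDrawData() returns the lists produced by the last ImGui::Render(),
    // so drawing them again on extra render frames repeats the same visuals.
    drawImgui(ImGui::GetDrawData());
}
```

Because ImGui::Render() only finalizes the draw data, the render thread can replay it any number of times between updates.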
With this method I assume it's not possible to use ImGui calls within render() anymore? I.e. they must all live within the update cycle?
You can't call any ImGui widget functions after ImGui::Render() and before the next ImGui::NewFrame().
Usually ImGui::Render() would be called at the very end of your own rendering, so you can use imgui widgets within the rendering of your game/engine.
The problem I have is that I only wanted to do the "logic" parts in update(), which means putting ImGui::* calls in there. However, there was one case where I also wanted calls within my render(), which doesn't work when used this way. That makes sense, because there can be unbalanced numbers of update/render calls.
Perhaps using two separate imgui context would allow for this.
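A sketch of the two-context idea, assuming ImGui::CreateContext()/SetCurrentContext() (the real multi-context API) and a hypothetical drawImgui() backend function; each context needs its own io/font setup, and all of this assumes the two functions are not called concurrently:

```cpp
// One context driven from update(), one from render().
ImGuiContext* updateCtx = ImGui::CreateContext();
ImGuiContext* renderCtx = ImGui::CreateContext();

void update()
{
    ImGui::SetCurrentContext(updateCtx);
    ImGui::NewFrame();
    // ... fixed-rate widgets ...
    ImGui::Render();
}

void render()
{
    ImGui::SetCurrentContext(renderCtx);
    ImGui::NewFrame();
    // ... per-render-frame widgets ...
    ImGui::Render();
    drawImgui(ImGui::GetDrawData());   // render-context draw data

    ImGui::SetCurrentContext(updateCtx);
    drawImgui(ImGui::GetDrawData());   // replay update-context draw data
}
```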
I think this is where I'm at right now.
I have update/render operations decoupled into separate threads. All ImGui handling currently occurs on the render thread.
A side effect of this is that ImGui will start to miss input events whenever the rendering performance drops off (since input is processed asynchronously, this is effectively a race condition on GetIO having visibility of the input state).
Using two separate ImGui contexts seems interesting, but the render context would still be subject to missed IO events in such cases.
One way forward would be to implement a buffered interface to GetIO (and if I want to avoid synchronising between those threads, I'll need a buffer for each thread). I'm not sure I want to go down that path.
Maybe I'm thinking about this the wrong way?
You'll maybe want to use something like:
https://gist.github.com/ocornut/8417344f3506790304742b07887adf9f
That looks pretty interesting! Nice. I'll have a play around.