Godot-proposals: Do NOT clean the data in MultiMesh when changing instance_count

Created on 29 Jan 2020 · 3 comments · Source: godotengine/godot-proposals

Describe the project you are working on:
A game spawning multiple thousands of entities which will stay around for a long time.

Describe the problem or limitation you are having in your project:
Currently, when using MultiMesh, one has to provide the maximum number of instances via instance_count, which potentially leads to moving a lot of unused data to the GPU.
If instance_count is changed, all old data is lost, and there is no sensible way to recreate it since there is no get_as_bulk_array function.

Describe how this feature / enhancement will help you overcome this problem or limitation:
If there were some way to enlarge the MultiMesh buffer without losing the old data, I could enlarge the buffer in steps (say, 1k).
Every time I want to add a new instance, I would check whether visible_instance_count < instance_count - 1 still holds, and if it doesn't, enlarge the buffer by another 1k.
This would keep the overhead at bay and would prevent me from enforcing some arbitrary limit on the number of instances.
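A minimal sketch of that pattern in GDScript; resize_keeping_data() is hypothetical and stands in for whatever mechanism the proposal ends up providing, and visible_instance_count is assumed to have been initialized to 0 (its default of -1 means "draw all instances"):

```gdscript
# Growth pattern sketch: enlarge the buffer in fixed chunks instead of per
# instance, so the (slow) reallocation only happens rarely.
const GROW_STEP = 1000

func add_instance(multimesh, xform):
    var next = multimesh.visible_instance_count
    if next >= multimesh.instance_count:
        # Hypothetical helper: grows instance_count by GROW_STEP while
        # keeping the data already uploaded to the GPU (what this proposal asks for).
        resize_keeping_data(multimesh, multimesh.instance_count + GROW_STEP)
    multimesh.set_instance_transform(next, xform)
    multimesh.visible_instance_count = next + 1
```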

Describe implementation detail for your proposal (in code), if possible:
I see two different possibilities to implement this:


1. Add an enlarge function which does the same as setting instance_count, but also takes an optional parameter keep_old_data=false. When keep_old_data is true, copy the old data over to the new buffer at offset 0. If the new size is smaller than the old one, only copy the first new_size items from the old data.


2. Add a get_as_bulk_array function, and add an optional offset parameter to set_as_bulk_array or add a set_range_from_bulk_array. This way users could copy the old data over on their own (see the sketch after this list).
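A sketch of how option 2 could look from GDScript; neither get_as_bulk_array nor an offset parameter on set_as_bulk_array exist today, so every call marked "proposed" below is hypothetical:

```gdscript
# Enlarge a MultiMesh while keeping its data, using the proposed calls.
var old_data = multimesh.get_as_bulk_array()  # proposed getter
multimesh.instance_count += 1000              # currently this clears the GPU buffer
multimesh.set_as_bulk_array(old_data, 0)      # proposed optional offset parameter
```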

If this enhancement will not be used often, can it be worked around with a few lines of script?:
Apart from limiting myself to an arbitrary number of instances, the only way I can think of is to additionally keep a copy of all the data in a script and use set_as_bulk_array.
But since that function doesn't accept an offset parameter, one would then also need to pad the extra array to instance_count.
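That workaround would look roughly like this in current GDScript (Godot 3.x); it assumes transform_format = TRANSFORM_3D with no color or custom data, so each instance occupies 12 floats in the bulk array, and the shadow buffer and function names are illustrative:

```gdscript
# Script-side shadow copy of the whole buffer (current API only).
const FLOATS_PER_INSTANCE = 12  # TRANSFORM_3D, no color/custom data

var shadow = PoolRealArray()    # authoritative copy of all instance data

func grow_multimesh(multimesh, new_count):
    # set_as_bulk_array() expects exactly instance_count * FLOATS_PER_INSTANCE
    # floats, so the shadow buffer has to be padded to the new size.
    var old_size = shadow.size()
    shadow.resize(new_count * FLOATS_PER_INSTANCE)
    for i in range(old_size, shadow.size()):
        shadow[i] = 0.0
    multimesh.instance_count = new_count  # this wipes the GPU-side data...
    multimesh.set_as_bulk_array(shadow)   # ...so the whole buffer is re-uploaded
```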

Is there a reason why this should be core and not an add-on in the asset library?:
It's a trivial change (in fact, I would be willing to send a PR for this), it doesn't break anything, and most of all: it makes sense.
I think the current behavior is not sensible for the normal use cases (why would I ever enlarge my MultiMesh at runtime if that means losing all the data?), but I realize changing this might break existing code, hence my non-breaking proposal.

rendering

All 3 comments

I think this is a good idea in general. However, you should consider the fact that the MultiMesh is not intended to be resized dynamically. Allocating the MultiMesh buffer on the GPU is a slow operation. If you are resizing it dynamically, you lose the benefits of using a MultiMesh in the first place.

An optimal workflow for working with MultiMeshes is to allocate to the largest size you need and then hide/show instances by setting the visible_instance_count. This leaves the buffer management up to the user and reinforces that the buffer should not be resized.
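That workflow looks roughly like this in GDScript; the node path, maximum count, and spawn() helper are illustrative, and transform_format is assumed to already be set on the MultiMesh resource:

```gdscript
extends Node

const MAX_INSTANCES = 10000  # example upper bound, allocated once

onready var multimesh = $MultiMeshInstance.multimesh

func _ready():
    multimesh.instance_count = MAX_INSTANCES  # one-time (slow) GPU allocation
    multimesh.visible_instance_count = 0      # draw nothing until instances are added

func spawn(xform):
    var i = multimesh.visible_instance_count
    if i >= MAX_INSTANCES:
        return  # out of preallocated slots
    multimesh.set_instance_transform(i, xform)
    multimesh.visible_instance_count = i + 1
```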

At the same time, if a user absolutely needs to resize the buffer for whatever reason, it makes sense to copy over the previous buffer data first. Even though this would be a slow operation, buffer allocation is slow to begin with. We would just have to be careful to further reinforce that the allocation function should not be called at runtime.

I am aware of visible_instance_count and of the overhead of allocating a new buffer. That's the reason I thought about enlarging it only in 1k (or, probably better, 10k) steps in my code and then using visible_instance_count.
This accepts occasional spikes in exchange for the flexibility.

After sleeping on it:
Adding a get_as_bulk_array with support for copying only a range (from_offset to to_offset), and adding a way to replace a range of the data via set_as_bulk_array (or a new function specific to that), would probably be the better idea.
This would allow users to fill the new buffer with the old data (more or less) efficiently, but would also allow moving data ranges from the back to the front, for example, and thus allow better management of "removed" instances.
Additionally, I assume it should be easy to implement.
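As a concrete example of moving ranges from the back to the front: removing instance i could become a swap-remove, again using the proposed (hypothetical) range calls:

```gdscript
# Swap-remove sketch (proposed range calls, hypothetical):
# overwrite the removed slot with the last visible instance and shrink
# visible_instance_count, keeping the live instances densely packed.
func remove_instance(multimesh, i):
    var last = multimesh.visible_instance_count - 1
    if i < last:
        var data = multimesh.get_as_bulk_array(last, last + 1)  # proposed range getter
        multimesh.set_range_from_bulk_array(i, data)            # proposed range setter
    multimesh.visible_instance_count = last
```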

I was looking at the Vulkan code for allocating MultiMeshes recently and it looks like it might be way cheaper to do in Vulkan than in OpenGL (may not require a full buffer copy). If it turns out to be relatively cheap this is something that I would like to implement.
