Describe the project you are working on:
I am trying to make an in-game character generator for Godot based on the MakeHuman project, like in Skyrim or Fallout 3/4.
https://github.com/Lexpartizan/Go_MakeHuman_dot/
https://www.youtube.com/watch?v=cPVjNh2Ki4I
I made a .blend file containing one model with 200+ shape keys and imported it into Godot.
I need to set the blend shape values to customize the character's appearance.
I can already do this; my GitHub page has a Godot project with a working example.
Describe how this feature / enhancement will help your project:
After applying all the shapes, I want to generate a MeshInstance without blend_shapes, because a mesh with blend shapes takes up a lot of memory for each character.
The mesh imported into Godot with all shape keys is 100 MB.
And you need to make the resource local for each distinct character.
1 character on screen = 100 MB of video memory, and that is vertex memory alone.
10 characters on screen = 1 GB of video memory.
Without blend shapes, one character is only 2.5 MB. A zombie rush becomes possible, and every zombie can be unique.
An AAA feature from one small function.
But I don't know how to do that. I don't even know whether it is possible.
Hence this feature request.
I think a character generator is a very important part of many role-playing games, and an efficient implementation depends entirely on whether it is possible to apply the shape keys and obtain a small, finalized version of the mesh. The inefficient version already works and can be used in games with few characters on screen at once, for example fighting games.
I can do this in Blender, but for an in-game character generator it must be done in code.
I have asked this question on forums, Discord, the Q&A site, etc., and did not get an answer.
Is there a reason why this should be core and not an add-on in the asset library?:
Because this is a basic function for working with meshes, and other functions for working with blend shapes already exist in core.
If this enhancement will not be used often, can it be worked around with a few lines of script?:
That may be possible. So I would like to know those lines ;-).
Now I have worked up the nerve to ask this question on GitHub. Sorry for that, and sorry for my English. I hope you can understand the problem from this vague description.
I'm having a hard time understanding your request.
Sorry about that. English is not my first language.
If this could be solved with just a few lines of code, why not implement such a function in one of the mesh classes? It would not affect performance at all for those who don't use it.
So could you give me those few lines of code?
I would be very grateful.
I really need this.
I think the engine is already building the surface I need (how else would a 3D model with blend shapes be displayed?), but I need a function that lets me get the resulting mesh.
I am going by the idea that this could be solved with a script. Sorry, I don't actually know how to do it. Good luck though, keep researching.
A possible design could be this:
my "research" did not lead to anything. Not my level programming.
However, I did a lot of boring work to translate most of Makehuman project into a Godot. This work relates more to blender and modeling than to programming. And I think that the character generator (open source) would be useful to the whole community, not just me. But I’m stuck with this optimization, which limits this character editor too much. It would be a shame to throw all the work to the dump when you need only a few lines of code, one function.
So I turn to the community for help.
I'm going to go in for clothes for the characters, but that doesn't make sense if two characters eat up all of the available video memory.
Therefore, I VERY need help with this.
Merge all the target blend shapes to one blend shape
How do I do this in Godot?
I know how to do it in theory, and I know how to do it in Blender.
But I don't know how to do it in Godot with code.
This works for one blend shape and will get you the blend shape mesh. Once you have the blend mesh and the original mesh, you can read the vertices and linearly interpolate with the blend value between 0.0 and 1.0.
https://docs.godotengine.org/en/3.1/classes/class_@gdscript.html#class-gdscript-method-lerp
Break the position into its three components and use lerp(target_mesh_float, blend_shape_float, blend_weight) over all the vertex locations.
Edited:
https://docs.godotengine.org/en/latest/classes/class_mesh.html#class-mesh-method-surface-get-blend-shape-arrays
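Putting the above together, a minimal GDScript sketch (Godot 3.x) could look like the following. It assumes a single-surface mesh whose blend shape arrays hold absolute vertex positions (adjust if your shapes are relative deltas); the function and variable names are made up:

func bake_blend_shape(source_mesh, shape_index, weight):
    # Read the base arrays and one blend shape's arrays from the mesh resource.
    var arrays = source_mesh.surface_get_arrays(0)
    var shape = source_mesh.surface_get_blend_shape_arrays(0)[shape_index]
    var verts = arrays[Mesh.ARRAY_VERTEX]
    var shape_verts = shape[Mesh.ARRAY_VERTEX]
    for i in range(verts.size()):
        # Linear interpolation between the base and the blend-shape position.
        verts[i] = verts[i].linear_interpolate(shape_verts[i], weight)
    arrays[Mesh.ARRAY_VERTEX] = verts
    # Build a new mesh with the shape baked in and no blend shape data.
    var out = ArrayMesh.new()
    out.add_surface_from_arrays(Mesh.PRIMITIVE_TRIANGLES, arrays)
    return out

Calling this once per shape key and feeding the result back in would bake a whole set of sliders.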
Thank you, you've given me something to think about. Yeah, that makes sense, although I still don't understand how to work with vertices; that's above my level. So help is still needed.
Well, it is possible to do just in plain gdscript. See my repository at https://github.com/gamedev-kindness/make-target
This basically shows why things like this are better implemented in the engine in C++: it is very hard to make them run at adequate speed otherwise. It is just a bit too slow.
What you generally do
The problem is that there is no way to modify ArrayMesh vertices en masse with sparse indices, i.e. supplying an array of indices and an array of deltas and modifying the mesh in one shot in the most effective way. I think this is what is mostly needed for the solution to succeed.
@slapin
I don't really understand how to generate the UV texture and how to work with the colors in it. Also, I think recalculating the offsets and vertices every frame in the viewport will have a negative impact on performance. I will spend a couple of weeks looking at your code, start working with vertices, and try to get some result on my own. But I will definitely contact you on Discord (especially since you speak Russian). I understand you're working on a similar project.
You do not need to update every frame; you just update your vertices once and then work with the result. The texture is just a good way to store deltas.
See the Sims 4 GDC video about their editor for a more in-depth explanation.
This proposal explains the technical dependencies needed to make proper character customization editors: https://github.com/godotengine/godot-proposals/issues/41
I have made some progress with this.
I followed @fire's advice and looked into @slapin's code. I now get a Mesh without blend shapes, though I haven't yet checked how things stand with the UVs and normals. But it's a big step for me. And yes, it really does take very few lines of code.
I hope that tomorrow I can clean it up a bit, add comments, and post it here.
I'm only going to save the key values for each character; when a character appears in the scene, it generates a mesh for itself, which is not saved anywhere but simply stays in memory until the character is removed from the scene.
It would be nice if #41 were added to core; that would save a lot of trouble. Please vote for it while you're there.
@fire thanks
Oh yeah, thank you so much to everyone who helped me with this, especially @fire and @slapin.
And I think that the character generator (open source) would be useful to the whole community, not just me.
Are you aware of this project:
https://github.com/Grumoth/godot-character-creator
https://www.youtube.com/watch?v=uowc04bAKPg&list=PL1x_Sm7RYv5fHdlsmnGPt2hYEh2wI8Cwv&index=1
@golddotasksquestions,
No; thanks for the information. It's a good generator! I'll download it and look through its code tomorrow.
I'm just trying to make MakeHuman for Godot.
The work is great, especially the artistic side. However, there are some shortcomings with this approach:
More advanced examples of this approach are the Honey Select rig and the Unity UMA rig; they use dedicated bones for non-uniform scaling.
Also, having transforms constantly set might affect performance for larger skeletons, so keeping a transform on many bones (via set_custom_pose) might not be a good idea if you want many characters. And I have seen that games using this approach usually have 2-3 characters on screen, no more, so I guess it is quite expensive. The Sims 4 approach is much less expensive, as characters end up optimal, without extra bones and without (unnecessary) blend shapes.
However, if your only platform is PC or current-gen consoles, you can use an editor like that and then process the result and optimize it in an additional pass. You will need to apply the bone scale, apply the blend shapes to the mesh, and remove the blend shapes. You will also need to pack the materials.
Is there any way to get a regular array with the vertices, UV coordinates and so on from an ArrayMesh? You can build an ArrayMesh from such arrays, but I would just like to read these values. I use MeshDataTool now, but it is too slow. I would like to get a plain array of vertices, work with them, and then create a new ArrayMesh. Right now I have 50 different characters on screen at 60 frames per second, with 35 MB of video memory consumed. But I want a faster way to create a character.
In addition, there is one small bug. When I keep the mesh in the scene purely as data (I read the blend shapes from it) and it is never displayed on screen, it still consumes video memory (video memory, not RAM!), so I should just clear that reference.
var mdt = MeshDataTool.new()
mdt.create_from_surface(load(basis_mesh), 0)
for i in range(mdt.get_vertex_count()):
    var vertex = mdt.get_vertex(i)
    # Blend each vertex toward the shape-key position by factor ix.
    mdt.set_vertex(i, vertex.linear_interpolate(blendshp[shape1][0][i], ix))
mesh = ArrayMesh.new()  # commit_to_surface() requires an ArrayMesh, not a plain Mesh
mdt.commit_to_surface(mesh)
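A hypothetical usage of the snippet above (the node setup is made up; as the thread notes below, MeshDataTool's per-vertex calls are what make this route slow):

var mi = MeshInstance.new()
mi.mesh = mesh  # display the blended result
add_child(mi)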
Sorry, I already found surface_get_arrays(surf_idx). Thanks!
This function is on Mesh, not ArrayMesh.
See the ArrayMesh docs; I'm writing from memory:
surface_get_arrays()
surface_get_blend_shape_arrays() (IIRC)
You don't need the meshes to be part of a MeshInstance for this.
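For illustration, reading the arrays straight from a Mesh resource, with no MeshInstance involved, might look like this (a sketch; the resource path is made up):

var mesh = load("res://character.mesh")  # any imported Mesh resource
var arrays = mesh.surface_get_arrays(0)
var blend_arrays = mesh.surface_get_blend_shape_arrays(0)
print(arrays[Mesh.ARRAY_VERTEX].size(), " vertices, ", blend_arrays.size(), " blend shapes")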
Yes, I don't use MeshInstances for this.
This, by the way, makes it possible to avoid using blend shapes under GLES2.
surface_get_arrays() is in the Mesh docs, not the ArrayMesh docs; that's why I didn't find it right away.
So slow!

for i in range(size_vertex_arr):
    var vertex = basis_mesh[Mesh.ARRAY_VERTEX][i]
    var target = blendshp["blndshp"][name_shp][0][i]
    basis_mesh[Mesh.ARRAY_VERTEX][i] = Vector3(
        lerp(vertex.x, target.x, value),
        lerp(vertex.y, target.y, value),
        lerp(vertex.z, target.z, value))

0.5 seconds for a model with 16,000 triangles.
With 200 blend shapes, that is 100 seconds per character...
But the engine generates the on-screen model from a mesh with blend shapes in real time.
Do you mean the generator part or the display part?
The generator part is OK to be slow, as it is done once per asset.
Anyway, I am working on a C++ version of the generator now, and it takes from 100 ms to 1 s per character to generate the raw shape data without compression, which is basically triangle rasterization on the CPU. I think 100 ms will be the maximum time to apply a shape using the C++ version.
The Godot developers said a big NO to all ways of optimizing this, so the only way is an engine fork to add this feature.
That is very sad. It is interesting how a surface with blend shapes gets rendered. As I understand it, the engine builds the surface we need with the help of the GPU and then displays it. All that remains is to grab this surface at the right moment...
In theory, the work has already been done; it remains to find out how to get the result.
Yes, C++ would allow using this with GLES2, but that doesn't interest me much.
100 ms for how many vertices and blend shapes?
GDScript takes 400-500 ms for 1 blend shape and 16,000 vertices.
My mesh is 26K triangles with 76 blend shapes. The code is not fully optimized yet, though. The 100 ms is for the whole mesh.
But do not consider that a final number; it is mid-development.
Impressive performance; 1000x faster.
I guess most of the performance is gained through fewer cache misses and the use of trivial types. The code is identical to a software rasterizer and simply can't be as slow as it ends up being in GDScript. Quake did this in real time at 60+ frames per second on an old Pentium 133 at 133 MHz.
Why can't an i7-2600K at 3 GHz do it at least 10 times faster, hmm? OK, let it not be faster, but why slower?
Where the performance goes, and how things were achieved before, is an eternal question. I was shocked when I wrote a suboptimal algorithm in a Mario clone. You would think the performance is more than enough to easily add/remove elements from a small map (an STL list). But no, it resulted in wild stutters, until I fixed the algorithm and removed the extra interactions with the map. And that was in C++ with SFML!
A 3 GHz CPU, and it freezes!
Of course, there is usually only one optimal way to do a thing, but unlimited suboptimal ways.
I'll have to change my approach. I wanted to generate characters when adding them to the scene, but now I'm thinking of saving them to files while keeping the basic shape keys (emotions, muscularity, fatness, thinness) in the mesh. The silver bullet failed.
This is the wrong way, but I don't see another option.
do you mean generator part or display part?
The generator, of course.
In the generator I use the mesh with blend shapes (not usable under GLES2) and create a dictionary with the shape-key values (saved to JSON). The dictionary stores only the keys that differ from 0, which reduces the number of loop iterations during generation.
With this dictionary (loaded from JSON) it takes about 1 minute to create the result mesh...
Generator part is OK to be slow
Generating 1 character takes 1 minute.
Generating 10 characters takes 10 minutes. That is a long time.
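For illustration, the "only non-zero keys" dictionary described above could be serialized like this (a sketch; the path and key names are made up):

# Save only the shape keys that differ from 0.
var shape_values = {"fat": 0.7, "jaw_width": 0.3}
var f = File.new()
f.open("user://npc_01.json", File.WRITE)
f.store_string(to_json(shape_values))
f.close()

# Later, when the character is (re)generated:
f.open("user://npc_01.json", File.READ)
var restored = parse_json(f.get_as_text())
f.close()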
You need to generate the maps only once, then apply them. Application is a composition of all the maps and takes 10 seconds per character on my side.
Try the sliders in my demo; each slider change applies ALL the maps at once.
Application is composition of all maps and takes 10 seconds for each character on my side
That's fast; my simple way takes 100 seconds. May I contact you on Discord? It is very difficult to understand your algorithm through Google Translate. I don't understand how a texture can speed up the application of a bunch of modified shape keys. It would be easier in Russian ))
each slider change applies ALL maps at once.
But a slider changes 1 blend shape. So if I change 1 blend shape through the vertices, I get the same speed.
Or do you create an image for each blend shape once and then mix the images?
I don't understand why the texture is needed.
The texture is needed to propagate changes to all meshes of different topologies that share the same UVs, like male and female.
Textures are also used to propagate changes to clothes (though using cloth helpers is better for skirts and robes).
Actually, you don't need to think of these as textures; each is more of a data array or a heightfield of sorts.
I realized that the texture stores the difference between the base model and the result of applying the shape keys. I don't understand where the performance gain comes from, because we already have all these blend shapes in the form of arrays.
For the time being, I see the following way out for my game: I generate the characters separately (in advance, not in the game itself) and save them using ResourceSaver.
If a character needs to change, load it into the character editor and save it there again with ResourceSaver. Not a good way, but...
Generating the changes on the fly when loading a character will not work; it is too slow.
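A minimal sketch of that ResourceSaver round trip (Godot 3.x; the path is made up and generated_mesh stands for the baked ArrayMesh):

# Save the baked mesh once, at the tool/editor stage.
ResourceSaver.save("user://npc_01.res", generated_mesh)

# In the game, load the pre-baked mesh instead of regenerating it.
var mesh = load("user://npc_01.res")
$MeshInstance.mesh = mesh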
My solution is aimed more at creating characters with sliders.
The C++ version has promising speed, but it will take a few weeks to complete, I guess.
If you do not understand the benefits, then it is just not for you. See the Sims 4 GDC video, maybe you will, but that does not really matter. You can do the same thing differently depending on your constraints. If you need a quick turnaround and your meshes share the same topology, you can of course use blend shapes, but they have the problem of:
@slapin
I was very interested. I will switch to Russian to ask a couple of questions so that you understand me; sorry to everyone else for that.
As I understand it, deformation maps will work with any topology. That, in turn, would let me use any topology, including the low-poly one from MakeHuman. And that is interesting. Besides, in The Sims 4 this is done on the CPU. I could not find the video; the short document says the texture stores the difference between two meshes. But I cannot understand how it works.
We have the base mesh, and we have, for example, the shape of a fat character. Is that shape stored as a texture or as a separate mesh (possibly with a different topology)? If as a texture, how do you make a character fat by 0.5?
If as a mesh, then for each shape we get one texture, which cannot be generated in advance (after all, we do not know beforehand how fat the character should be). And how do we mix them together into one? Not to mention the actual system for applying such a texture. In any case, I will need to look at your project more closely. But first I would like to know how it works in principle. What is the approximate algorithm?
We generate the textures (in your project there is a rather long generation step at startup) from all the meshes with shapes (this does not have to be done in the game; it can be done in advance, as I understand it). And then we mix them somehow? If each shape is represented by a texture, then perhaps they could be blended with some shader to get the result quickly.
I'll answer the question of how image-based, topology-independent morphing works.
The process is this:
r = shape_vertex.x - base_vertex.x
g = shape_vertex.y - base_vertex.y
b = shape_vertex.z - base_vertex.z
The calculation above is simplified for explanation purposes; you will need to do some extra work to make sure you get non-negative values and use the 0.0-1.0 range in the most effective way. See my code for a better understanding.
4. You save all the data in a format suitable for run time (compressed, and with the conversion data).
This should be a simple enough explanation of the idea. There are additional nuances, but they are not important for understanding it.
In short, the sequence is approximately as in the pseudocode above, which illustrates the calculation in simplified form; the nuances are omitted.
The steps above are the map-generation stage. It is essentially a development-time step and is not needed in the finished program; although it is fast in C++, it still eats memory, so it should not be pushed onto mobile devices.
Next come the steps performed in the user application (the game):
I hope I explained it clearly enough. If something is unclear, ask specific questions.
As for speeding all this up: it is fast enough on the CPU if you use C or C++ and certain algorithms (spatial partitioning). GDScript is just hellishly slow; it copies a great deal and uses very fat data types.
Threads could also be used, but I believe that area is broken in master, and since nobody cares about 3D, it is unlikely to be fixed soon.
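To make the generation stage concrete, baking per-vertex deltas into a Godot Image could be sketched like this. This is illustrative only: it assumes the deltas were already normalized to 0..1 (see the normalization follow-up below), that uvs and normalized_deltas are parallel per-vertex arrays, and that the output path is made up. A real implementation rasterizes whole triangles into the map, which is the "triangle rasterization on the CPU" mentioned earlier:

var img = Image.new()
img.create(256, 256, false, Image.FORMAT_RGB8)
img.lock()
for i in range(uvs.size()):
    # Write each vertex delta into the texel addressed by its UV.
    var d = normalized_deltas[i]
    var px = int(uvs[i].x * (img.get_width() - 1))
    var py = int(uvs[i].y * (img.get_height() - 1))
    img.set_pixel(px, py, Color(d.x, d.y, d.z))
img.unlock()
img.save_png("user://fat_shape_map.png")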
@slapin Big thanks for the explanation! I couldn't figure out where the modifier was used; now everything is clear. The only remaining question is whether a normal map is needed. And what if the offset is negative? A texture cannot store negative values. However, this can be corrected by treating the middle of the range as 0.
We have many textures. What if we used a mix-shader node to blend the textures on the GPU in the usual way? As far as I understand, it is just linear interpolation with a modifier. With a shader we would mix the many textures into one, and apply that one texture of values to the mesh. Maybe it would be faster.
And why use an additional UV layer, if everything can be done with the first (default) one?
This is why I said I simplified the explanation. You need to normalize your maps so that the r, g, b values are always between 0 and 1.
The usual linear transform is ax + b, so you need the a and b values calculated. So you compute the max and min values separately for x, y, z, and then do:
x' = (x - minx) / (maxx - minx)
y' = (y - miny) / (maxy - miny)
z' = (z - minz) / (maxz - minz)
This way x', y', z' will be in the range from 0 to 1.
When applying, you restore the values:
x = minx + x' * (maxx - minx)
y = miny + y' * (maxy - miny)
z = minz + z' * (maxz - minz)
So you will want to keep float min[3] and float max[3] (or min and a precalculated (max - min) value) together with your map for easier restoration.
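In GDScript that round trip might look like the following sketch, where d is one delta and minv/maxv are the per-axis minimum and maximum deltas computed beforehand (the names are made up):

# Pack a delta into the 0..1 range so it fits into a texel.
var n = Vector3(
    (d.x - minv.x) / (maxv.x - minv.x),
    (d.y - minv.y) / (maxv.y - minv.y),
    (d.z - minv.z) / (maxv.z - minv.z))

# Restore the original delta from a texel (component-wise multiply).
var restored = minv + n * (maxv - minv)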
Alternatively, you can use your own map format not dependent on Godot's Image, but a byte-based integer format is best for the task because it is cache-friendly and works fast. You can also optimize fetches by using a custom format and avoiding dynamic heap allocations and memory fragmentation in general.
I plan to use Image for now, but if I get a grasp on a good in-memory image compression algorithm, I will move to a raw byte array.
As for GPU composition, I think you could try it; I currently prefer the CPU solution, as it should be fast enough for the job.
The GPU might choke on 200 textures, so you would need to iteratively accumulate the data in blocks, aiming for the maximum texture sampler count; though I think it would choke on the uniform buffer size first... I never tried it, so I think you could. Compute shaders or OpenCL might be more promising for this. But I bet more on the CPU as the heavy lifter for this task, as it is the more universal solution. The mesh might need converting to an interleaved vertex buffer first for better cache friendliness, but we will see about that.
@slapin
I am very grateful to you for describing such an unexpected solution.
I will keep studying your code. Thank you so much.
P.S. Shader programming is above my level, but I am experimenting with the Visual Shader Editor for simple mixing of two images.
Thanks everyone for the help.
I reached the speed I needed using GDScript.
This code is very slow:

var array = [[], []]  # a nested (2D) array
for i in range(len(array)):
    for j in range(len(array[i])):
        array[i][j] = some_func()

This code is more than 50x faster:

var array = [[], []]
for i in range(len(array)):
    var temp = array[i]  # Arrays are references, so writing to temp writes to array[i]
    for j in range(len(temp)):
        temp[j] = some_func()

(some_func() stands in for whatever computes the new element; the speedup presumably comes from resolving array[i] only once per row instead of on every inner iteration.)
You can download my demo at https://github.com/Lexpartizan/Go_MakeHuman_dot