TrinityCore: PoolMgr RAM usage

Created on 29 Dec 2016 · 3 comments · Source: TrinityCore/TrinityCore

Description:
I inspected RAM usage on startup and noticed that a huge amount was consumed by PoolMgr, roughly 1/10 of total memory.
http://i.imgur.com/YKnOWeD.png

Further inspection revealed that most of the reserved memory is probably not used for anything.
Some small testing showed that switching to a map reduces memory usage to maybe 1/10 of the original, and startup time was reduced by about 30 seconds in a debug build with the profiling tools attached (so on release maybe a few seconds? just a guess).

It was also observed that some parts of the core loop over the whole structure when saving,
i.e. looping from 0 to the max entry (about 200000 iterations).
Changing the structure would make these loops relatively faster, as there are only about 4000 real entries instead of 200000.
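As a rough illustration of the difference, here is a minimal sketch assuming a simplified pool container; the type and function names are hypothetical, not the actual PoolMgr members:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct PoolTemplateData { uint32_t MaxLimit = 0; };

// Current pattern: walk every possible id up to the DB maximum (~200000),
// even though only ~4000 pool ids actually exist.
void SaveAllVector(std::vector<PoolTemplateData> const& pools)
{
    for (uint32_t id = 0; id < pools.size(); ++id) // ~200000 iterations
    {
        // most slots are empty placeholders
        (void)pools[id];
    }
}

// Map-keyed pattern: walk only the ids that exist (~4000 iterations).
void SaveAllMap(std::unordered_map<uint32_t, PoolTemplateData> const& pools)
{
    for (auto const& [id, data] : pools)
    {
        // every element corresponds to a real pool_template row
        (void)id; (void)data;
    }
}
```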

The root cause of the issue is here:
https://github.com/TrinityCore/TrinityCore/blob/bf33159a7009f64a78cf2a1309eb5182fcd3f7e3/src/server/game/Pools/PoolMgr.h#L143-L160
https://github.com/TrinityCore/TrinityCore/blob/bf33159a7009f64a78cf2a1309eb5182fcd3f7e3/src/server/game/Pools/PoolMgr.cpp#L560-L571
As can be seen, vectors are used and they are resized to the DB max entry value, which is 202482.
Looking at the DB, there are nowhere near that many entries.
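For context, the pattern in question looks roughly like this; this is a simplified sketch, not the exact PoolMgr code, and the member and struct names are illustrative only:

```cpp
#include <cstdint>
#include <vector>

struct PoolTemplateData  { uint32_t MaxLimit = 0; };
struct PoolGroupCreature { /* spawn data for one pool, simplified */ };

int main()
{
    // Highest entry in pool_template as reported: 202482.
    uint32_t const maxPoolId = 202482; // SELECT MAX(entry) FROM pool_template

    // Current pattern: each per-pool container is sized to max entry + 1,
    // so ~200k elements are allocated per container even though only a few
    // thousand pool_template rows exist, and this is repeated for every
    // pool type (creatures, gameobjects, child pools, ...).
    std::vector<PoolTemplateData>  poolTemplates(maxPoolId + 1);
    std::vector<PoolGroupCreature> poolCreatureGroups(maxPoolId + 1);

    return 0;
}
```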

Current behaviour: (Tell us what happens.)
PoolMgr consumes a large amount of memory for little benefit.
The "empty values" are not used for anything, from what I can see.
Vectors are used.
The vectors reserve "MAX(entry) FROM pool_template" elements each.
There are huge gaps between the entries, which causes the wasted memory.
Not sure if this is intended, but when checking for pools on load, only entry < max entry is verified. It is never checked whether the entries used actually exist in the pool_template table, which means pool IDs that don't exist can be used without producing errors.

Expected behaviour: (Tell us what should happen instead.)
PoolMgr should allocate only what it actually uses or needs.
The memory used should have a purpose.
Maybe maps should be used because of the huge gaps (a minimal sketch follows this list).
Pool entries should be properly validated on load.
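A minimal sketch of what a map-based load with proper validation could look like; the container, struct, and function names (poolTemplates, PoolTemplateData, CheckPoolId) are assumptions for illustration, not the real PoolMgr API:

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>

struct PoolTemplateData { uint32_t MaxLimit = 0; };

// Keyed by pool entry: memory is proportional to the number of real
// pool_template rows, not to the highest entry value, so gaps in the id
// range cost nothing.
std::unordered_map<uint32_t, PoolTemplateData> poolTemplates;

// Load-time validation: tables referencing a pool id are checked against
// the loaded entries instead of only against "entry < max entry".
bool CheckPoolId(uint32_t poolId)
{
    if (poolTemplates.find(poolId) == poolTemplates.end())
    {
        std::printf("Pool entry %u does not exist in pool_template, skipped.\n", poolId);
        return false;
    }
    return true;
}

int main()
{
    poolTemplates[12345] = PoolTemplateData{ 1 };
    CheckPoolId(12345); // entry exists, returns true
    CheckPoolId(99999); // missing entry is reported and skipped
    return 0;
}
```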

Steps to reproduce the problem:

  How I inspected RAM usage:

  1. Start Visual Studio (2015 Enterprise) as if you were going to compile.
  2. Clean the solution (this is important because, at least for me, VS crashes if I use rebuilds together with diagnostics).
  3. Build the solution.
  4. Open World.cpp and place a breakpoint at the end of World::SetInitialWorldSettings so the debugger stops there.
  5. Right-click the worldserver project in the Solution Explorer and set it as the startup project.
  6. Right-click the worldserver project in the Solution Explorer and select Properties. Under Configuration Properties > Debugging, set the working directory to $(OutDir).
  7. Click "Local Windows Debugger" or otherwise start the debugging session.
  8. Go to Debug > Windows and click Show Diagnostic Tools. http://i.imgur.com/qruqzJh.png
  9. At the bottom of the Diagnostic Tools window, select the Memory Usage tab and click the button to enable gathering data (heap profiling). http://i.imgur.com/qNlFgWZ.png
  10. At the top of the window, restart the debugging session (restart program / debugger).
  11. Wait until the server starts up and the breakpoint is hit.
  12. In the Memory Usage tab of the Diagnostic Tools, click to take a snapshot.
  13. Click to view the heap for the snapshot.
  14. For a better view of the RAM usage, at the top of the newly opened view, change from tree view to stack view.

Branch(es):
3.3.5, master

TC rev. hash/commit:
https://github.com/TrinityCore/TrinityCore/commit/3a27e2f7f0a5fa6fa2bf02bb2a5d975e7240ad44

TDB version:
TDB_full_world_335.62_2016_10_17

Operating system:
Windows 10


All 3 comments

Or don't insert garbage data into the DB...

Hmm, well, if the entries had little to no gaps then there would be no issue, yeah.
Current situation: http://i.imgur.com/0JGH6qp.png

The DB load errors about using nonexistent entries could still be improved, though.

Yeah, fix the DB data. The problem I am seeing here is various DB devs trying to encode much more info into one field than they are supposed to (see id/guid in many tables, and some weird numbering schemes such as entry*100).
