Hey guys and gals.
I just wanted to make you aware of some work I've done to package up the steps required to build and run TrinityCore using Docker:
https://github.com/JustinChristensen/trinitycore-dockerfile
It's complete enough to be called a 1.0.0 release, but there are still a few ways in which it could be improved, which I've listed below.
I see that someone added images to the Docker Hub, but I wasn't sure if these were officially supported or not.
By officially supporting Docker you'd gain a few advantages, which I'll sketch out below.
The images I've got in that repository are pretty far from ideal as of right now, and I wanted to float a few ideas your way on how they could be improved. Chief among them: publish the built binaries as OS packages, so that the images can simply apt-get install or apk add them as part of the Docker image build. As an example, in the ideal world I've described above (with the 3.3.5 branch) you'd have 7 packages in the APK (or Debian) repository:
worldserver
authserver
mapextractor
vmap4extractor
vmap4assembler
mmaps_generator
admin-console
And 3 images on Docker Hub:
trinitycore-worldserver
trinitycore-authserver
trinitycore-admin # standalone tool for driving client data extraction and managing the database
End users that want to play on a LAN would then simply pull down the docker-compose.yml file and run:
CLIENT_DIR=/path/to/WoW/client docker-compose up
And for more complicated setups (like groups that want to host their own private WoW servers for the public) you'd be able to set up your hosting to scale the number of containers as much as you need.
Feel free to close this issue if this is something that doesn't appeal to you. I just figured I'd float the ideas.
I should also mention that I can volunteer my time to configure your CI process to create the packages and docker images and publish them, and I can write up some documentation for this project's maintainers on how it's all set up, if this is something you guys are interested in.
Some questions off the top of my head (I still need to read the whole issue, it's quite long! :) ):
does this support sql auto-updating?
In general, yes, but the way that I've got it currently implemented in that repository turns off the server's auto-updater, and downloads and initializes the database as part of the container's entrypoint. That's just a design decision I made for the first incarnation of these Dockerfiles, but I'm open to changing it, or at the very least making it configurable via an environment variable.
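For illustration, the entrypoint could branch on an environment variable along these lines (TC_DB_AUTO_UPDATE and the helper script are hypothetical names, not things the repository currently ships):

# entrypoint sketch
if [ "${TC_DB_AUTO_UPDATE:-0}" = "1" ]; then
    # leave the server's built-in auto-updater enabled and let it manage the schema
    exec ./worldserver
else
    # current behavior: fetch and import a TDB release first, updater off
    ./download-and-import-tdb.sh    # hypothetical helper
    exec ./worldserver
fi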
Some nice-to-have:
it would be nice to still have the git hash built into the binaries for ".server info" to show the commit hash, as that's what we use to identify a version
Currently my images pull the source from GitHub's tarball API and run cmake without git. This means I don't need to install git in the image, which saves a little space and speeds up the download process.
To support the case where someone wants to build from a downloaded archive, TrinityCore could add another cmake flag to specify the version without using git, for example:
cmake ../ -DWITHOUT_GIT=1 -DVERSION=<commit sha>
But this is largely unnecessary in the ideal world where the build is done as part of CI and is not part of the docker build:
There are a few ways to track versioning in this scenario:
RUN apk add worldserver to pull the latest package from the package repository, which at the time of the image build in CI would hypothetically be version 1.0.3. You'd then be able to see from your CI build of worldserver 1.0.3 which commit SHA it was built from.
COPY the built executable into the image (instead of apk add).
In either scenario the existing git-based revision script you're currently using would still run, and .server info would still work as usual. With docker images and APK packages in the mix, though, the user wouldn't even need to run .server info to find the commit SHA: telling you which image or package version is exhibiting a problem should be enough for you to find the specific version of the code (commit SHA) they're on.
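As an aside, a common way to carry the commit on the image itself is an OCI revision label applied at build time. A sketch (the image name and 1.0.3 tag are just examples):

# in CI: stamp the image with the commit it was built from
docker build --label org.opencontainers.image.revision="$(git rev-parse HEAD)" -t trinitycore-worldserver:1.0.3 .
# later: recover that SHA from any pulled copy of the image
docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.revision" }}' trinitycore-worldserver:1.0.3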
it would be nice to build in RelWithDebInfo so users can still report crashes. And maybe provide -Debug images too
I'm not familiar with what that is, but if it's a CMake flag, I've currently got support for flag overrides with:
https://github.com/JustinChristensen/trinitycore-dockerfile/blob/master/base/Dockerfile#L12
Run with docker build --build-arg CMAKE_FLAGS='foo bar baz' -t trinitycore-base base
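If RelWithDebInfo is the standard CMake build type (selected via CMAKE_BUILD_TYPE), then with the existing override it would presumably be passed like so (untested):

docker build --build-arg CMAKE_FLAGS='-DCMAKE_BUILD_TYPE=RelWithDebInfo' -t trinitycore-base base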
But, again, if we're building these executables and images as part of CI then this becomes a moot point. You'll have full control over what gets built and how in your CI process.
if you are curious about how to get the latest 335 TDB, take a look at https://github.com/jackpoz/TrinityCoreCron/blob/3.3.5-TDB/release-tdb.sh#L49 , that's a fully automated script that gets the latest TDB and creates a new one.
Sweet. I'll look into how that could be incorporated into what I've currently got. As of right now my entrypoint scripts do something similar (but the user must specify which particular database release they want to download and use).
To discuss:
not sure what plan we have about updating, whether we want to deliver a new docker image only with a new TDB release (so monthly) or with every commit (or maybe have 2 tags, latest and TDB, or something like that)
You'd have full control (and the ability to give the user control) over managing the database update process.
I'm not sure what level of familiarity this project's maintainers have with Docker, but here's a little rundown, and a diagram of how this might all look in the simple case of a LAN user using docker-compose (note that I renamed trinitycore-tools from my first post to trinitycore-admin):
+-----------------------------+        +----------------------+
| Containers                  |        | Volumes              |
|                             |        |                      |
|   trinitycore-worldserver --+---+    |                      |
|                             |   |    |                      |
|   trinitycore-admin --------+---+--->|   trinitycore-data   |
|                             |        |                      |
|   trinitycore-authserver    |        |                      |
|                             |        |                      |
|   mysql --------------------+------->|   mysql-data         |
|                             |        |                      |
+-----------------------------+        +----------------------+
Generally speaking, containers in Docker are considered to be stateless, immutable, and ephemeral. The state in a Docker system is contained in volumes that are mounted to a specific path in the containers on startup.
So in the diagram above, the named volume mysql-data is mounted at /var/lib/mysql inside the mysql container. This means the mysql container (or mysql "service") can come and go as it pleases, and the volume (and the data within it) remains unchanged. The named volume trinitycore-data (which contains the extracted WoW client data) would be mounted at /usr/local/data inside the worldserver container.
With the trinitycore-admin container you'd be able to manage initializing the database by connecting to the mysql container (the networking is all handled by Docker Compose) using any sort of mysql client application. You'd have just as much control over this process as you do now, while keeping it hidden from the user in the simple cases like a LAN party.
Again, the end goal of all of this is that the simplest possible setup should be as easy as:
CLIENT_DIR=/path/to/WoW/client docker-compose up
See the Docker Overview for more info.
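To make that concrete, a minimal docker-compose.yml along these lines might look as follows. Treat this as an illustrative sketch rather than the actual file in my repository; the image names are the hypothetical ones above, and /client is an arbitrary mount point for the WoW client:

services:
  mysql:
    image: mysql:5.7
    volumes:
      - mysql-data:/var/lib/mysql          # database state lives in the named volume
  worldserver:
    image: trinitycore-worldserver
    depends_on:
      - mysql
    volumes:
      - trinitycore-data:/usr/local/data   # extracted client data, read by the server
  authserver:
    image: trinitycore-authserver
    depends_on:
      - mysql
  admin:
    image: trinitycore-admin
    volumes:
      - trinitycore-data:/usr/local/data   # extraction tools write here
      - ${CLIENT_DIR}:/client:ro           # the user's WoW client, mounted read-only
volumes:
  trinitycore-data:
  mysql-data: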
So in the ideal situation I'm proposing you'd have two CI release pipelines for each component:
worldserver (apk package)
1. Build
2. Test
3. Package
4. Publish (to apk repository)
trinitycore-worldserver (docker image)
1. Build (pulls worldserver apk package from apk repo)
2. Publish (to Docker Hub)
And so on.
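In shell terms, the second pipeline might boil down to something like this sketch (the image name, tag, and apk-backed Dockerfile are all hypothetical):

# CI build step: the Dockerfile under worldserver/ would RUN apk add worldserver
docker build -t trinitycore/worldserver:1.0.3 worldserver/
# CI publish step (after docker login with the project's credentials)
docker push trinitycore/worldserver:1.0.3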
To focus the discussion on concrete steps, here's what I'm proposing (again, using 3.3.5 as an example):
It would be incredibly convenient if this were available. Building the source right now requires a dedicated layman and the documentation is not always perfect.
compiling trinitycore on debian is copy and paste, no layman required, and the documentation is not "not always perfect".
@Aokromes I was going to comment and say that, last I checked, TrinityCore has documentation for multiple platforms, but I just quickly verified that claim and I see that you're one of the maintainers of that documentation, and so I understand that calling it imperfect probably struck a nerve.
Originally your comment made me think that the maintainers of this project were spending time arguing with people for "+1ing" project improvements, but now that I see you have a special connection to the documentation it makes more sense.
I will say that my desire to push this over the finish line has waned over the last 10 months, and I'm keeping this open just to gauge the community's demand for it as a whole.
Docker has a lot of benefits I think we should use Docker.
well, give it a try and you will see it's a fact, not arguing.
great, maybe that's the first step, to make it officially kubernetes-ready? ^^
https://github.com/TrinityCore/TrinityCore/commit/9af6bf15aa2fe836c3ebba306eaaa8971f00fac4 added docker images to the Circle CI pch job as an artifact, it's a start
Hey @jackpoz
Glad to see you guys are picking this up. I noticed that your setup doesn't manage the MySQL server, and that the run commands require a fair amount of setup, in the form of flags, to get the different services communicating with each other.
By employing Docker Compose, you can have Docker automatically configure your volumes and network, which lets Docker manage the MySQL server while still keeping its volume on the host machine so that the data persists after the containers are shut down. In essence, the basic case of setting up the servers and the database becomes completely managed for the end user, who then only needs to execute the following commands to start and stop the whole setup:
# start MySQL and the TrinityCore services
docker-compose up
# stop the above
docker-compose down
An example of how that might look can be found here:
https://github.com/JustinChristensen/trinitycore-dockerfile/blob/master/docker-compose.yml
If you publish those images to the registry (the Hub), Docker Compose will even handle pulling them down onto the user's machine automatically when they issue the up command above.
Let me know if you have any questions.
Also, high on the list of wants in the above thread is getting those servers to log a warning and retry the MySQL connection, instead of shutting down, if they can't connect.
That's fairly common practice in robust, high-demand production Docker setups, and for our purposes it means that the images containing the servers could start before the MySQL server has completely finished starting up.
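As a sketch, an image entrypoint can approximate that today without any server changes (the mysql hostname and the presence of the mysqladmin client in the image are assumptions on my part):

#!/bin/sh
# block until MySQL accepts connections, then hand off to the server process
until mysqladmin ping -h mysql --silent; do
    echo "MySQL is not ready yet, retrying in 2 seconds..."
    sleep 2
done
exec ./worldserver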
Hi @JustinChristensen , I think what you and I were trying to achieve with docker are 2 slightly different scenarios, but it's fine for me to support both.
In my case I wanted to be able to run the worldserver executable on my current server when I want to test a Pull Request, without having to compile it. I already have a MySQL server instance running on the host, and I have already extracted dbcs/maps/vmaps/etc. In a way I built the whole system for myself and decided to publish it in case someone wanted to use it too, and as a starting point for supporting docker in TC (and of course to learn how docker works).
The only thing I'm interested in at the moment is https://github.com/TrinityCore/TrinityCore/commit/9af6bf15aa2fe836c3ebba306eaaa8971f00fac4#diff-78a8a19706dbd2a4425dd72bdab0502ed7a2cef16365ab7030a5a0588927bf47R84 ; you could edit that build step to create more images using docker compose.
About publishing the images to Docker Hub, I was thinking of integrating that into the Circle CI build if Docker Hub credentials are specified. TC's Circle CI should push only images built from TC branches, while users who want this would have to set their own Docker Hub credentials. It might come in the future too, depending on how much free time I have.
I also experimented a bit with creating a base runtime image without all the development tools, but the amount of saved space didn't look worth the effort, especially since the base Circle CI image is already 400 MB and the result with our layers is 600 MB.
I was also thinking about downloading TDB automatically but in the end I didn't want to add all the features in the 1st merged PR.
I hope what I did so far can be reused for what you have in mind. Do you have all you need to create a Draft PR, or do you need anything from us?
@jackpoz I see. It does indeed seem we had different goals in mind. My bandwidth is pretty low at the moment, and so my intention in opening this and responding was just to get you guys thinking about the possibilities docker has for a platform such as this.
I won't be submitting a PR, but if you guys have any questions for me feel free to ask.
Given where we stand now, does it make sense to keep this open? If so, do you want to document your goals here so that you know when this issue can be closed? Or would you rather we closed this, so you can document your plan in another issue?
I think it's still good to keep this issue open, as it's labeled "Priority-FutureFeatureRequest".
All the tests I have done have been on a VM in Azure, as my bandwidth is not fast enough to download 600 MB of images. If you ever feel like experimenting with docker-compose and TC again, Circle CI is a good place to create the images (it's in the cloud and takes 5 minutes to build worldserver without scripts). I could also fire up a small VM for you to test things, just contact me :)
Hey all, I stumbled across this post after I'd decided to update our 5+ year old TrinityCore-Docker repo (were we the first???). It looks like several people are working on the same things. :) I hope this minor off-topic post is alright; I'm only looking to share that there are various different needs/wants/approaches to docker+trinitycore.
Just wanted to say that after revisiting TrinityCore after 5 years, I was pleasantly surprised by the auto DB update functionality! It makes maintenance so much easier! And the docs are extremely straightforward, so I'm very thankful for that.
If anyone is curious, here is the revisited repo. Usage:
./action tc-fetch # get/update source code on local filesystem
./action tc-build # build TC to local filesystem
./action tc-db-fetch # get the matching DB files to local filesystem
CLIENT_DIR=/absolutepath/to/installed/WoW3.3.5a ./action tc-extract # extract maps to local filesystem
docker-compose up # initialize db, boot servers, go!!!!!
I too found docker-compose invaluable for drastically simplifying the overall maintenance and custom code required. It even allows the entire system to be booted and verified during CI (going from an empty to a populated database and loaded maps)! A critical simplicity is that TrinityCore thankfully only opens its ports when it's ready for connections, which means docker-compose can just wait for each service without any coordination scripts. Combined with the mariadb container being able to autoload initial SQL, this means that with one command and no scripting you can go from nearly nothing to a completely working TC setup. And with docker-compose, you can even include an SQL editor, if you want. The README goes into more detail, with various common use cases such as how to configure the servers.
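(For reference, the autoload mechanism mentioned above is the official mariadb/mysql image convention of executing anything mounted into /docker-entrypoint-initdb.d on first boot; a sketch, with the host path and password as placeholders:)

services:
  database:
    image: mariadb:10
    environment:
      - MARIADB_ROOT_PASSWORD=changeme
    volumes:
      # *.sql files here run once, when the data directory is still empty
      - ./sql/initial:/docker-entrypoint-initdb.d:ro
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data: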
Hopefully this illustrates that different approaches are possible, depending on your needs and use cases (ours were wanting to boot a custom server for a weekend game session)! And, more importantly, hopefully someone finds this useful.