If anyone is interested: we have new hardware available that could be added to the build server (AMD Ryzen).
Oh, nice ... where do we find it and how do we set it up? ;)
Maybe during the X-mas holidays there is some time for that.
Thank you, this is great news! It is also the perfect opportunity to write a tutorial on how to set up a new computer with puppet-libelektra ;)
I'll send you the login details once it has a public IP. (Currently it has an internal IP which would require us to tunnel over another computer.)
Does anyone know whether the difference between Ryzen 5 and Ryzen 7 is relevant for us, or is it only a matter of seconds in build time? More cores could be relevant, though; I'll check the exact CPUs they have. The computer we do not use will serve as a low-load mail/web server (currently handled by an AMD X2 dual core with a load of 0).
@tom-wa Is there any news about the POWER9 computer?
The Ryzen hardware is reachable at a7.complang.tuwien.ac.at
Seems like a7 is down, will try to fix it.
We restarted it, and I temporarily fixed /etc/resolv.conf by removing the symlink to NetworkManager. I am not sure whether that will survive a reboot, but otherwise it should work. Our admin will take a look at it on Monday.
The first build jobs already passed on the a7 machine.
However, I've decided to do a simple POC for Docker-based builds: see https://build.libelektra.org/jenkins/job/test-docker/
Of course, this is far from complete, but maybe it gives you some impressions and ideas :smile:
Thank you, this is great!
The pipeline config looks really nice. Does the pipeline run with two different images (stretch and xenial)? (The loop looks like only one image is used: "docker.image('elektra-builddep:stretch').inside()").
Is it safe to use sudo and install Elektra or will this modify the docker image? (I added two more stages but commented them out for now.)
I also enabled triggering from GitHub (by default and by phrases, see fae2fbf9c8257d5f3c14ff14e24c1c0014538838). At first it did not work because I forgot to add it as a "GitHub project".
The speed of the hardware seems to be good.
Does the pipeline run with two different images (stretch and xenial)? (The loop looks like only one image is used: "docker.image('elektra-builddep:stretch').inside()").
Oh, yes indeed, the hard-coded image name should be replaced by $it. I've added a jessie version too now :smirk:
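For anyone who wants to reproduce such a multi-image build by hand, outside Jenkins, the loop boils down to something like the following (a sketch; the image tags are assumed to match the elektra-builddep images used in the pipeline):

```sh
# Build the current checkout inside each build-dependency image in turn.
# The image tags (stretch, xenial, jessie) are assumptions based on the pipeline above.
for tag in stretch xenial jessie; do
  docker run --rm -v "$PWD":/src -w /src "elektra-builddep:$tag" \
    sh -c 'mkdir -p build && cd build && cmake .. && make -j"$(nproc)"'
done
```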
Is it safe to use sudo and install Elektra or will this modify the docker image?
In general, yes, because Docker images are immutable: all changed files go into the "running" container (copy-on-write). So each new container gets exactly the same files, with no modifications from previous runs.
By default, the Jenkins Docker plugin uses an unprivileged user with the same UID inside the container, to allow writes to the workspace outside the container.
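Outside Jenkins, the same effect can be approximated by passing the caller's UID/GID to docker run, so anything written to the bind-mounted workspace stays owned by the host user (a sketch, not the plugin's exact invocation; image name assumed as above):

```sh
# Run an ephemeral container as the calling user instead of root;
# `id` just shows which user the container process ends up with.
docker run --rm -u "$(id -u):$(id -g)" -v "$PWD":/src -w /src \
  elektra-builddep:stretch id
```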
I have already experimented with running everything in the container as root, to get "run_all" passing, but many shell recorder tests are failing with this (e.g. "using system/..." instead of "using user/..."). So I've reverted that for now.
In general, we should design the Docker images in a way that all tests pass without requiring root privileges, while still allowing modifications in system space to be tested too.
I'm not quite sure what is really required here. Does a chmod -R 777 /etc/kdb do the trick?
For handling Docker images, I would suggest:
- We add all Docker image recipes (Dockerfile) into our libelektra repo
- create a build job to build all images automatically
- upload them to Dockerhub for sharing
This way, we can use them for builds in a well-defined way and other users/devs can use these images for testing too.
shell recorder tests are failing with this
Can you create an issue? Or are they only related to not being able to write to /etc/kdb?
In general we should design the Docker images in a way to have all tests passing without requiring root privileges, while allowing to test modifications in system space too.
While it is possible to test Elektra without ever being root, it makes the setup unrealistic. So we should take advantage of the non-harmful root access and run sudo make install into /.
I'm not quite sure what is really required here. Does a chmod -R 777 /etc/kdb do the trick?
A chown is needed and it needs to be executed as root.
See doc/TESTING.md (spec folders need to be chowned, too).
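As a rough sketch of what that amounts to inside the container (the exact paths depend on the configured install prefix and on doc/TESTING.md, so treat them as assumptions):

```sh
# Assumed default locations; adjust to the system and spec paths your build actually uses.
# Give the unprivileged build user ownership so the tests can write there without root.
sudo chown -R "$(id -u):$(id -g)" /etc/kdb
sudo chown -R "$(id -u):$(id -g)" /usr/local/share/elektra/specification
```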
For handling Docker images, I would suggest:
- We add all Docker image recipes (Dockerfile) into our libelektra repo
We already have a doc/docker/Dockerfile. You are welcome to add more Dockerfiles.
- create a build job to build all images automatically
- upload them to Dockerhub for sharing
Yes, these are excellent suggestions, but as always it is a question of available time. The most urgent point is that we understand the setup you have done, so that others can add further Docker images and so on.
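For the build-and-upload part, the job would presumably boil down to the usual Docker workflow; a minimal sketch, assuming a shared "elektra" Dockerhub namespace (hypothetical) and the existing doc/docker/Dockerfile:

```sh
# "elektra/elektra-builddep" is a hypothetical repository name;
# only doc/docker/Dockerfile is known to exist in the repo.
docker build -t elektra/elektra-builddep:stretch doc/docker/
docker login          # needs credentials for the shared Dockerhub account
docker push elektra/elektra-builddep:stretch
```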
Is there any problem with also labeling the Ryzen build agent as stable/stretch? It would speed up the build time a lot.
Until now I've skipped the installation of the Elektra build deps, but I can do that in addition.
I'll label the agent accordingly afterwards.
Labeled the agent as "stretch" and the first native (non-Docker) build job succeeded: https://build.libelektra.org/jenkins/job/elektra-gcc-configure-debian-stretch/662
Thank you, this is great! Let us see how it improves build time.
Did you install the deps directly on the hosts or within a container?
Btw., it seems like we will get a second Ryzen machine for at least one year. But it is not directly reachable via the Internet; we would need an SSH tunnel over a7. Anyone interested in setting this up?
Accounts should be available within the next days.
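Since that machine has no public IP, reaching it means jumping over a7; a minimal sketch with placeholder user and internal address:

```sh
# <user> and <internal-ip> are placeholders; a7 acts as the jump host.
ssh -J <user>@a7.complang.tuwien.ac.at <user>@<internal-ip>

# Alternatively, keep a local port forwarded so e.g. Jenkins can connect to it:
ssh -N -L 2222:<internal-ip>:22 <user>@a7.complang.tuwien.ac.at
```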
@e1528532 Having the new Ryzen included as an agent would be great; the build server is under really heavy load. I am afraid the load also causes the lost connections, among other problems.
$ w
19:39:19 up 43 days, 10:34, 0 users, load average: 10,88, 9,42, 10,50
Ideally, we should avoid having any build job run on the hardware where Jenkins itself is running. (Even the build server website is sometimes barely responding.)
The debian unstable agents seem to be the new bottleneck. Maybe we can add one more docker agent on the v2?
And we should reduce the build jobs on the Jenkins server itself; it still gets too high a load. (Sometimes we even get 502 errors.)
A lot of stuff has happened with the build infrastructure, so I think this issue can be closed.