Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I am running a rootless Podman container. Since I want to control the start and stop of this container via systemd, I found a nice function in Podman: podman generate systemd. Unfortunately, it does not work properly for rootless containers.
Let's have a look. Here is my container; as you may notice, the technical user is lohnbuchhaltung:
[lohnbuchhaltung@chasmash ~]$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
185bb3e27555 localhost/jlohn07:latest /lib/systemd/syst... 9 hours ago Up 9 hours ago 192.168.178.39:9222->22/tcp lohnbuch_00
So let's figure out what is being created when I call podman generate systemd:
[lohnbuchhaltung@chasmash ~]$ podman generate systemd --timeout 10 --restart-policy=no --name lohnbuch_00
# container-lohnbuch_00.service
# autogenerated by Podman 1.8.0
# Wed Feb 19 18:30:59 CET 2020
[Unit]
Description=Podman container-lohnbuch_00.service
Documentation=man:podman-generate-systemd(1)
[Service]
Restart=no
ExecStart=/usr/bin/podman start lohnbuch_00
ExecStop=/usr/bin/podman stop -t 10 lohnbuch_00
PIDFile=/run/user/1002/containers/overlay-containers/185bb3e27555195312460d7c2964126bc2d51f57d2cfd4b83545cfc4a816d10e/userdata/conmon.pid
KillMode=none
Type=forking
[Install]
WantedBy=multi-user.target
The generated unit file has the following problems (see the discussion below).
Steps to reproduce the issue:
1. Create a container as a non-root user.
2. Run podman generate systemd for that container.
3. Install the generated unit file and start the service with systemctl.
Describe the results you received:
see above
Describe the results you expected:
A functional systemd service file.
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
[lohnbuchhaltung@chasmash ~]$ podman version
Version: 1.8.0
RemoteAPI Version: 1
Go Version: go1.13.6
OS/Arch: linux/amd64
Output of podman info --debug:
[lohnbuchhaltung@chasmash ~]$ podman info --debug
debug:
  compiler: gc
  git commit: ""
  go version: go1.13.6
  podman version: 1.8.0
host:
  BuildahVersion: 1.13.1
  CgroupVersion: v2
  Conmon:
    package: conmon-2.0.10-2.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.10, commit: 6b526d9888abb86b9e7de7dfdeec0da98ad32ee0'
  Distribution:
    distribution: fedora
    version: "31"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
  MemFree: 13702340608
  MemTotal: 16427347968
  OCIRuntime:
    name: crun
    package: crun-0.12.1-1.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.12.1
      commit: df5f2b2369b3d9f36d175e1183b26e5cee55dd0a
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  SwapFree: 8300523520
  SwapTotal: 8300523520
  arch: amd64
  cpus: 4
  eventlogger: journald
  hostname: chasmash
  kernel: 5.4.19-200.fc31.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.0-20.1.dev.gitbbd6f25.fc31.x86_64
    Version: |-
      slirp4netns version 0.4.0-beta.3+dev
      commit: bbd6f25c70d5db2a1cd3bfb0416a8db99a75ed7e
  uptime: 14h 9m 46.58s (Approximately 0.58 days)
registries:
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
store:
  ConfigFile: /mnt/raidSpace/Homes/lohnbuchhaltung/.config/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.5-2.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 0.7.5
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  GraphRoot: /mnt/raidSpace/Homes/lohnbuchhaltung/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 24
  RunRoot: /run/user/1002/containers
  VolumePath: /mnt/raidSpace/Homes/lohnbuchhaltung/.local/share/containers/storage/volumes
Package info (e.g. output of rpm -q podman or apt list podman):
podman-1.8.0-2.fc31.x86_64
podman-plugins-1.8.0-2.fc31.x86_64
Additional environment details (AWS, VirtualBox, physical, etc.):
Just to make sure: can you include the exact path where you're saving the generated systemd unit file, and all the commands you're using to activate and verify it?
You mean this one: /mnt/raidSpace/Homes/lohnbuchhaltung
Probably not. What I mean is: you're saving the output of podman generate systemd to a file somewhere, right? And then running certain systemd commands to enable it. Can you post the exact path of the file and the exact systemd commands you are running?
Ok, I did the following:
systemctl daemon-reload
systemctl status container-lohnbuch_00.service
and I receive:
[root@chasmash system]# systemctl status container-lohnbuch_00.service
● container-lohnbuch_00.service - Podman container-lohnbuch_00.service
Loaded: loaded (/etc/systemd/system/container-lohnbuch_00.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2020-02-19 04:27:02 CET; 17h ago
Docs: man:podman-generate-systemd(1)
CPU: 148ms
Feb 19 04:27:02 chasmash systemd[1]: Starting Podman container-lohnbuch_00.service...
Feb 19 04:27:02 chasmash podman[854]: Error: could not get runtime: error creating tmpdir /run/user/1002/libpod/tmp: mkdir /run/user/1002: permission denied
Feb 19 04:27:02 chasmash systemd[1]: container-lohnbuch_00.service: Control process exited, code=exited, status=125/n/a
Feb 19 04:27:02 chasmash systemd[1]: container-lohnbuch_00.service: Failed with result 'exit-code'.
Feb 19 04:27:02 chasmash systemd[1]: Failed to start Podman container-lohnbuch_00.service.
Thank you. This confirms what I was suspecting.
Then, the file is copied into /etc/systemd/system
There's your problem. The unit file was, unless I misunderstand completely, generated as user X. The unit file, therefore, must also be installed such that it is owned (and run) by user X:
$ mkdir -p ~/.config/systemd/user
$ podman generate systemd --name lohnbuch_00 > ~/.config/systemd/user/some-nice-name.service
$ systemctl --user daemon-reload
$ systemctl --user start some-nice-name.service
Can you try that and see if it resolves your problem?
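As background for the earlier mkdir /run/user/1002: permission denied error: when the system manager (PID 1) starts a unit, the user's runtime directory under /run/user/<uid> is not set up, and rootless Podman needs it. A few sanity checks that can be run in the user's own login session (a sketch; the exact output varies per system):

```shell
# Rootless Podman keeps its runtime state under the user's runtime directory;
# it must exist and be owned by that user.
echo "$XDG_RUNTIME_DIR"            # typically /run/user/<uid>
ls -ld "/run/user/$(id -u)"

# `systemctl --user` only works if the per-user systemd instance is running.
systemctl --user is-system-running
```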
Ok, as the user, I ran:
podman generate systemd --name lohnbuch_00 > ~/.config/systemd/user/container-lohnbuch_00.service
systemctl --user daemon-reload
systemctl --user status container-lohnbuch_00.service
and I received:
[lohnbuchhaltung@chasmash user]$ systemctl --user status container-lohnbuch_00.service
● container-lohnbuch_00.service - Podman container-lohnbuch_00.service
Loaded: loaded (/mnt/raidSpace/Homes/lohnbuchhaltung/.config/systemd/user/container-lohnbuch_00.service; disabled; vendor preset: enabled)
Active: failed (Result: protocol) since Wed 2020-02-19 23:07:49 CET; 6min ago
Docs: man:podman-generate-systemd(1)
CPU: 131ms
Feb 19 23:07:49 chasmash systemd[3615]: Starting Podman container-lohnbuch_00.service...
Feb 19 23:07:49 chasmash podman[13401]: lohnbuch_00
Feb 19 23:07:49 chasmash systemd[3615]: container-lohnbuch_00.service: New main PID 4223 does not belong to service, and PID file is not owned by root. Refusing.
Feb 19 23:07:49 chasmash systemd[3615]: container-lohnbuch_00.service: New main PID 4223 does not belong to service, and PID file is not owned by root. Refusing.
Feb 19 23:07:49 chasmash systemd[3615]: container-lohnbuch_00.service: Failed with result 'protocol'.
Feb 19 23:07:49 chasmash systemd[3615]: Failed to start Podman container-lohnbuch_00.service.
Oh yes, the PID file is not owned by root, surprise ;-)
I got a similar error when I copied the user-generated service file to /etc/systemd/system and added:
User=lohnbuchhaltung
Group=lohnbuchhaltung
to the [Service] section.
No one there?
I'm sorry; this is beyond me. I spent some time last week trying to recreate your setup, and everything just worked fine for me. I really don't know what else to suggest. Perhaps someone else on the team might have a look next week (we're trying to get a release out the door this week). I am sorry I wasn't able to help.
That's ok. I will wait until you release your next version. Please be so kind as to make sure that Fedora users get their rpm updates. I will inform you immediately in case of any different behaviour.
My testbed (F31 server edition) is a stable but slow environment (J4205 CPU with a 4-disk RAID and ext4). The idea is to check its stability and to optimize it before I migrate it to a Xeon server.
@vrothberg and I are on PTO this week. Should give more feedback next week.
Dear sirs,
I think I understand what went wrong. I created a new container (with podman create/run) under another user; it serves a web page. This is what I get:
[techlxoffice@chasmash systemdConfig]$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
016f2e02a570 localhost/kivi_prod_05:latest /lib/systemd/syst... 55 minutes ago Up 55 minutes ago 192.168.178.39:9190->80/tcp kiki_p05
0925fa89f576 localhost/kivi_prod_04:latest /lib/systemd/syst... 7 hours ago Exited (0) 5 hours ago 192.168.178.39:9190->80/tcp kiki_p04
583200deccf4 localhost/kivi_prod:latest /lib/systemd/syst... 3 weeks ago Created 192.168.178.39:9190->80/tcp funny_zhukovsky
The most recent container is kiki_p05. The next step was to figure out whether a user systemd directory exists under ~/.config/systemd/user, but there wasn't any, so I created it:
$ mkdir -p ~/.config/systemd/user
And now I create my individual service with:
$ podman generate systemd --name kiki_p05 > podmankivi.service
$ cp -p podmankivi.service ~techlxoffice/.config/systemd/user/container-kivi.service
$ systemctl --user daemon-reload
$ systemctl --user enable container-kivi.service
Then I stopped the container with podman and tried to start it using systemctl:
$ podman stop kiki_p05
$ systemctl --user enable container-kivi.service
$ systemctl --user start container-kivi.service
$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
016f2e02a570 localhost/kivi_prod_05:latest /lib/systemd/syst... About an hour ago Up 4 minutes ago 192.168.178.39:9190->80/tcp kiki_p05
0925fa89f576 localhost/kivi_prod_04:latest /lib/systemd/syst... 7 hours ago Exited (0) 6 hours ago 192.168.178.39:9190->80/tcp kiki_p04
583200deccf4 localhost/kivi_prod:latest /lib/systemd/syst... 3 weeks ago Created 192.168.178.39:9190->80/tcp funny_zhukovsky
Perfect! This document was helpful.
Great, happy that it's working for you!
@groovyman is your container started by systemd during boot? Mine is getting loaded but not started; I have to trigger systemctl --user start my-container.service manually.
In the end, I found the solution here: https://github.com/containers/libpod/issues/5494
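For anyone landing here with the same boot-time question: the usual missing piece for rootless containers is session lingering, which lets a user's systemd instance (and thus their --user units) start at boot without that user logging in. A sketch, using the user name from the example above:

```shell
# Allow the user's systemd instance to start at boot, without a login session.
sudo loginctl enable-linger techlxoffice

# Verify that lingering is enabled for the user.
loginctl show-user techlxoffice --property=Linger    # should print Linger=yes

# Enable the unit in the user session so it comes up with the user manager.
systemctl --user enable container-kivi.service
```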