Podman: Error creating /run/user/0/containers on podman login

Created on 1 Mar 2018 · 54 Comments · Source: containers/podman

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

Description

On a fresh Fedora 27 Cloud Base image, I ran dnf update && dnf install podman buildah, then ran this:

[fedora@buildah-fresh ~]$ sudo su -
[root@buildah-fresh ~]# podman login quay.io
Username: rturk
Password:
error creating directory "/run/user/0/containers": mkdir /run/user/0/containers: no such file or directory
[root@buildah-fresh ~]#

Describe the results you expected:

I expected to successfully log in, but was not able to.

Additional information you deem important (e.g. issue happens only occasionally):

If I run mkdir -p /run/user/0/containers, I am able to successfully log in subsequently.

[root@buildah-fresh ~]# mkdir -p /run/user/0/containers
[root@buildah-fresh ~]# podman login quay.io
Username: rturk
Password:
Login Succeeded!

I asked a colleague, ipbabble, who does not have a /run/user/0 directory and is somehow still able to run podman login successfully.

Output of podman version:

Version:       0.2.2
Go Version:    go1.9.4
OS/Arch:       linux/amd64

Additional environment details (AWS, VirtualBox, physical, etc.):

Fedora Cloud Base 27, running in Red Hat OpenStack Platform:

[root@buildah-fresh ~]# cat /etc/redhat-release
Fedora release 27 (Twenty Seven)

All 54 comments

It looks like we use a path for temporary login files that exists for normal users, but not root. We can likely work around this by specifying an alternate path if the primary doesn't exist.

I can successfully podman login as the fedora user, yay! However, I can't really do anything else without write permissions to /var/run/containers.

[fedora@buildah-fresh ~]$ podman login quay.io
Username: rturk
Password:
Login Succeeded!
[fedora@buildah-fresh ~]$ podman pull buildah
could not get runtime: mkdir /var/run/containers: permission denied

Running with sudo should let you do anything - we find the login directory via an environment variable, which sudo should preserve, allowing you to use login. Running as non-root won't work for container operations, given overlayfs requires a lot of capabilities (I also suspect image operations will fail for a different reason), so even chowning the directory won't fix matters.

Ok - I burned my instance to the ground and started over to make sure there was no lingering login anywhere and tried again. After dnf -y update && dnf -y install buildah podman:

[fedora@buildah-fresh ~]$ podman login quay.io
Username: rturk
Password:
Login Succeeded!
[fedora@buildah-fresh ~]$ sudo podman pull quay.io/rturk/buildah
Trying to pull quay.io/rturk/buildah:latest...
Failed
error pulling image "quay.io/rturk/buildah": error pulling image from "quay.io/rturk/buildah": Error determining manifest MIME type for docker://quay.io/rturk/buildah:latest: unauthorized: access to the requested resource is not authorized

The quay.io login credentials do not seem to be passed through to my sudo shell. Am I doing it wrong?

Hm. That looks like it should work (and if it shouldn't, we should force podman login to be run as sudo as well). I'll look more into this tomorrow.

FYI the only containers dir I have outside my home dir and /var/lib is in /run and /etc
/run/containers
/etc/containers

NOTHING in /run/user

# find /run/user -type d -name containers 
find: ‘/run/user/1000/gvfs’: Permission denied
#

And I run podman login as root and it works. No problem

I'm on a Fedora 27 laptop, not a Fedora Cloud Base VM like Ross.

I'm just wondering if there's a certificate in play here that might be gumming things up or if the login isn't dropping the credentials in a place that the pull is expecting them to be. Can you try the following? The first one will show if it's a certificate issue, the second two will show if it's a problem with the login credentials.

sudo podman pull --tls-verify=false quay.io/rturk/buildah

and/or

sudo podman pull --creds rturk quay.io/rturk/buildah # should prompt for a password

and/or

sudo podman pull --creds rturk:yourpassword quay.io/rturk/buildah

Thanks!

Ok, here is the output:

[fedora@buildah-fresh ~]$ sudo podman pull --tls-verify=false quay.io/rturk/buildah
Trying to pull quay.io/rturk/buildah:latest...
Failed
error pulling image "quay.io/rturk/buildah": error pulling image from "quay.io/rturk/buildah": Error determining manifest MIME type for docker://quay.io/rturk/buildah:latest: unauthorized: access to the requested resource is not authorized
[fedora@buildah-fresh ~]$ sudo podman pull --creds rturk quay.io/rturk/buildah
Password:
Trying to pull quay.io/rturk/buildah:latest...
Getting image source signatures
Copying blob sha256:bf0f1f12b6ba38f3b9f55fc2f9e865ff92bcc8b5523fce9aeb5886dad05372a4
 86.31 MB / 86.31 MB [======================================================] 3s
Writing manifest to image destination
Storing signatures
6e2b497dd3a916710adc4617c34f68ef8177f2fd75d7e7bdbb9442b1d8761ec1

FYI, if I follow those commands up with another podman pull without --creds, it does not work:

[fedora@buildah-fresh ~]$ sudo podman pull quay.io/rturk/nginx
Trying to pull quay.io/rturk/nginx:latest...
Failed
error pulling image "quay.io/rturk/nginx": error pulling image from "quay.io/rturk/nginx": Error determining manifest MIME type for docker://quay.io/rturk/nginx:latest: unauthorized: access to the requested resource is not authorized

Thanks @rossturk! That definitely smells like login isn't dropping the authentication where pull is expecting it. Something for us to dig at. If you've the time (I should have asked this in my previous comment), can you try:

sudo podman login quay.io

sudo podman pull quay.io/rturk/buildah

Thanks again!

That puts me back to square one:

[fedora@buildah-fresh ~]$ sudo podman login quay.io
Username: rturk
Password:
error creating directory "/run/user/0/containers": mkdir /run/user/0/containers: no such file or directory

I suspect sudo mkdir -p /run/user/0/containers and rerunning would fix it.

FYI, if you use --creds on the podman command line, it does not retain those credentials once the command completes. The credentials are only tucked away and used for follow-on commands when entered via 'podman login'.

Thanks for trying the 'sudo podman login', more good data for digging.

BTW here is my debug:

podman --log-level debug login quay.io

Username: ipbabble
Password:
DEBU[0007] Looking for TLS certificates and private keys in /etc/docker/certs.d/quay.io
DEBU[0007] GET https://quay.io/v2/
DEBU[0008] Ping https://quay.io/v2/ err
DEBU[0008] Ping https://quay.io/v2/ status 401
DEBU[0008] Increasing token expiration to: 60 seconds
DEBU[0008] GET https://quay.io/v2/
Login Succeeded!

I think it's super interesting that @ipbabble is able to podman login as root with no /run/user/0 directory at all! He is Fedora on bare metal, I am Fedora Cloud Base. Otherwise, we can't figure out what's different.

I have /run/user/0 on Fedora workstation bare metal, no /run/user/0 on Fedora server on a VM

I suspect the issue is that we did most of our dev/testing under Fedora workstation & similar. If it's as simple as creating /run/user/$UID if it doesn't exist, we can do that easily. I'd like to read more on XDG_RUNTIME_DIR before putting that in, though.

I don't have bare metal to look at. However my F27 workstation vm does not have a /run/user/0 either. I'm able to pull my image from a private repository using the creds with the pull, or by doing a 'podman login' with or without sudo. I think @mheon is going down the right track.

I did a bit of googling and read some docs, and see that the /run/user/<UID> directory is created by pam_systemd, which sets the default XDG_RUNTIME_DIR.

The freedesktop.org docs state: pam_systemd — Register user sessions in the systemd login manager. So below is what I see happening.

If you log in as a user, the corresponding /run/user/<UID> is created by systemd-logind.service (pam_systemd).

$ ssh [email protected]

# ll /run/user/
total 0
drwx------. 3 root root 80 Mar  1 04:50 0

$ ssh [email protected]

$ ll /run/user/
total 0
drwx------. 3 root   root   80 Mar  1 04:50 0
drwx------. 3 sunilc sunilc 80 Mar  1 04:51 1000

On the other hand, if you log in as user X and then su to user Y, the directory is not created for user Y, because user Y has not logged in through systemd-logind.

$ ssh [email protected]

[sunilc@svma ~]$ su -
Password: 

# ll /run/user/
total 0
drwx------. 3 sunilc sunilc 80 Mar  1 04:52 1000

Where does podman login create the files when XDG_RUNTIME_DIR is not set? Where does podman pull look for them?

From the man page, I see that by default podman login creates the auth file at ${XDG_RUNTIME_DIR}/containers/auth.json.

If there is no /run/user/<UID> directory present, then I guess podman login will fail with the message mentioned in the first comment.

We can override the default location with the --authfile option.

I wonder how bad it would be for us to create XDG_RUNTIME_DIR in place of systemd-logind if it does not exist. I also wonder about distros that may not have systemd at all - we'll need to do something about them too.

I think the answer is probably to ignore XDG_RUNTIME_DIR entirely and create files in the libpod tmp dir (named by UID, so we don't have conflicts). This is a location we can guarantee exists everywhere.

I think the reason the /run/user/ location was originally chosen is it gets cleaned out on reboot. It would be nice to have a location that did the same. Since we don't have a daemon banging around, we can't clear it through some exit routine there.

We should just check for the environment XDG_RUNTIME_DIR, if it does not exist, we default to /run/user/UID/.
That way it works in both cases.

@rhatdan that is what it is doing: https://github.com/containers/image/blob/master/pkg/docker/config/config.go#L135.
This was the plan from the beginning, from what I remember. If you run as root, the UID would be 0, yes? So even with this the path would end up being /run/user/0, which does not always exist.

There may also be cases where XDG_RUNTIME_DIR is set, but the directory was not created?

@mheon So check for if XDG_RUNTIME_DIR is set and if it is, check if the directory exists, if not create that directory? Is it okay creating /run/user/0 if it doesn't exist? (not sure if that is a thing we are allowed to do)

Whether we can create the directory is a good question. I'll look into that tomorrow.

@mheon okay, I will do some research also. Assigning this to myself.

If XDG_RUNTIME_DIR is not defined, we should default to /run/user/UID and create it if it does not exist.
If XDG_RUNTIME_DIR is defined, we should use it and NOT create the directory if it does not exist. Setting XDG_RUNTIME_DIR indicates that the caller knows what they are doing and is in charge of creating the directory. I don't want a user's typo to accidentally trigger the creation of a directory that the user might not even know about. I would rather fail.

@rhatdan I think this bug is caused by XDG_RUNTIME_DIR being set, but the dir not existing... Would it be better to revert to /run/user/UID at that point, instead of failing?

@mheon how about we print a warning if XDG_RUNTIME_DIR is set but the directory doesn't exist? This way the user will know the cause of the failure. We can also say "Either create the directory or unset XDG_RUNTIME_DIR".

I'd prefer to just fall back to /run/user/UID, but that would definitely be better than what we do now

So while writing a patch for this, I discovered that the code to create the directory if it doesn't exist is already there https://github.com/containers/image/blob/master/pkg/docker/config/config.go#L172.
So this is failing to create the /run/user/0/containers directory. Could it be permissions?

Oh nvm, I am blind. It is doing Mkdir and not MkdirAll.

Re-posting from https://github.com/containers/image/pull/424 to keep the conversation in one place:

I’m not convinced about auto-creating /run/user/$UID; /run/user is, at least on my system, root:root 755, so any non-root user is not going to benefit.

It seems to me that the real fix for that needs to be in some quite different place.

  • Looking at pam_systemd(8), it removes /run/user when “the last concurrent session of a user ends”; so, if we created the directory without systemd knowing about the session, it can delete the directory at an unpredictable time when some other session terminates.
  • At least on F26, I do have pam_systemd configured in the system-auth config, which is used by su. The failure is actually
pam_systemd(sudo:session): Cannot create session: Already running in a session

So, overall, either pam_systemd needs to be taught to handle su, or we need to move away from /run/user, or we would have to give up on su/sudo (which seems untenable).

Well, we can't change every distribution's handling of su, so that is a non-starter. We need a default location for the creds, which is currently $XDG_RUNTIME_DIR/containers, or /var/run/$UID/containers if $XDG_RUNTIME_DIR does not exist, correct? Doing a MkdirAll on this might be the simplest solution.

MkdirAll on /var/run/$UID just won’t work for $UID not 0.

Even for root only, breaking the documented rules of that directory isn’t all that great either:

[mitr@f26]$ su -
[root@f26 ~]# ls /run/user
1000  42
[root@f26 ~]# mkdir /run/user/0
[root@f26 ~]# ls /run/user
0  1000  42
[root@f26 ~]# ssh localhost
[root@ssh-f26 ~]# ls /run/user
0  1000  42
[root@ssh-f26 ~]# logout
Connection to localhost closed.
[root@f26 ~]# ls /run/user
1000  42

and /run/user/0 is gone.

Meanwhile, systemd has liked to pretend that su/sudo and nested sessions don’t exist, so I am not quite holding my breath for pam_systemd to improve on its own, either.


BTW using $XDG_RUNTIME_DIR at all can be tricky as well:

[mitr@f26 ~]$ echo $XDG_RUNTIME_DIR
/run/user/1000
[mitr@f26 ~]$ su # Note: not (su -)
[root@f26 mitr]# echo $XDG_RUNTIME_DIR
/run/user/1000
[root@f26 mitr]# mkdir -m 0700 $XDG_RUNTIME_DIR/containers
[root@f26 mitr]# exit
[mitr@f26 ~]$ ls -ld $XDG_RUNTIME_DIR/containers
drwx------. 2 root root 40  7. bře 18.44 /run/user/1000/containers


And as the original user, I can’t write into my own private directory any more. IIRC GNOME or D-Bus has been running into this in practice. Although in this case it’s difficult to say whether it is a problem or a feature; some might like that a login from an unprivileged session can be used inside su.


Overall, going back to storing the data in the home directory seems cleanest to me. That would lose the “log out of Docker on last logout or system shutdown” behavior, but I never saw that as all that valuable; maybe I’m missing something.

I have no objections to using a dir in the user's home.

I would like to minimize the amount of time we have cleartext passwords stored in flat files on disk.
MkdirAll will work if /run/user/UID exists: it will create the containers subdirectory. If you are root and /run/user/0 does not exist, it will also create the content and the parent directories.

I think @mtrmac pointed out that creating /run/user/ entries ourselves is not a reliable way to guarantee credentials remain cached, because we're working independently of systemd, which may create or remove the directory without regard for us.

We could use a different directory for root running the containers. Until we have user namespaces, podman will not run as non-root, and I'm not sure it will run as non-root with user namespaces then, either.

Should we use MkdirAll, but restrict ourselves to only doing it if we are UID 0?

I don't actually think you need that constraint. Since it will either work or fail, if it fails we get an error.

If XDG_RUNTIME_DIR points to /run/user/3267 and we attempt to create /run/user/3267/containers, this will succeed if /run/user/3267 exists and I am logged in as UID 3267; it will fail if the directory does not exist.

Carving out a root-only exception (e.g. use /run/containers-root or whatever the rules for /run are) would work fine for podman, but the same login code is also used for skopeo and maybe other tools which do not have any privileges.

Tradeoffs
I am instinctively averse to “works 99% of the time, fails unpredictably 1% of the time” designs, but maybe using /run/user/$UID is better than leaving keys on disk. If $HOME and /run were out of the question for unprivileged users, we wouldn’t have that many options: I don’t think we can rely on the presence of a userspace key-storage daemon, and I don’t know about the practicality of using the kernel keyring (especially WRT debugging the state of the system in an emergency).

@rhatdan @mheon @mtrmac So what is the consensus? Store the auth in XDG_RUNTIME_DIR if it is set, otherwise default to /run/user/UID (and create it if it doesn't exist), like we were doing?

@umohnani8 I don't believe we have a consensus yet, so leaving it as-is for now will have to do

How about: if XDG_RUNTIME_DIR is not defined, we use /run/containers/UID? Then we don't have to worry about systemd removing the directory.

I have no problem with this. I presume we create the directory if it doesn't exist? Will we have permissions to do this if we're not root?

I agree with this also. Yes, you will have to be root to be able to create the /run/containers/UID directory. But all podman and buildah commands are run as root anyway, and we can add it to the docs: "always run login and logout as root".

@rhatdan should I go ahead and update the patch to save in /run/containers/UID? The user will have to run as root, though, otherwise we won't be able to create the directory under /run.

Yes, I am really primarily concerned about the root use case.

One more thing, apparently /run/containers already exists, with /run/containers/registries.conf created by the atomic-registries package.

I can’t see any issue with sharing it, and using /run/containers/$UID for the login status, but I may be missing something.

Is sharing the directory OK?

Yes, I don't see an issue. I am trying to keep all of our stuff under /etc/containers, /run/containers, /var/lib/containers, /usr/share/containers...
