Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
After logging in to our locally hosted repository and attempting to podman pull our latest image, I received a couple of errors. One, related to the transport, was fixed by adding docker:// to the call; the error below is still present (contact me for the URL to the image):
ERRO[0011] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: lchown /var/www/drupal/web/config/active: invalid argument
Failed
(0x183b040,0xc00052b600)
Steps to reproduce the issue:
podman login -p {SECRET KEY} -u unused {IMAGE REPO}
podman pull docker://{IMAGE REPO}
Error
Describe the results you received:
Error instead of an image
Describe the results you expected:
Image to be used
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
podman version 1.2.0-dev
Output of podman info --debug:
MemFree: 511528960
MemTotal: 5195935744
OCIRuntime:
package: Unknown
path: /usr/local/sbin/runc
version: |-
runc version 1.0.0-rc6+dev
commit: f79e211b1d5763d25fb8debda70a764ca86a0f23
spec: 1.0.1-dev
SwapFree: 0
SwapTotal: 0
arch: amd64
cpus: 4
hostname: penguin
kernel: 4.19.4-02480-gd44d301822f0
os: linux
rootless: true
uptime: 136h 10m 42.4s (Approximately 5.67 days)
insecure registries:
registries: []
registries:
registries:
- docker.io
- registry.fedoraproject.org
- registry.access.redhat.com
- {IMAGE REPO}
store:
ConfigFile: /home/ldary/.config/containers/storage.conf
ContainerStore:
number: 0
GraphDriverName: vfs
GraphOptions: null
GraphRoot: /home/ldary/.local/share/containers/storage
GraphStatus: {}
ImageStore:
number: 0
RunRoot: /run/user/1000
VolumePath: /home/ldary/.local/share/containers/storage/volumes
Additional environment details (AWS, VirtualBox, physical, etc.):
This is a Debian sandbox on a Pixelbook. We found that one error, which also appeared when run without the transport, was removed by adding docker://. @vbatts also had me run this command: findmnt -T /home/ldary/.local/share/containers/storage
Output
TARGET SOURCE FSTYPE OPTIONS
/ /dev/vdb[/lxd/storage-pools/default/containers/penguin/rootfs] btrfs rw,relatime,discard,space_cache,user_subvol_rm_allowed,subvolid=266,subvol=/lxd/storage-pools/default/containers/penguin/rootfs
@giuseppe PTAL
yes, probably not enough IDs mapped into the namespace (we require 65k) and the image is using some higher ID. What is {IMAGE REPO}?
if you cannot share the image, can you please create a container as root user using that image and run this command:
find / -xdev -printf "%U:%G\n" | sort | uniq
What is the output?
@giuseppe I wasn't able to create it with root either. I'll email you the internal image repo details.
@giuseppe here is the content of the Dockerfile for the image:
# This is a data container so keep the image as small as possible
FROM alpine:3.4
# Make the directory structure that will be exposed as volumes by this data container
RUN mkdir -p /var/www/drupal/web/sites/default/files \
/var/www/drupal/web/config/active \
/docker-entrypoint-initdb.d \
/drupal-data
COPY drupal-db.sql.gz /docker-entrypoint-initdb.d
ADD drupal-filesystem.tar.gz /drupal-data
RUN rm -rf /drupal-data/files/css /drupal-data/files/js /drupal-data/files/php
RUN cp -r /drupal-data/config/lightning/* /var/www/drupal/web/config/active
RUN cp -r /drupal-data/files/* /var/www/drupal/web/sites/default/files
CMD true
What file from the host is copied to '/var/www/drupal/web/config/active'? Can you stat it?
do you get exactly the same error when running as root?
@giuseppe same error when running as root, correct
@KamiQuasi can I get access to the image?
@giuseppe let me see if I can find out who has that permission; it shouldn't be a problem, though.
@giuseppe I believe you should have access to the image now at the URL I sent in email
I've not received any email. Did you send to [email protected]?
@giuseppe Subject is "Github Issue 2542" re-sent it again to make sure.
I confirm the issue is that there are not enough IDs in the namespace, it works for me as root:
$ sudo podman run --rm -ti drupal-data ls -ln /var/www/drupal/web/config
total 136
drwxrwx--- 2 1001410000 0 135168 Feb 28 07:25 active
Could you change the image to use smaller IDs?
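To make the failure concrete, here is a sketch of the range check that fails here (assumed logic, not podman's actual code): a file's owner ID must fall inside the subordinate-ID range mapped into the rootless namespace, which is typically 65536 IDs long.

```shell
subid_count=65536        # typical range length from /etc/subuid
image_gid=1001410000     # GID on /var/www/drupal/web/config/active above
if [ "$image_gid" -lt "$subid_count" ]; then
  echo "GID $image_gid is mappable"
else
  echo "GID $image_gid exceeds the $subid_count mapped IDs"
fi
```

With the GID shown in the ls -ln output above, the check prints that the GID exceeds the mapped range, which is exactly why lchown fails with "invalid argument".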
@giuseppe sorry for my ignorance, but I don't actually know how to do that. Is it something I can modify in the Dockerfile?
@KamiQuasi you can chown the files to not have that GID.
What user is going to read them? Are they owned by root?
Since we found out the issue is in the image, I am going to close this issue. Please feel free to reopen it or add more comments.
I just hit this issue as well - I'm not using a custom image, but just testing fedora:latest referenced in this post. I am on Ubuntu 16.04 so I installed podman via apt-get install... The version is podman version 1.3.0-dev.
Here is the non sudo pull attempt - note the same error reported above:
$ podman pull docker://fedora:latest
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids
Trying to pull docker://fedora:latest...Getting image source signatures
Copying blob 01eb078129a0 done
Copying config d09302f77c done
Writing manifest to image destination
Storing signatures
ERRO[0010] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 192:192 for /run/systemd/netif): lchown /run/systemd/netif: invalid argument
ERRO[0011] Error pulling image ref //fedora:latest: Error committing the finished image: error adding layer with blob "sha256:01eb078129a0d03c93822037082860a3fefdc15b0313f07c6e1c2168aef5401b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 192:192 for /run/systemd/netif): lchown /run/systemd/netif: invalid argument
Failed
(0x189ade0,0xc0007caa20)
and then with sudo, all is well!
$ sudo podman pull fedora:latest
[sudo] password for vanessa:
Trying to pull docker://fedora:latest...Getting image source signatures
Copying blob 01eb078129a0 done
Copying config d09302f77c done
Writing manifest to image destination
Storing signatures
d09302f77cfcc3e867829d80ff47f9e7738ffef69730d54ec44341a9fb1d359b
Thanks in advance for your help! This is the very first time I'm using podman, so I'm a super noob.
Let me know if it's better practice to open a new issue, happy to do that too!
This looks like you don't have any range of UIDs in /etc/subuid. Therefore your container can only handle root content; any other UID is going to cause failures. Add a range of UIDs to /etc/subuid and you should be fine.
Thanks @rhatdan, I peeked at that but I do appear to have a range (should the range be different?)
$ cat /etc/subuid
vanessa:100000:65536
$ cat /etc/subgid
vanessa:100000:65536
They look similar to the ones in this example, but if the above is not correct, it's likely that I missed a step. Could you point me to the docs that tell the user how to set this up correctly? Here is the trail that I followed:
If there are additional steps required to get it working, currently some users will only figure this out via the error message. I'd like to suggest that some additional documentation be added to the install to address this.
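For reference, the /etc/subuid entries shown above follow the user:start:count format documented in the shadow-utils subuid(5) man page. A minimal sketch parsing the sample entry from this thread:

```shell
# Parse one /etc/subuid entry (format: user:start:count).
line="vanessa:100000:65536"
user=${line%%:*}     # strip everything after the first colon
count=${line##*:}    # strip everything up to the last colon
echo "$user has $count subordinate IDs"
```

A count of 65536 matches the minimum giuseppe mentioned earlier, so the range itself looks sufficient.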
What does
podman run fedora cat /proc/self/uid_map
show?
Ah, more evidence! The original command needed docker:// to specify the registry:
$ podman run fedora cat /proc/self/uid_map
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids
Error: unable to pull fedora: image name provided is a short name and no search registries are defined in /etc/containers/registries.conf.
and then when specified, we get the same error (but with an extra tidbit of evidence!) See the last lines.
$ podman run docker://fedora cat /proc/self/uid_map
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids
Trying to pull docker://fedora...Getting image source signatures
Copying blob 01eb078129a0 done
Copying config d09302f77c done
Writing manifest to image destination
Storing signatures
ERRO[0012] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 192:192 for /run/systemd/netif): lchown /run/systemd/netif: invalid argument
ERRO[0012] Error pulling image ref //fedora:latest: Error committing the finished image: error adding layer with blob "sha256:01eb078129a0d03c93822037082860a3fefdc15b0313f07c6e1c2168aef5401b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 192:192 for /run/systemd/netif): lchown /run/systemd/netif: invalid argument
Failed
Error: unable to pull docker://fedora: unable to pull image: Error committing the finished image: error adding layer with blob "sha256:01eb078129a0d03c93822037082860a3fefdc15b0313f07c6e1c2168aef5401b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 192:192 for /run/systemd/netif): lchown /run/systemd/netif: invalid argument
So you don't have to scroll:
"sha256:01eb078129a0d03c93822037082860a3fefdc15b0313f07c6e1c2168aef5401b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 192:192 for /run/systemd/netif): lchown /run/systemd/netif: invalid argument
we downgraded the error of not having multiple uids to the warning you are getting:
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids
Are newuidmap and newgidmap installed? I think you may need to install them separately on Ubuntu
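As an illustration only (assumed behavior, not podman's actual code): when the setuid helpers newuidmap/newgidmap are missing, rootless podman cannot set up a multi-ID mapping and falls back to mapping a single ID, which is what the WARN line above is describing.

```shell
# Simulate the fallback decision; have_helpers=false stands in for
# "the uidmap package is not installed".
have_helpers=false
if [ "$have_helpers" = true ]; then
  echo "multi-ID mapping available"
else
  echo "falling back to single-ID mapping"
fi
```

On a real system you could check with `command -v newuidmap newgidmap` instead of the simulated flag.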
Boum! That did the trick :)
$ sudo apt-get install -y uidmap
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
snapd-login-service
Use 'sudo apt autoremove' to remove it.
The following NEW packages will be installed:
uidmap
0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
Need to get 64.8 kB of archives.
After this operation, 336 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 uidmap amd64 1:4.2-3.1ubuntu5.3 [64.8 kB]
Fetched 64.8 kB in 0s (204 kB/s)
Selecting previously unselected package uidmap.
(Reading database ... 455142 files and directories currently installed.)
Preparing to unpack .../uidmap_1%3a4.2-3.1ubuntu5.3_amd64.deb ...
Unpacking uidmap (1:4.2-3.1ubuntu5.3) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up uidmap (1:4.2-3.1ubuntu5.3) ...
$ podman pull docker://fedora:latest
Trying to pull docker://fedora:latest...Getting image source signatures
Copying blob 01eb078129a0 done
Copying config d09302f77c done
Writing manifest to image destination
Storing signatures
d09302f77cfcc3e867829d80ff47f9e7738ffef69730d54ec44341a9fb1d359b
Should we add this here? (this is in install.md)

@vsoch yes please!
We need more contributors running on ubuntu desktops...
I got lots of those :)
I had this same issue (on ArchLinux). I think the cause was that I had run podman before creating /etc/sub{u,g}id. After killing all running podman-related processes and a (probably over-zealous) sudo rm -rf ~/.{config,local/share}/containers /run/user/$(id -u)/{libpod,runc,vfs-*}, the issue disappeared.
I'm on openSUSE Leap 15.1 and can confirm @jcaesar's steps are effective. To be more specific, I found that killing the existing podman (cache process?) and running rm /run/user/$UID/libpod/pause.pid is enough for me. I guess it forces podman to reload /etc/sub?id.
Full procedure:
sudo touch /etc/sub{u,g}id
sudo usermod --add-subuids 10000-75535 $(whoami)
sudo usermod --add-subgids 10000-75535 $(whoami)
rm /run/user/$(id -u)/libpod/pause.pid
It seems that running podman system migrate instead of deleting the pid file should be more elegant?
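As a sanity check on the range used in the procedure above (assuming usermod treats the range as inclusive), 10000-75535 yields exactly the 65536 IDs mentioned earlier in the thread:

```shell
start=10000
end=75535
echo $((end - start + 1))   # number of subordinate IDs in the range
```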
I had this same issue (on ArchLinux). I think the cause was that I had run podman before creating
/etc/sub{u,g}id. After killing all running podman-related processes and a (probably over-zealous) sudo rm -rf ~/.{config,local/share}/containers /run/user/$(id -u)/{libpod,runc,vfs-*}, the issue disappeared.
Works for me on Ubuntu 18.04.
I wanted to build a simple local WordPress environment for development according to https://docs.docker.com/compose/wordpress/
I was getting this error when using podman-compose on Manjaro 5.1.21-1:
ERRO[0085] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 0:42 for /etc/gshadow): lchown /etc/gshadow: invalid argument
ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 0:42 for /etc/gshadow): lchown /etc/gshadow: invalid argument
What I did to get rid of the error:
Thank you all for helping me figure this out !
Full procedure:
sudo touch /etc/sub{u,g}id
sudo usermod --add-subuids 10000-75535 $(whoami)
sudo usermod --add-subgids 10000-75535 $(whoami)
rm /run/user/$(id -u)/libpod/pause.pid
This works on Ubuntu.
In my case the cause was mainly that the username had changed.
rm /run/user/$(id -u)/libpod/pause.pid
it is safer to use podman system migrate as containers need to be restarted as well