Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
When executing podman save with multiple images, only one image is saved.
Steps to reproduce the issue:
podman save > images.tar docker.io/busybox:1.27.2 docker.io/metallb/controller:v0.3.1 docker.io/metallb/speaker:v0.3.1
podman load -i images.tar
podman images
Describe the results you received:
Only the first image is saved and loaded.
Describe the results you expected:
All three images should be saved and loaded.
The documentation states that podman save is equivalent to docker save, whereas docker save can actually save multiple images at once.
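For comparison, the docker-side round trip handles multiple images in a single archive (a sketch reusing the image names from the reproduction above, with the standard docker save -o / docker load -i flags):

docker save -o images.tar docker.io/busybox:1.27.2 docker.io/metallb/controller:v0.3.1 docker.io/metallb/speaker:v0.3.1
docker load -i images.tar
docker images    # all three images show up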
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
podman version 1.1.2
Output of podman info --debug:
debug:
  compiler: gc
  git commit: ""
  go version: go1.10.2
  podman version: 1.1.2
host:
  BuildahVersion: 1.7.1
  Conmon:
    package: podman-1.1.2-2.git0ad9b6b.el7.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.14.0-dev, commit: 6e07c13bf86885ba6d71fdbdff90f436e18abe39-dirty'
  Distribution:
    distribution: '"centos"'
    version: "7"
  MemFree: 96034816
  MemTotal: 3973865472
  OCIRuntime:
    package: runc-1.0.0-59.dev.git2abd837.el7.centos.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.0'
  SwapFree: 2147217408
  SwapTotal: 2147479552
  arch: amd64
  cpus: 2
  hostname: controller.lan
  kernel: 3.10.0-957.5.1.el7.x86_64
  os: linux
  rootless: false
  uptime: 59m 48.22s
insecure registries:
  registries: []
registries:
  registries:
  - registry.access.redhat.com
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 28
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes
Additional environment details (AWS, VirtualBox, physical, etc.):
@haircommander PTAL
I got bitten by this bug as well. Saved multiple entries, all pointing to the _same_ image...
I guess I will use multiple files as a workaround, but it would be good if this could be fixed?
@haircommander Any update on this?
Still waiting on https://github.com/containers/image/issues/610. I don't currently have time to tackle it on the c/image side.
My workaround was to use one tarball for each image, and a loop to load them one by one:
find ... -type f | xargs -n 1 podman load -i
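In full, the workaround looks something like this (the directory name and image list are illustrative, not from the original comment):

mkdir -p images
# save each image into its own tarball
for img in docker.io/busybox:1.27.2 docker.io/metallb/controller:v0.3.1 docker.io/metallb/speaker:v0.3.1; do
  podman save -o "images/$(echo "$img" | tr '/:' '__').tar" "$img"
done
# load the tarballs back one at a time on the target host
find images -type f -name '*.tar' | xargs -n 1 podman load -i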
@haircommander any update on this?
Unfortunately not, I am not sure I have the bandwidth to tackle it both here and in c/image.
Ok @vrothberg You want to grab it?
Let's move discussion over to https://github.com/containers/image/issues/610. I'll add it to my backlog, but I can't commit to a schedule at the moment.
This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.
This is still a good issue, not sure when we can get someone to work on it?
@mtrmac @vrothberg PTAL
Version: 1.4.4
RemoteAPI Version: 1
Go Version: go1.10.3
OS/Arch: linux/amd64
debug:
  compiler: gc
  git commit: ""
  go version: go1.10.3
  podman version: 1.4.4
host:
  BuildahVersion: 1.9.0
  Conmon:
    package: podman-1.4.4-4.el7.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 0.3.0, commit: unknown'
  Distribution:
    distribution: '"rhel"'
    version: "7.7"
  MemFree: 4073623552
  MemTotal: 8350715904
  OCIRuntime:
    package: containerd.io-1.2.10-3.2.el7.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
      spec: 1.0.1-dev
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 12
  hostname: registry
  kernel: 3.10.0-1062.el7.x86_64
  os: linux
  rootless: false
  uptime: 56m 35.65s
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 2
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:25:41 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:24:18 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
For comparison, here is the same selection of images (every repository whose name starts with "registry") saved into a single archive with each tool:

IDS=$(docker images | awk '{if ($1 ~ /^(registry)/) print $3}')
echo $IDS
docker save $IDS -o xxx.tar

IDS=$(podman images | awk '{if ($1 ~ /^(registry)/) print $3}')
echo $IDS
podman save $IDS -o xxx.tar
This is still a good issue, not sure when we can get someone to work on it?
@mtrmac @vrothberg PTAL
I fear we have other tasks with higher priority on our tables at the moment. Let's move discussion over to https://github.com/containers/image/issues/610.
Sadly no progress.
I am trying to champion podman over docker; however, my colleagues use "docker save" to air-gap images into a docker-archive tarball. This means "podman" is not a drop-in replacement for "docker", which is suboptimal. At the very least, "podman save" should error out with a "can't save multiple images" message rather than silently succeeding with something that is simply not equivalent to "docker save".
Thanks for the feedback, @marcfiu! Throwing an error is definitely better.
I opened https://github.com/containers/libpod/pull/5659 to address your feedback. Note that it does not error out but only logs a message when more than one argument is passed, as I don't want to break users who rely on the current behavior.
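Until that lands, a caller-side guard can make the limitation explicit (a sketch, not part of podman or of the PR above; image names illustrative):

# warn before calling podman save with more than one image
images="docker.io/busybox:1.27.2 docker.io/metallb/controller:v0.3.1"
set -- $images
if [ "$#" -gt 1 ]; then
  echo "warning: podman save currently archives only the first image" >&2
fi
podman save -o images.tar "$@"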
@baude, afaiu this issue should get some attention soon, right?
A friendly reminder that this issue had no activity for 30 days.
This is being worked on.
A friendly reminder that this issue had no activity for 30 days.
Most of the work is happening in containers/image right now.