Podman: Feature - Add --tree option to podman

Created on 28 Feb 2018 · 23 comments · Source: containers/podman

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind feature

Description

Docker removed the --tree option a long time ago. It basically shows the parent-child relationship of the image layers, similar to pstree, and it was awesome for hunting down provenance. Now I use a tool called DockViz, which is delivered as a container image from Docker Hub. Sadly it's not secure, because it's third-party and you have to run it privileged.

Steps to reproduce the issue:

  1. docker pull

  2. docker images --tree (gone and deprecated)

  3. podman has no option either, and you can't use dockviz

Describe the results you received:

podman run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock nate/dockviz images -t
option parsing failed: Unknown option --exit-dir
write child: broken pipe

Describe the results you expected:
Something like:

podman run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock nate/dockviz images -t
Unable to find image 'nate/dockviz:latest' locally
Trying to pull repository registry.access.redhat.com/nate/dockviz ...
Trying to pull repository registry.access.redhat.com/nate/dockviz ...
Trying to pull repository registry.fedoraproject.org/nate/dockviz ...
Trying to pull repository docker.io/nate/dockviz ...
sha256:902ef37d58ca85e03d0f286651f12754767ccc6704c00e8b78bf45f0ac7e32f1: Pulling from docker.io/nate/dockviz
9e912cc91e6e: Pull complete
Digest: sha256:902ef37d58ca85e03d0f286651f12754767ccc6704c00e8b78bf45f0ac7e32f1
Status: Downloaded newer image for docker.io/nate/dockviz:latest
├─ Virtual Size: 195.9 MB
│ └─ Virtual Size: 195.9 MB
│ ├─ Virtual Size: 362.7 MB
│ │ ├─09fb7aad87c8 Virtual Size: 1.1 GB Tags: registry.access.redhat.com/openshift3/ose-docker-registry:v3.6.173.0.5
│ │ └─ Virtual Size: 970.1 MB
│ │ ├─e4aa5cb65487 Virtual Size: 970.1 MB Tags: registry.access.redhat.com/openshift3/ose-deployer:v3.6.173.0.5
│ │ └─e02b7ea3c91f Virtual Size: 988.7 MB Tags: registry.access.redhat.com/openshift3/ose-haproxy-router:v3.6.173.0.5
│ ├─febdb23e8687 Virtual Size: 208.6 MB Tags: registry.access.redhat.com/openshift3/ose-pod:v3.6.173.0.5
│ └─391e5989cdec Virtual Size: 226.2 MB Tags: registry.access.redhat.co

Or even better:

podman images --tree
├─ Virtual Size: 195.9 MB
│ └─ Virtual Size: 195.9 MB
│ ├─ Virtual Size: 362.7 MB
│ │ ├─09fb7aad87c8 Virtual Size: 1.1 GB Tags: registry.access.redhat.com/openshift3/ose-docker-registry:v3.6.173.0.5
│ │ └─ Virtual Size: 970.1 MB
│ │ ├─e4aa5cb65487 Virtual Size: 970.1 MB Tags: registry.access.redhat.com/openshift3/ose-deployer:v3.6.173.0.5
│ │ └─e02b7ea3c91f Virtual Size: 988.7 MB Tags: registry.access.redhat.com/openshift3/ose-haproxy-router:v3.6.173.0.5
│ ├─febdb23e8687 Virtual Size: 208.6 MB Tags: registry.access.redhat.com/openshift3/ose-pod:v3.6.173.0.5
│ └─391e5989cdec Virtual Size: 226.2 MB Tags: registry.access.redhat.co

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

(paste your output here)

Additional environment details (AWS, VirtualBox, physical, etc.):

All 23 comments

This sounds perfectly doable, though the display code could get interesting to write.

@runcom WDYT
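For reference, here is a minimal sketch of the kind of recursive printer such display code might need. The Node type and the sample labels are hypothetical stand-ins, not podman's actual data structures:

package main

import "fmt"

// Node is a hypothetical stand-in for an image or layer entry in the tree.
type Node struct {
    Label    string
    Children []*Node
}

// printTree draws a pstree-style view with box-drawing characters,
// carrying the accumulated prefix down each branch.
func printTree(n *Node, prefix string, isLast bool) {
    connector, childPrefix := "├─ ", prefix+"│  "
    if isLast {
        connector, childPrefix = "└─ ", prefix+"   "
    }
    fmt.Println(prefix + connector + n.Label)
    for i, c := range n.Children {
        printTree(c, childPrefix, i == len(n.Children)-1)
    }
}

func main() {
    // Hypothetical data: one base image with two derived images.
    root := &Node{Label: "[example.com/base:latest] 0123abcd", Children: []*Node{
        {Label: "[example.com/app:v1] 4567cdef"},
        {Label: "[example.com/app:v2] 89ab0123"},
    }}
    printTree(root, "", true)
}

The only subtle part is carrying the right prefix down each branch, so that │ continues under non-final siblings and plain spaces continue under the last one.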

Seems like something we could have an intern do?
@haircommander WDYT?

It definitely seems doable!

@haircommander Have you looked into this?

No I have not yet @rhatdan

@haircommander, if you are not working on this, can I try it?

@haircommander is back at school, so @kunalkushwaha, you are welcome to work on this.

@rhatdan Thanks :). Also, feel free to assign me anything urgent that you feel I can work on.

We have an outstanding issue on --mount support for podman run/create, if you want to tackle that one.

@kunalkushwaha, did you find time to work on this issue? Feel free to chat on #podman in case you have questions.

@vrothberg Hi, yes, I am looking into this.
I am trying to figure out how to map image layer digests to images, so I can build the dependency tree.
I will discuss the details by end of day.

_Thanks for pointing out #podman; just joined #podman on IRC._

Awesome. Thanks, @kunalkushwaha!

I did a draft implementation to explore how this could work.

The expected output

<Tag>  <ID>  <Size> <Short Description from history>
     ├─  <Tag>  <ID>  <Size> <Short Description from history>
     └─  <Tag>  <ID>  <Size> <Short Description from history>

Sample output below

$ sudo podman tree localhost/kk/test2 
└─  [localhost/kk/test2:latest] 243d869d1 156 
  ├─  [localhost/kk/test2:latest] 268e94b38 3584 
  ├─  [localhost/kk/test2:latest] 5182e9677 3584 /bin/sh -c #(nop)  CMD ["/bin/bash"]     # Ideally it should show centos:latest
  ├─  [localhost/kk/test2:latest] <missing> 3584 /bin/sh -c #(nop)  LABEL org.label-schem
  ├─  [localhost/kk/test2:latest] <missing> 208290304 /bin/sh -c #(nop) ADD file:6340c690b0886

While doing this, I ran into a few limitations that make it hard to show all of the possible/expected information:

  • An OCI image's history contains limited information, i.e. individual history entries do _not_ carry information like an image ID or digest.
  • History entries do not have tags associated with them.
  • Since a digest is not available for each entry, the corresponding image ID is hard to find.
  • Many of the entries in the history do not correspond to any real image or layer in the image store (a small sketch illustrating this follows the list).
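To make that concrete, here is a hedged sketch using the OCI image-spec Go types; the JSON config below is hypothetical and trimmed, not taken from a real image. History entries carry only free text and an empty_layer flag, while the layer digests live separately in rootfs.diff_ids, and neither side carries an image ID or tag:

package main

import (
    "encoding/json"
    "fmt"

    v1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
    // Hypothetical OCI image config, trimmed to the relevant fields.
    raw := []byte(`{
      "rootfs": {"type": "layers", "diff_ids": ["sha256:9e912cc91e6e"]},
      "history": [
        {"created_by": "/bin/sh -c #(nop) ADD file:6340c690b0886"},
        {"created_by": "/bin/sh -c #(nop)  CMD [\"/bin/bash\"]", "empty_layer": true}
      ]
    }`)

    var cfg v1.Image
    if err := json.Unmarshal(raw, &cfg); err != nil {
        panic(err)
    }

    // Only non-empty history entries consume a diff_id, and even those
    // carry no image ID, digest, or tag of their own.
    next := 0
    for _, h := range cfg.History {
        if h.EmptyLayer {
            fmt.Printf("metadata-only entry: %q\n", h.CreatedBy)
            continue
        }
        fmt.Printf("layer %s: %q\n", cfg.RootFS.DiffIDs[next], h.CreatedBy)
        next++
    }
}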

Using logic similar to the dockviz project, I am able to build a dependency tree for any image, based on its history plus a lookup table built for each image in the local store.
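A rough sketch of that approach, under my reading of the containers/storage API (the package and function names here are mine, not podman's): build a lookup table from each image's top layer to its names, and walk a layer's Parent links to reconstruct the chain.

package imagetree

import (
    "fmt"

    "github.com/containers/storage"
)

// layerChain walks from an image's top layer down to the base layer by
// following each layer's Parent field, returning layer IDs top-first.
func layerChain(store storage.Store, img *storage.Image) ([]string, error) {
    var chain []string
    id := img.TopLayer
    for id != "" {
        layer, err := store.Layer(id)
        if err != nil {
            return nil, fmt.Errorf("looking up layer %s: %w", id, err)
        }
        chain = append(chain, layer.ID)
        id = layer.Parent
    }
    return chain, nil
}

// buildLayerTable builds the lookup table: for every image in the local
// store, map its top layer ID to that image's names (tags).
func buildLayerTable(store storage.Store) (map[string][]string, error) {
    images, err := store.Images()
    if err != nil {
        return nil, err
    }
    table := make(map[string][]string)
    for _, img := range images {
        table[img.TopLayer] = append(table[img.TopLayer], img.Names...)
    }
    return table, nil
}

With the table in hand, checking each ID in the chain against it tells you which layers actually correspond to a tagged image in the local store.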

Still, a few things are not correct, and I would like to understand whether we can get this information to build meaningful dependencies.

  1. Is it okay if we do not show layers that have no real reference in the local store, i.e. layer info that is present in the history but was squashed/merged into an image?

    • This will result in fewer layers in the tree, but all layers will have a real reference.

    • This makes the implementation easier, and while debugging/tracing, all information will have some reference to look at.

      This would turn the above sample output into:

$ sudo podman tree localhost/kk/test2 
└─  [localhost/kk/test2:latest] 243d869d1 156 
  β”œβ”€  [localhost/kk/test1:latest] 268e94b38 3584 
  β”œβ”€  [centos:latest] 5182e9677 3584 /bin/sh -c #(nop)  CMD ["/bin/bash"]  

  2. How can we get image tags from a digest, like docker/moby does here: moby/daemon/images/image_history.go?

@kunalkushwaha, that looks really nice! I think at that point, it might be helpful to look at some code to guide the conversation, so feel free to open a PR where we can continue.

@mtrmac might be interested in this change and have some valuable input.

Can someone describe in exact words what the child/parent/peer relationship in the tree is intended to be based on? (Not necessarily the exact underlying implementation, but the desired semantics at least.) It’s not entirely clear to me in the original example, and @kunalkushwaha’s example seems to be quite different (in particular, the original example clearly uses the top-level image name:tag only once, whereas the other example uses it for all layers).

AFAICS there is a fundamental problem in building the history tags based only on the layer history (ChainID) (which is the only exact way to do that, because the history text is ~arbitrary in principle, and the configs of base images are lost) in that there may exist multiple different images with the same layer set (ChainID) but different configs, and building the history based on ChainID only can’t differentiate between them. (This may not be a problem when the local system has precise β€œparent image” links, like Docker has for locally-derived (not pulled) images.)

That’s in addition to the smaller (but necessary to handle) output design concern, that even if the parent image is not ambiguous, it may have multiple tags.

How can we get image tags from a digest, like docker/moby does here: moby/daemon/images/image_history.go?

Given a github.com/containers/storage.Image, the Names array contains (at least per its use in c/image, and perhaps not exclusively) name:tag references. Whether you can get the Image object depends on what the input for the lookup is (as discussed above); if it is a layer, the image’s TopLayer should match (but there may be multiple matches!)

if it is a layer, the image’s TopLayer should match (but there may be multiple matches!)

That is storage.Store.ImagesByTopLayer.
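Assuming that API, a small sketch of the lookup: resolve a layer ID to every image whose TopLayer matches it, then collect each image's Names. How to present multiple matches (or untagged images) is left to the caller.

package imagetree

import (
    "github.com/containers/storage"
)

// tagsForLayer returns the name:tag references of every image whose top
// layer is the given layer ID. Multiple images may share a top layer, so
// the result can contain tags from more than one image.
func tagsForLayer(store storage.Store, layerID string) ([]string, error) {
    images, err := store.ImagesByTopLayer(layerID)
    if err != nil {
        return nil, err
    }
    var tags []string
    for _, img := range images {
        if len(img.Names) == 0 {
            // Untagged (dangling) image; fall back to its ID.
            tags = append(tags, img.ID)
            continue
        }
        tags = append(tags, img.Names...)
    }
    return tags, nil
}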

@mtrmac my example is not correct due to the limitations I explained in https://github.com/containers/libpod/issues/415#issuecomment-428878575. For the above example, I just added the image tag to each layer (which is not what is expected).

As I stated, if we consider only layers which exist in the local store, it will be easier to display the information.
_(I updated the output in the above comment; it should be a different image name.)_

Currently, the output looks like this:

$ sudo podman tree localhost/kk/test2 
└─  [localhost/kk/test2:latest] 243d869d1 156 
   └─ [localhost/kk/test1:latest] 268e94b38 3584 
     └─  [centos:latest] 5182e9677 3584 /bin/sh -c #(nop)  CMD ["/bin/bash"]  

But it can be modified as below, which I guess is the better picture of "parent", "child" & "peer" that you meant:

$ sudo podman tree localhost/kk/test2 
└─  [centos:latest] 5182e9677 3584 /bin/sh -c #(nop)  CMD ["/bin/bash"] 
   └─ [localhost/kk/test1:latest] 268e94b38 3584 
     └─ [localhost/kk/test2:latest] 243d869d1 156 

i.e. the root will be the base image, from which the test1:latest image was built; the test2:latest image was built from test1:latest here.

If test1:latest and test2:latest are both built from centos:latest, then the graph should be:

$ sudo podman tree localhost/kk/test2 
└─  [centos:latest] 5182e9677 3584 /bin/sh -c #(nop)  CMD ["/bin/bash"] 
   ├─ [localhost/kk/test1:latest] 268e94b38 3584 
   └─ [localhost/kk/test2:latest] 243d869d1 156 

@vrothberg I will create a PR to make discussion easier.

PR made as #1642

This continues to be worked on.

Getting closer.

Closing per #1642 being merged. Congrats, @kunalkushwaha, on fixing the currently oldest open issue!
