Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Container process seems to have access to bind volume paths and is referencing them. I know this sounds impossible, but I can't explain this any other way.
Steps to reproduce the issue:
/var/opt/vault
├── config
│   └── conf.json
├── file
└── logs
With conf.json containing:
storage "file" {
  path = "/var/opt/vault/file"
}

listener "tcp" {
  address = "[::]:8080"
  tls_disable = 1
}
podman run \
  --hostname=vault \
  --cap-add=IPC_LOCK \
  --volume /var/opt/vault/file:/vault/file \
  --volume /var/opt/vault/config:/vault/config \
  vault server
vault operator init
Describe the results you received:
This yields
vault operator init
Error initializing: Error making API request.
URL: PUT http://vault.c.artfuldodge.io:8080/v1/sys/init
Code: 400. Errors:
* failed to initialize barrier: failed to persist keyring: mkdir /var/opt/vault: permission denied
Describe the results you expected:
I don't expect the parent folder of the bound volumes to be referenced. In fact I don't expect it to be possible for the contained process to be able to read or know anything about where it is mounted at all.
Additional information you deem important (e.g. issue happens only occasionally):
bind mounts are on a btrfs file system
Output of podman version:
[ben@mullion ~]$ podman version
Version: 0.12.1.2
Go Version: go1.10.3
OS/Arch: linux/amd64
Output of podman info --debug:
debug:
  compiler: gc
  git commit: ""
  go version: go1.10.3
  podman version: 0.12.1.2
host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: podman-0.12.1.2-2.git9551f6b.el7.centos.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: b909c9e1a3e8f14d5694a118fb9c0c0325a31d4f-dirty'
  Distribution:
    distribution: '"centos"'
    version: "7"
  MemFree: 183664640
  MemTotal: 16629342208
  OCIRuntime:
    package: containerd.io-1.2.4-3.1.el7.x86_64
    path: /usr/sbin/runc
    version: |-
      runc version 1.0.0-rc6+dev
      commit: 6635b4f0c6af3810594d2770f662f34ddc15b40d
      spec: 1.0.1-dev
  SwapFree: 6001258496
  SwapTotal: 6002044928
  arch: amd64
  cpus: 4
  hostname: mullion
  kernel: 3.10.0-957.5.1.el7.x86_64
  os: linux
  rootless: false
  uptime: 528h 21m 30.83s (Approximately 22.00 days)
insecure registries:
  registries: []
registries:
  registries:
  - registry.access.redhat.com
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.centos.org
store:
  ContainerStore:
    number: 38
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
  ImageStore:
    number: 7
  RunRoot: /var/run/containers/storage
Additional environment details (AWS, VirtualBox, physical, etc.):
physical box.
It is not a bug that a container knows the host mount source; you can verify this by reading /proc/self/mountinfo inside the container, or by using findmnt:
$ sudo podman run --rm -v /tmp/foo/bar/baz:/foo fedora findmnt /foo
TARGET SOURCE FSTYPE OPTIONS
/foo tmpfs[/foo/bar/baz] tmpfs rw,nosuid,nodev,seclabel
how does it look in your container?
It might be Vault itself that resolves the mount point to the host path.
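For reference, the same information findmnt reports can be read directly from /proc/self/mountinfo: field 4 of each line is the root of the mount within its source filesystem, which is where the bracketed suffix (e.g. [/foo/bar/baz]) in the findmnt output comes from. A minimal sketch, assuming a Linux host:

```shell
# Print the mount point (field 5) and its root on the source filesystem
# (field 4) for every mount visible to this process. Inside a container,
# bind-mounted volumes show the host-side path in field 4 -- this is
# expected kernel behaviour, not something podman adds.
awk '{ print $5, "root=" $4 }' /proc/self/mountinfo
```

Running this inside the container would show the same host paths findmnt reports above.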
Is there anything I can do with podman / mount options to prevent the mount paths from being visible? What's the best place to discuss given that this is expected behaviour? I can't find a mailing list link in the readme.
edit: findmnt info
/ # findmnt /vault/config
TARGET SOURCE FSTYPE OPTIONS
/vault/config /dev/sdc[/vault/config] btrfs rw,relatime,seclabel,space_cache,subvolid=672,subvol=/vault/config
/ # findmnt /vault/file
TARGET SOURCE FSTYPE OPTIONS
/vault/file /dev/sdc[/vault/file] btrfs rw,relatime,seclabel,space_cache,subvolid=672,subvol=/vault/file
/ #
You could probably set up an intermediate mount namespace and run podman from there, but you would need to do it manually. It is error-prone though; restarting the container is probably not going to work.
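A rough sketch of that idea, assuming util-linux `unshare` is available. Note that this only re-exposes the data under a neutral path inside a private mount namespace; depending on the filesystem, the device-relative root in mountinfo may still leak through, so treat it as an experiment rather than a fix. All paths are the ones from the report above:

```shell
# Hypothetical sketch: launch podman from an intermediate mount namespace
# in which the Vault data appears under a neutral path (/mnt here).
sudo unshare --mount sh -c '
  # bind the real tree to a neutral location, visible only in this namespace
  mount --bind /var/opt/vault /mnt
  exec podman run \
    --hostname=vault \
    --cap-add=IPC_LOCK \
    --volume /mnt/file:/vault/file \
    --volume /mnt/config:/vault/config \
    vault server
'
```

Because the namespace dies with the shell, a restarted container would no longer find the /mnt bind mount, which is why this approach is fragile.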
We don't have a mailing list yet, so this is probably the best place for such a discussion.
I guess the best thing to do then is to take this up with the Vault developers and try to understand how they end up with that path. I'll probably also try it on Docker to see whether it has the same behaviour.
BTW, we are working on the mailing list. Stay tuned.
I guess we can close this issue; there is not much we can do on the podman side.