I have a webserver running on localhost:
$ curl http://localhost:5000
Hello World!
I create a container that is on a network. I'd like the container to be able to reach the localhost webserver.
$ podman network create mynet
$ podman run -ti --network mynet busybox /bin/sh
/ # wget http://10.0.2.2:5000
Connecting to 10.0.2.2:5000 (10.0.2.2:5000)
wget: can't connect to remote host (10.0.2.2): Network is unreachable
The --network slirp4netns:allow_host_loopback=true option (added in https://github.com/containers/podman/commit/7722b582b4f09df64fb55e3ab9669392754ce75c) allows a container to access the host's localhost, but it can't be combined with a named --network:
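For contrast, here is a sketch of the flag used on the default rootless slirp4netns network (no user-defined --network), where the host's loopback does become reachable; the service on port 5000 is the webserver from the top of the issue:

```shell
# Sketch: allow_host_loopback on the default slirp4netns network (no CNI net).
# 10.0.2.2 is slirp4netns's address for the host's loopback interface.
podman run --rm --network slirp4netns:allow_host_loopback=true busybox \
  wget -qO- http://10.0.2.2:5000
```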
$ podman run -ti --network mynet busybox /bin/sh
/ # ping other_container
PING other_container (10.88.23.8): 56 data bytes
64 bytes from 10.88.23.8: seq=0 ttl=64 time=0.098 ms
64 bytes from 10.88.23.8: seq=1 ttl=64 time=0.107 ms
^C
--- other_container ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.098/0.102/0.107 ms
/ # exit
$ podman run -ti --network mynet --network slirp4netns:allow_host_loopback=true busybox /bin/sh
/ # ping other_container
ping: bad address 'other_container'
cc @AkihiroSuda @giuseppe
Why not just use
podman run --net=host ...
If this is just a flag that you would pass to slirp4netns, then I think we would consider it.
@AkihiroSuda WDYT? Want to open a PR to make this happen?
I think we have two options at this point:
1. Allow the --network flag to be passed multiple times, so we can have --net slirp4netns:opt1=val1,opt2=val2 --net cninet1,cninet2
2. Add a separate flag for network options, so --net cninet1,cninet2 --network-opt slirp4netns:opt1=val1,opt2=val2 or similar

I would prefer option 1.
The infra is launched per user, not per pod, so the slirp options need to be global containers.conf fields
podman run --net=host ...
Because I don't want to have all pods sharing the same port space.
The infra is launched per user, not per pod, so the slirp options need to be global containers.conf fields
I don't understand what this means in practice. What happens if the same user launches one pod without slirp4netns:allow_host_loopback=true and then one pod with the argument?
Maybe it is an option to handle localhost access at the network level: when creating a network add an option that indicates localhost should be accessible as a host on the new network?
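A hypothetical CLI shape for this proposal; the allow_host_loopback option on podman network create is invented here for illustration and did not exist at the time of this discussion:

```shell
# Hypothetical: mark the network itself as allowing host-loopback access.
podman network create --opt allow_host_loopback=true mynet
# Containers on mynet could then reach the host at its slirp4netns alias:
podman run --rm --network mynet busybox wget -qO- http://10.0.2.2:5000
```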
What happens if the same user launches one pod without slirp4netns:allow_host_loopback=true and then one pod with the argument?
This isn't possible because all the CNI networks and the CNI pods share the singleton(-per-user) slirp4netns (and rootless-cni-infra) instance.
when creating a network add an option that indicates localhost should be accessible as a host on the new network?
SGTM, but the implementation would be slightly complicated like this:
1. slirp4netns for rootless-cni-infra would be launched with allow_host_loopback=true
2. rootless-cni-infra would be created with an iptables rule that blocks connections to 10.0.2.2 by default
3. When a network is created with the loopback option, the iptables rule should be disabled.
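In iptables terms, the sketch above might look like the following; the exact chain and rule are assumptions for illustration, not code from an implementation:

```shell
# Inside the rootless-cni-infra namespace: block the host-loopback alias
# (10.0.2.2) by default...
iptables -A OUTPUT -d 10.0.2.2/32 -j REJECT
# ...and delete that rule when a network is created with the loopback option.
iptables -D OUTPUT -d 10.0.2.2/32 -j REJECT
```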
I like that it is granular at the network level (vs. affecting all containers), and that it is controllable from the CLI.
@AkihiroSuda I don't find docs for this: what should I put in $HOME/.config/containers/containers.conf to set slirp4netns:allow_host_loopback=true?
Not implemented yet; the conversation above is the discussion toward implementing it.
@rhatdan @mheon what are your thoughts on the suggested approach?
when creating a network add an option that indicates localhost should be accessible as a host on the new network
@AkihiroSuda provided some info about what the implementation could look like: https://github.com/containers/podman/issues/7888#issuecomment-703515751
@AkihiroSuda I will add a flag to containers.conf to allow users to specify that they want this on by default.
Then we can use this flag to tell rootless podman to setup slirp4netns to share loopback by default. Does this sound good to you?
SGTM
@rhatdan can you make some time for adding this flag?
Workaround: This NoRouter manifest allows a container to access port 80 of the host as 127.0.42.100:8080
hosts:
  host:
    vip: "127.0.42.100"
    ports: ["8080:127.0.0.1:80"]
  podman:
    cmd: "podman exec -i some-container norouter"
    vip: "127.0.42.101"
$ norouter a.yaml
@ashley-cui Now that containers.conf is updated, can you complete this feature?
@ashley-cui https://github.com/containers/common/releases/tag/v0.30.0 is ready to vendor into Podman.
@rhatdan If it's just wiring in containers.conf, then I can try to knock it out this afternoon
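For reference, a hedged guess at what the containers.conf wiring could look like once vendored; the [engine] table and the network_cmd_options key are assumptions based on the slirp4netns option syntax used in this thread, not confirmed documentation:

```toml
# ~/.config/containers/containers.conf (assumed key name)
[engine]
network_cmd_options = ["allow_host_loopback=true"]
```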