In _oc cluster up_, pods can't resolve public DNS records. This is most probably an _iptables_-related issue, because after running iptables -F before oc cluster up, it works as it's supposed to.
For me this only happens on _Fedora 24_; other people like @josefkarasek are hitting the same issue there. On my second laptop (Arch Linux), everything works fine.
@csrwng This is the issue we talked about on IRC. I am happy to provide more information, just let me know what you need.
I am running latest master (v1.3.0-alpha.2+9ac6923-dirty), although I don't believe it makes a difference.
I need to investigate whether there's something that cluster up can do to detect that iptables is not set up correctly and at least warn you.
I installed a minimal F24 VM, scp'd oc over, ran oc cluster up, and hit an issue @csrwng says may be this one. I did have to run:
firewall-cmd --zone=public --add-port=8443/tcp --permanent
firewall-cmd --reload
to make the web console reachable after the cluster came up.
I had the same issue with DNS on Fedora 23. Once I ran iptables -F I was all good.
Hit this same issue on Fedora 24. Doing iptables -F before oc cluster up resolved the DNS issues I was seeing in an S2I build when cloning from github.com
I'm having the same issue, but this also happens with docker images so I reported this to Fedora at:
https://bugzilla.redhat.com/show_bug.cgi?id=1394474
@javilinux did flushing iptables help?
@csrwng Yes, flushing iptables solved the issue.
@csrwng I had this reported to me in person several times last week when people were trying oc cluster up :-) I don't think we can solve this easily without messing with the iptables rules ourselves. Perhaps we can add a check and fail if iptables rules that prevent DNS from working inside a container are in place.
@mfojtik that's the thing, how do we know which iptables rules are doing this ?
@csrwng we can run a "probe" or something to check 1) that you can run pods, and 2) that DNS is set up. If a check fails, we can suggest flushing the iptables rules, since every firewall config is site-specific :)
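A host-side version of that check could be sketched roughly like this. The chain name and the REJECT/DROP heuristic are assumptions (firewalld distros typically reject unmatched INPUT traffic, which is what blocks pod DNS here), not a definitive detection:

```shell
#!/bin/sh
# Hedged sketch: before starting the cluster, look for REJECT/DROP rules
# in the INPUT chain that commonly block pod-to-host DNS traffic on
# firewalld-managed hosts. The heuristic is an assumption, not exhaustive.
check_input_rules() {
  # iptables -S prints rules in iptables-save syntax; needs root.
  iptables -S INPUT 2>/dev/null | grep -Eq -- '-j (REJECT|DROP)'
}

if check_input_rules; then
  echo "Found REJECT/DROP rules on INPUT; pod DNS may be blocked." >&2
  echo "Consider opening 53/udp and 8443/tcp with firewall-cmd." >&2
fi
```

This would only warn, not modify anything, which sidesteps the objection to flushing rules on the user's behalf.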
@csrwng @mfojtik I think flushing iptables could be a non-starter for some users.
It seems very drastic for what is hopefully a small set of rules around DNS.
Is it possible to limit it to some specific commands to disable rules that affect DNS? (if that is what the problem is)
@david-martin the problem is not specifically DNS. The created pods can't access the outside network at all in some cases.
I reproduced this issue on an F25 host with libvirt installed but disabled. Here is my workflow and data:

1. sudo iptables-save. Output of the initial state is here: iptables-pre.txt
2. oc cluster up, then run oc new-app <builder_image>~<github_url>. The build fails because "github.com" cannot be resolved by the builder image.
3. oc cluster down, then run sudo iptables -F.
4. sudo iptables-save. Output of the flushed state is here:
5. oc cluster up and oc new-app <builder_image>~<github_url>. The "github.com" domain resolves correctly and the build succeeds.

@nhr so with a little bit of tinkering it looks like things are getting dropped along the INPUT -> INPUT_ZONES -> IN_FedoraWorkstation path. I was able to get things going by adding a rule to allow input traffic from docker0:
iptables -A IN_FedoraWorkstation_allow -i docker0 -j ACCEPT
However, FedoraWorkstation is a firewalld zone, so I think the best approach is to update the firewalld rules with firewall-cmd. Without actually testing it, I would say things would work if you allow traffic from 172.30.0.0/16 or from interface docker0, or even just allow traffic to ports 8443, 8053, and 53.
I will dig a little bit more and come up with a good recommendation on the best approach. Thx for the help.
The simplest fix I have so far is to open the 8443/tcp and 53/udp ports on your default zone:
firewall-cmd --add-port=8443/tcp --add-port=53/udp
and if you're happy doing that, then make it permanent:
firewall-cmd --permanent --add-port=8443/tcp --add-port=53/udp
If you want to limit traffic by source ip, you could create a new zone:
firewall-cmd --permanent --new-zone=openshift
firewall-cmd --permanent --zone=openshift --add-source=172.0.0.0/8
firewall-cmd --permanent --zone=openshift --add-port=8443/tcp --add-port=53/udp
firewall-cmd --reload
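To confirm the new zone took effect, listing it should show the source and ports added above (the zone name "openshift" comes from those commands; the `command -v` guard is just defensive, since firewall-cmd isn't present everywhere):

```shell
# Hedged usage sketch: verify the "openshift" zone created above.
# Output should list 172.0.0.0/8 under "sources" and 8443/tcp 53/udp
# under "ports".
if command -v firewall-cmd >/dev/null 2>&1; then
  firewall-cmd --zone=openshift --list-all
fi
```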
> The simplest fix I have so far is to open the 8443/tcp and 53/udp ports on your default zone
Is there some way to add this to the other rules that oc cluster up applies? The post-flush iptables output shows a bunch of new rules that are introduced when we run the command, so I would gather that we already have instrumentation in place to modify iptables...
> Is there some way to add this to the other rules that oc cluster up applies?
Hmm, I'm not sure that we should be opening ports for you on your firewall. The other rules that are created are created by the kubelet and are mainly there to deal with services, but I don't believe they expose anything externally.
One thing we could do though is spin up a pod, check whether it can access the master API endpoint and the DNS server. If it can't, then tell you that you need to allow that traffic (via iptables rules, firewall-cmd, or whatever else).
/cc @smarterclayton
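The pod-based check described above might look roughly like this. The busybox image, pod name, and oc flags are assumptions; adapt them to your cluster:

```shell
#!/bin/sh
# Hedged sketch: start a throwaway pod and see whether it can resolve an
# external name. If not, suggest a fix instead of changing the firewall.
probe_pod() {
  # $1 = command to run inside the pod; --rm cleans the pod up afterwards.
  oc run net-probe --rm -i --restart=Never \
    --image=busybox -- sh -c "$1" >/dev/null 2>&1
}

check_cluster_net() {
  if probe_pod 'nslookup github.com'; then
    echo "DNS from pods: OK"
  else
    echo "DNS from pods is blocked; you may need to allow 53/udp" \
         "(e.g. firewall-cmd --add-port=53/udp)" >&2
  fi
}
```

The same `probe_pod` helper could be reused to check reachability of the master API endpoint, keeping the tool read-only with respect to the host firewall.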
Doing a check sgtm