Website: Issue with k8s.io/docs/tutorials/kubernetes-basics/explore/explore-interactive/

Created on 11 Dec 2019 · 23 comments · Source: kubernetes/website

This is a Bug Report

Problem:
Cannot run this command:
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/

It returns: Error trying to reach service: 'dial tcp 172.17.0.6:80: connect: connection refused'

Steps to reproduce:
I followed the instructions from this website: https://kubernetes.io/docs/tutorials/kubernetes-basics/

However, instead of using the simulator, I set up the environment on my own computer. Everything runs fine (all commands from "Creating a Cluster" and "Deploy an App" executed perfectly). But when I run curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/ I get the error mentioned above.

One thing I noticed when running "kubectl describe pods" is that the HostPort and Port are none, which is different from the simulation (Port: 0 and HostPort: 8080).

Please kindly advise.



All 23 comments

One of the steps asks you to run:

echo -e "\n\n\n\e[92mStarting Proxy. After starting it will not output a response. Please click the first Terminal Tab\n"; 
kubectl proxy

Did that go OK? You need to _leave that proxy running_; maybe the page should make that clearer for any reader following along from home.

What's the environment that you're running minikube on @vietnguyen1254 ?

Thanks for the answer @sftim

Yes, I leave the proxy running. It returns JSON when I call curl http://localhost:8001/version. But when I call the pod, it returns: Error trying to reach service: 'dial tcp 172.17.0.6:80: connect: connection refused'

I am running CentOS 8 on vSphere.

I don't know, but I think the issue is with the port? I can see that both the HostPort and Port of the container are none in my cluster, while in the simulator they show 0 and 8080/TCP.

When I open a bash session inside the container and call server.js, it runs smoothly.

I'm wondering what kind of CNI you have installed into your cluster @vietnguyen1254
I think the documentation is accurate though. These guides assume you used Minikube to set up your cluster.

/triage support

I got the same issue on Ubuntu 18.04 64-bit with a local install. I also keep the proxy running and, like @vietnguyen1254, when I call server.js from a bash session inside the container, it runs smoothly.

I didn't install anything in particular for CNI (I wasn't aware of it before reading this thread)

minikube version

minikube version: v1.6.1
commit: 42a9df4854dcea40ec187b6b8f9a910c6038f81a

kubectl version

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/

returns

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubernetes-bootcamp-69fbc6f4cf-tfmfs",
    "generateName": "kubernetes-bootcamp-69fbc6f4cf-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/kubernetes-bootcamp-69fbc6f4cf-tfmfs",
    "uid": "998fb172-c2e0-4767-8f74-6249df0c1ed4",
    "resourceVersion": "41194",
    "creationTimestamp": "2019-12-14T01:58:10Z",
    "labels": {
      "app": "kubernetes-bootcamp",
      "pod-template-hash": "69fbc6f4cf"
    },
    "ownerReferences": [
      {
        "apiVersion": "apps/v1",
        "kind": "ReplicaSet",
        "name": "kubernetes-bootcamp-69fbc6f4cf",
        "uid": "320e44c3-d5ba-4053-b1d0-373b46904035",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-6kbzh",
        "secret": {
          "secretName": "default-token-6kbzh",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "kubernetes-bootcamp",
        "image": "gcr.io/google-samples/kubernetes-bootcamp:v1",
        "resources": {

        },
        "volumeMounts": [
          {
            "name": "default-token-6kbzh",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "IfNotPresent"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "minikube",
    "securityContext": {

    },
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0,
    "enableServiceLinks": true
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-12-14T01:58:10Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-12-14T01:58:25Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-12-14T01:58:25Z"
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-12-14T01:58:10Z"
      }
    ],
    "hostIP": "192.168.39.82",
    "podIP": "172.17.0.6",
    "podIPs": [
      {
        "ip": "172.17.0.6"
      }
    ],
    "startTime": "2019-12-14T01:58:10Z",
    "containerStatuses": [
      {
        "name": "kubernetes-bootcamp",
        "state": {
          "running": {
            "startedAt": "2019-12-14T01:58:24Z"
          }
        },
        "lastState": {

        },
        "ready": true,
        "restartCount": 0,
        "image": "gcr.io/google-samples/kubernetes-bootcamp:v1",
        "imageID": "docker-pullable://gcr.io/google-samples/kubernetes-bootcamp@sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af",
        "containerID": "docker://513b1c1dc9f2cc0db759d9233d0fe6048e6e9176f426435d79668494cc82d2c7",
        "started": true
      }
    ],
    "qosClass": "BestEffort"
  }
}
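Worth noting in the output above: the container entry has no `ports` field at all. As a quick self-contained check (the dict below is a trimmed excerpt of the JSON above, not live cluster output):

```python
# Trimmed excerpt of the pod JSON above (assumption: just enough to show
# the missing field; a real check would parse `kubectl get pod -o json`)
pod = {
    "spec": {
        "containers": [
            {
                "name": "kubernetes-bootcamp",
                "image": "gcr.io/google-samples/kubernetes-bootcamp:v1",
            }
        ]
    }
}

for c in pod["spec"]["containers"]:
    declared = c.get("ports", [])
    print(c["name"], "declares ports:", declared or "none")
# -> kubernetes-bootcamp declares ports: none
```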

Tell me if you need more output to help with troubleshooting.

Regards

@sftim Sorry for the late response.
I have not installed any CNI, and I wasn't aware of it either...

Hmm, this sounds like there might be an issue with the advice on the page.
Just in case:
/remove-triage support

OK. Got a clue but need some more feedback to solve the issue.
When I connect to the pod with

kubectl exec -ti $POD_NAME bash

then I run curl localhost:8080 and it's fine.

Then, I change within the pod the call to use

export POD_NAME="mypodname" # Fake; use your own, then run:
curl http://172.17.0.6:8080/api/v1/namespaces/default/pods/$POD_NAME/proxy/

And that's fine too. Then I re-read the error from calling through the proxy (i.e. from outside the Kubernetes network): dial tcp 172.17.0.6:80: connect: connection refused

So, I do

curl http://172.17.0.6:80/api/v1/namespaces/default/pods/$POD_NAME/proxy/

Then, I get the same error:

Failed to connect to 172.17.0.6 port 80: Connection refused

The conclusion is that kubectl proxy maps external port 8001 to port 80 of the pod instead of port 8080, hence the error.

So, what I need to know now is how to change this behavior so that kubectl proxy correctly targets port 8080. Is that done within the application, at the pod level, at the kubectl proxy level, or elsewhere?

Thanks

Ok. I've solved the issue by using http://localhost:8001/api/v1/namespaces/default/pods/kubernetes-bootcamp-69fbc6f4cf-tfmfs:8080/proxy/ instead of the officially recommended URL.
Although "it works", I have no clue why I have to change the recipe compared to the official one. My only (untested) hypothesis is that instantiating a Kubernetes cluster while a port is already in use on the host machine may change the behavior.

@ThomasG77 thanks for sharing; exactly the same here, and solved by what you said: specifying the 8080 port explicitly.

Still wondering why it works inside the tutorial terminal but fails on my Mac.

Same here; I finally found a solution. I dropped the idea of learning Kubernetes a few times because every time I'd end up stuck here.

Any idea why a port is not provided when we create the deployment? I really don't understand why it doesn't work locally (Linux/Ubuntu) with minikube when we don't provide the port, yet works fine in the simulation.

This should be updated in the documentation, or the image should somehow declare the port itself. As I'm new to K8s I don't know how it discovers ports, but it should spot the 8080 one.

Thanks for the solution, @ThomasG77 - I can finally continue my discovery of Kubernetes.

I also had the same issue on Windows 10.
I had been struggling for two days until I reached the solution from @ThomasG77 (thank you).
It is really frustrating when you want to learn something from the official manual and it is simply wrong/incomplete. And it has been in this state for a while; this ticket was opened several months ago.
I think the manual should say that in some circumstances the port also needs to be given. Or just replace the example in the tutorial so that others don't waste days on this issue.

I think the issue is that we haven't specified the port while creating the deployment. So I tried

kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port 8080

and then

http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/

works

Alternatively, you can use kubectl edit deployment/kubernetes-bootcamp after running kubectl create deployment and add this to the containers spec:

        ports:
        - containerPort: 8080
          protocol: TCP

After adding this, the container spec should look like so:

    spec:
      containers:
      - image: gcr.io/google-samples/kubernetes-bootcamp:v1
        imagePullPolicy: IfNotPresent
        name: kubernetes-bootcamp
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File

Once I did that, I was able to issue the curl command successfully:

$ curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-fcc5bfb48-w27sh | v=1

The interactive tutorial must be doing something like this (or using kubectl run instead of kubectl create deployment as noted above by @namaggarwal) when setting up the deployment behind the scenes. The instructions for this tutorial should be updated to clarify how it's exposing the deployment via a NodePort so that folks running the interactive tutorial commands in their own development environments don't continue to stumble over this issue.
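For anyone who prefers declarative manifests over the interactive edit, an equivalent minimal Deployment that declares the port up front might look like this (a sketch; the names mirror the tutorial, the rest is standard Deployment boilerplate):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-bootcamp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-bootcamp
  template:
    metadata:
      labels:
        app: kubernetes-bootcamp
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: gcr.io/google-samples/kubernetes-bootcamp:v1
        ports:
        - containerPort: 8080   # declaring the port here avoids the kubectl edit step
          protocol: TCP
```

Applying this with kubectl apply -f should leave the pod with the containerPort already set.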

@ryangsteele - I was able to edit the port config on the container spec, and after my pods reloaded, the cURL statement works fine.

Note to others: after editing your container spec you will need to re-query your pod name and re-export the environment variable for the curl command to work correctly.

export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')

In fact, if you use
kubectl edit deployment/kubernetes-bootcamp
in the browser terminal window of the tutorial, which opens vim with the deployment description, you see that:

ports:
- containerPort: 8080
  protocol: TCP

is set

Ok. I've solved the issue by using http://localhost:8001/api/v1/namespaces/default/pods/kubernetes-bootcamp-69fbc6f4cf-tfmfs:8080/proxy/ instead of the officially recommended URL.
Although "it works", I have no clue why I have to change the recipe compared to the official one. My only (untested) hypothesis is that instantiating a Kubernetes cluster while a port is already in use on the host machine may change the behavior.

before
http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/

after
http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/

All you have to do is append the port number 8080 after the $POD_NAME.

Could this be an issue with the app itself? The container exposes port 8080 instead of 80, which would be expected as the default HTTP port.
As I understand it, kubectl proxy forwards to port 80, but there is no response from the pod on that port.
I don't yet know the relation between pod and container ports.

ryangsteele's comment https://github.com/kubernetes/website/issues/18079#issuecomment-622011285
points out that the container port definition is missing, so it points to the container using 8080 instead of 80.
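As a rough mental model, pieced together from the observations in this thread (an assumption, not the actual apiserver source), the port the proxy dials seems to resolve like this:

```python
def proxied_port(url_port, container_ports):
    """Rough model of which port the apiserver pod proxy dials.

    Assumption reverse-engineered from this thread, not real apiserver
    code: an explicit port in the proxy URL wins; otherwise the first
    declared containerPort is used; with no declared ports, the dial
    falls back to port 80.
    """
    if url_port is not None:
        return url_port
    if container_ports:
        return container_ports[0]
    return 80

# The three situations seen in the thread:
print(proxied_port(None, []))      # no port anywhere -> dials 80, refused
print(proxied_port(8080, []))      # ':8080' appended to the URL -> works
print(proxied_port(None, [8080]))  # containerPort declared -> works
```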

It works in the Kubernetes emulator on the website using the same deployment image, so this is a bit strange.

Edit:

Just found it; it is in the app itself:

www.listen(8080, function ()

root@kubernetes-bootcamp-765bf4c7b4-gjznx:/# cat server.js
var http = require('http');
var requests = 0;
var podname = process.env.HOSTNAME;
var startTime;
var host;
var handleRequest = function(request, response) {
  response.setHeader('Content-Type', 'text/plain');
  response.writeHead(200);
  response.write("Hello Kubernetes bootcamp! | Running on: ");
  response.write(host);
  response.end(" | v=1\n");
  console.log("Running On:", host, "| Total Requests:", ++requests, "| App Uptime:", (new Date() - startTime)/1000, "seconds", "| Log Time:", new Date());
}
var www = http.createServer(handleRequest);
www.listen(8080, function () {
  startTime = new Date();
  host = process.env.HOSTNAME;
  console.log("Kubernetes Bootcamp App Started At:", startTime, "| Running On: ", host, "\n");
});

Funny enough, the tutorial points you to server.js and the right port once you get into the container's bash.

curl localhost:8080

But I started investigating because I have been running it on localhost in minikube.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
