Eksctl: Prompt for Username and Password when running "kubectl get all"

Created on 15 Aug 2018  ·  15 comments  ·  Source: weaveworks/eksctl

I used the following command to create my EKS cluster:
% eksctl create cluster --name=eks-c5-xlarge-5 --nodes=5 --node-type=c5.xlarge

Then I was prompted to enter Username and Password when trying to access the cluster by $ kubectl get all. I have no idea what my Username and Password are for the cluster.

kind/help

Most helpful comment

I had the same issue. It was caused by a bad kubectl cluster context: make sure KUBECONFIG points to ~/.kube/config in your .bashrc, then verify your context in the ~/.kube/config file.

All 15 comments

@gmflau - I'm guessing you weren't prompted for a username/password when you ran eksctl create cluster?

Could you confirm what version you are using? (eksctl version)

Also, can you confirm kubectl is using your eksctl context? If you run kubectl config current-context, you should see a context name ending in eksctl.io.

It did not prompt for a username/password when running the "eksctl create cluster" command. The output was as follows:
gilbertlau:~$ eksctl create cluster --name=eks-c5-xlarge-5 --nodes=5 --node-type=c5.xlarge
2018-08-15T12:21:27-07:00 [ℹ] setting availability zones to [us-west-2c us-west-2b us-west-2a]
2018-08-15T12:21:27-07:00 [ℹ] importing SSH public key "/Users/gilbertlau/.ssh/id_rsa.pub" as "eksctl-eks-c5-xlarge-5-83:c8:52:95:cc:f8:0b:3a:1c:12:6d:88:69:87:ca:65"
2018-08-15T12:21:27-07:00 [ℹ] creating EKS cluster "eks-c5-xlarge-5" in "us-west-2" region
2018-08-15T12:21:27-07:00 [ℹ] creating ServiceRole stack "EKS-eks-c5-xlarge-5-ServiceRole"
2018-08-15T12:21:27-07:00 [ℹ] creating VPC stack "EKS-eks-c5-xlarge-5-VPC"
2018-08-15T12:34:10-07:00 [✔] created control plane "eks-c5-xlarge-5"
2018-08-15T12:34:10-07:00 [ℹ] creating DefaultNodeGroup stack "EKS-eks-c5-xlarge-5-DefaultNodeGroup"
2018-08-15T12:37:51-07:00 [✔] created DefaultNodeGroup stack "EKS-eks-c5-xlarge-5-DefaultNodeGroup"
2018-08-15T12:37:51-07:00 [✔] all EKS cluster "eks-c5-xlarge-5" resources has been created
2018-08-15T12:37:51-07:00 [✔] saved kubeconfig as "/Users/gilbertlau/.kube/config"
2018-08-15T12:37:51-07:00 [ℹ] the cluster has 0 nodes
2018-08-15T12:37:51-07:00 [ℹ] waiting for at least 5 nodes to become ready
2018-08-15T12:38:13-07:00 [ℹ] the cluster has 5 nodes
2018-08-15T12:38:13-07:00 [ℹ] node "ip-192-168-125-xxx.us-west-2.compute.internal" is ready
2018-08-15T12:38:13-07:00 [ℹ] node "ip-192-168-144-yyy.us-west-2.compute.internal" is ready
2018-08-15T12:38:13-07:00 [ℹ] node "ip-192-168-157-zzz.us-west-2.compute.internal" is ready
2018-08-15T12:38:13-07:00 [ℹ] node "ip-192-168-217-aaa.us-west-2.compute.internal" is ready
2018-08-15T12:38:13-07:00 [ℹ] node "ip-192-168-93-bbb.us-west-2.compute.internal" is not ready
2018-08-15T12:38:14-07:00 [✖] parsing kubectl version string "0.0.0": Version string empty
2018-08-15T12:38:14-07:00 [ℹ] cluster should be functional despite missing (or misconfigured) client binaries
2018-08-15T12:38:14-07:00 [✔] EKS cluster "eks-c5-xlarge-5" in "us-west-2" region is ready

$ eksctl version
2018-08-16T11:11:36-07:00 [ℹ] versionInfo = map[string]string{"gitCommit":"dca39f69e893b89d67156635c483b3f3e8236407", "gitTag":"0.1.0", "builtAt":"2018-08-02T13:58:30Z"}

$ kubectl config current-context
gilbert.[email protected]

But when I ran $ kubectl get all, I got the following prompt:
Please enter Username:

@gmflau - thanks for sending that through. It sounds like you may have an old version of kubectl. Could you check the version you have by running:

kubectl version
2018-08-15T12:38:14-07:00 [✖] parsing kubectl version string "0.0.0": Version string empty

That line indicates that there is certainly something odd with kubectl. Is it an alias or do you have some kind of wrapper?

You need version 1.10.x, the latest is 1.10.7.
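As an illustration of that check, here is a small shell sketch that parses a kubectl version string and compares it against the 1.10 minimum. The kubectl call is commented out and a stand-in string is used instead, since a broken or shadowed binary is exactly what's being diagnosed here:

```shell
# Hedged sketch: is this kubectl client new enough for EKS (needs 1.10+)?
# ver=$(kubectl version --client --short 2>/dev/null)
ver="Client Version: v1.9.2"        # stand-in value for the demo

v=${ver##*v}                        # strip up to the last "v" -> 1.9.2
major=${v%%.*}                      # -> 1
minor=${v#*.}; minor=${minor%%.*}   # -> 9

if [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 10 ]; }; then
  echo "kubectl $v supports EKS exec auth"
else
  echo "kubectl $v is too old; upgrade to 1.10.x or newer"
fi
# -> kubectl 1.9.2 is too old; upgrade to 1.10.x or newer
```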

I will upgrade my kubectl to 1.10.x or higher and give it a try.


@gmflau - did you manage to upgrade and re-test? Can we close the issue?

@gmflau - did upgrading kubectl help with this issue?

Yes. You can close the issue now.


Thanks @gmflau. Closing.

I experienced this issue today too.

Leaving these commands here for reference for anyone else who has kubectl installed both via Homebrew and Docker for Mac.

$ kubectl version
Please enter Username: 
$ kubectl config current-context
(output matches expected for eksctl)

In my case, kubernetes-cli in Homebrew was up to date at v1.11.x, but which kubectl showed that kubectl resolved to an outdated v1.9.2 symlink installed by Docker for Mac edge.

$ brew doctor
...
Warning: You have unlinked kegs in your Cellar
Leaving kegs unlinked can lead to build-trouble and cause brews that depend on
those kegs to fail to run properly once built. Run `brew link` on these:
  kubernetes-cli
...

I'd been holding on to an old build of DfM edge, since its Kubernetes support has been somewhat unstable across upgrades. I'm trying the latest DfM edge build now.

I'm not sure about the best way to handle having two different local kubectl versions like this, but as a temporary fix I've overwritten the symlink that currently points to the DfM kubectl.

$ brew link --overwrite --dry-run kubernetes-cli
Would remove:
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
$ brew link --overwrite kubernetes-cli
Linking /usr/local/Cellar/kubernetes-cli/1.11.3... 191 symlinks created

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-10T11:44:36Z", GoVersion:"go1.11", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:13:43Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

I wonder if I should restrict my client version to 1.10.x to match what's available on EKS. Anyone have thoughts on this?

I have my Homebrew installed under ${HOME}/Library/Local/Homebrew, so my PATH is set to
${HOME}/Library/Local/Homebrew/bin:${PATH} (i.e. $HOME/Library/Local/Homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin).
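To make the shadowing concrete, here is a self-contained sketch showing that PATH order alone decides which kubectl runs. The two stub scripts are stand-ins for the Homebrew copy and the symlink Docker for Mac drops into /usr/local/bin:

```shell
# Self-contained demo: PATH order decides which "kubectl" wins.
tmp=$(mktemp -d)
mkdir -p "$tmp/brew" "$tmp/dfm"
printf '#!/bin/sh\necho v1.11.3\n' > "$tmp/brew/kubectl"
printf '#!/bin/sh\necho v1.9.2\n'  > "$tmp/dfm/kubectl"
chmod +x "$tmp/brew/kubectl" "$tmp/dfm/kubectl"

# DfM directory first (the situation above): the old binary wins.
env PATH="$tmp/dfm:$tmp/brew:$PATH" kubectl    # -> v1.9.2

# Homebrew directory first (what brew link --overwrite achieves): fixed.
env PATH="$tmp/brew:$tmp/dfm:$PATH" kubectl    # -> v1.11.3

rm -rf "$tmp"
```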

By the way, I use stable Docker for Mac (18.06.0-ce-mac70), and it worked just fine. I recall having to go through multiple resets/reinstalls doing upgrades on unstable channel, but I believe most of those issues got fixed. In any case, the stable version comes with kubectl 1.10.3, so if you upgrade, you shouldn't have the compatibility issue any more.

Generally speaking, to your question of managing kubectl versions: most of the time, staying within a few minor versions works with no problem at all (once in a while you miss a handy new flag, but it's rarely a deal breaker). The reason EKS depends on 1.10 is the addition of client-side auth plugins, and that kind of hard requirement is not something I've seen happen very often. So I wouldn't worry long term.

Perhaps we could improve the Homebrew package to let users know of the caveat with Docker for Mac and Homebrew both using /usr/local/bin, but it would have to be added to the upstream formula.

Thanks, I'll give the stable channel a shot again.

Do you think it makes sense to add a note about this in the docs (README.md)? I'm happy to send a PR if so.

Yes, please propose what you think would be helpful - open a PR and we'll take it from there :)


I had the same issue. It was caused by a bad kubectl cluster context: make sure KUBECONFIG points to ~/.kube/config in your .bashrc, then verify your context in the ~/.kube/config file.
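A minimal sketch of that verification, without calling kubectl at all. The temp file and the context name are made-up stand-ins; in practice you'd inspect your real ~/.kube/config:

```shell
# Write a stand-in kubeconfig so the example is self-contained.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: v1
kind: Config
current-context: me@eks-demo.us-west-2.eksctl.io
EOF

# If KUBECONFIG is unset, kubectl falls back to ~/.kube/config.
echo "kubectl would read: ${KUBECONFIG:-$HOME/.kube/config}"

# The active context lives under "current-context:" in the file.
ctx=$(awk '/^current-context:/ {print $2}' "$cfg")
echo "current context: $ctx"   # for an eksctl cluster this ends in eksctl.io
rm -f "$cfg"
```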

I had a similar issue today:

  1. All kubectl commands were prompting for a username and password.
  2. kubectl had been working fine a few minutes before.
  3. I looked at what had changed recently.
  4. A new context and a new user had been created and then deleted.
  5. During the deletion, the .kube/config file was edited with vi.
  6. I had to delete the stale context and user and re-select a valid context:
    -bash-4.2$ kubectl version
    Please enter Username: ^C
    -bash-4.2$ kubectl config -h
    Modify kubeconfig files using subcommands like "kubectl config set current-context my-context"

    The loading order follows these rules:

    1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes
      place.
    2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for
      your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When
      a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the
      last file in the list.
    3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.

Available Commands:
current-context Displays the current-context
delete-cluster Delete the specified cluster from the kubeconfig
delete-context Delete the specified context from the kubeconfig
get-clusters Display clusters defined in the kubeconfig
get-contexts Describe one or many contexts
rename-context Renames a context from the kubeconfig file.
set Sets an individual value in a kubeconfig file
set-cluster Sets a cluster entry in kubeconfig
set-context Sets a context entry in kubeconfig
set-credentials Sets a user entry in kubeconfig
unset Unsets an individual value in a kubeconfig file
use-context Sets the current-context in a kubeconfig file
view Display merged kubeconfig settings or a specified kubeconfig file

Usage:
kubectl config SUBCOMMAND [options]

Use "kubectl --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
-bash-4.2$ kubectl config delete-context k8s
warning: this removed your active context, use "kubectl config use-context" to select a different one
deleted context k8s from ~/.kube/conf
-bash-4.2$ kubectl version
Error in configuration: context was not found for specified context: k8s
-bash-4.2$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
kubernetes-admin@kubernetes kubernetes kubernetes-admin
-bash-4.2$ kubectl config set-context kubernetes-admin@kubernetes
Context "kubernetes-admin@kubernetes" modified.
-bash-4.2$ kubectl version
Error in configuration: context was not found for specified context: k8s
-bash-4.2$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
-bash-4.2$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:38:50Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:30:47Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Finally, it's working fine.
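For reference, loading rule 2 quoted in the help text above says $KUBECONFIG may hold a colon-separated list of files that kubectl merges in order. A pure-shell sketch of how such a list splits (the paths here are hypothetical, for illustration only):

```shell
# KUBECONFIG as a colon-separated list of kubeconfig paths.
KUBECONFIG_DEMO="$HOME/.kube/config:$HOME/.kube/eks-config"

# Split the list on ":" the same way kubectl does, and show each entry.
old_ifs=$IFS
IFS=':'
for p in $KUBECONFIG_DEMO; do
  echo "would merge: $p"
done
IFS=$old_ifs
```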
