Kops: kubefed v1.7.2 Unable to join cluster to federation - error: server does not support API version "federation/v1beta1"

Created on 2 Aug 2017 · 15 comments · Source: kubernetes/kops

I am working on creating a federation with three clusters created with kops. The clusters spin up and I am able to init the federation successfully. However, when I try to join one of the clusters I receive the error: server does not support API version "federation/v1beta1"

Here are the steps I took:

#Create the "master"
kops create cluster --zones=us-west-1a federation-master.example.com
kops edit cluster federation-master.example.com
 -- Update Kubernetes version 1.7.0 to 1.7.2 
kops update cluster federation-master.example.com --yes

#Create the two "slaves"
kops create cluster --zones=us-east-2a ohio-slave.example.com
kops edit cluster ohio-slave.example.com
 -- Update Kubernetes version 1.7.0 to 1.7.2 
kops update cluster ohio-slave.example.com --yes

kops create cluster --zones=us-west-2a oregon-slave.example.com
kops edit cluster oregon-slave.example.com
 -- Update Kubernetes version 1.7.0 to 1.7.2 
kops update cluster oregon-slave.example.com --yes
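The kops edit steps above open the cluster spec in an editor; the change is just the kubernetesVersion field. A rough sketch of the relevant fragment (field names from the kops v1alpha2 cluster API; all other generated fields are left alone):

```yaml
# Fragment of the cluster spec opened by `kops edit cluster <name>`.
# Only kubernetesVersion changes; everything else stays as kops generated it.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: federation-master.example.com
spec:
  kubernetesVersion: 1.7.2   # was 1.7.0
```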

#Initialize the federation
kubefed init testenv-federation --host-cluster-context=federation-master.example.com --dns-provider="aws-route53" --dns-zone-name="example.com."
Creating a namespace federation-system for federation system components... done
Creating federation control plane service..... done
Creating federation control plane objects (credentials, persistent volume claim)... done
Creating federation component deployments... done
Updating kubeconfig... done
Waiting for federation control plane to come up................................................................................ done
Federation API server is running at: a1<..>.us-west-1.elb.amazonaws.com

#Testing connectivity to the newly created federation
kubectl create ns ops --context=testenv-federation
kubectl get ns --context=testenv-federation
namespace "ops" created
NAME      STATUS    AGE
ops       Active    1s

#Joining first cluster to the federation
kubefed join ohio --host-cluster-context=federation-master.example.com --cluster-context=ohio-slave.example.com
error: server does not support API version "federation/v1beta1"

kops version
Version 1.7.0 (git-e04c29d)

kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:23:22Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:08:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

kubefed version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:23:22Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:08:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

This is not my first attempt at running federated kops clusters. I have successfully joined two "slave" clusters with kubefed v1.6.7 on top of clusters created with kops v1.6.2. There are some features in 1.7 that I would like to leverage, so I wanted to create a new federation with kubefed/kubectl v1.7.2.

Labels: area/documentation, good first issue, lifecycle/rotten

Most helpful comment

We really need a document about federation. Ran across this the other day https://kumorilabs.com/blog/k8s-10-setup-kubernetes-federation-different-aws-regions/

All 15 comments

I had the same error. Edit the cluster and set spec.kubeAPIServer.runtimeConfig.federation/v1beta1: "true".

It should look like this:

spec:
  kubeAPIServer:
    runtimeConfig:
      federation/v1beta1: "true"

Then apply the change with kops update cluster --yes. On an existing cluster you may also need kops rolling-update cluster --yes so the apiserver restarts with the new flag.

I hope this helps.

@david92rl - I tried updating my existing cluster as well as creating new clusters with the information you provided, but I am still receiving the same error: server does not support API version "federation/v1beta1". What version of kops and kubectl/kubefed are you using with these settings?

I am getting the same error when running two new clusters in GKE on version 1.7.2.

Executed commands:

$ kubectl config use-context federation-cluster-eu
$ kubefed init federation \
  --host-cluster-context=federation-cluster-eu \
  --dns-zone-name="xxxx.io." \
  --dns-provider="google-clouddns"

Creating a namespace federation-system for federation system components... done
Creating federation control plane service................ done
Creating federation control plane objects (credentials, persistent volume claim)... done
Creating federation component deployments... done
Updating kubeconfig... done
Waiting for federation control plane to come up................. done
Federation API server is running at: x.x.x.x

$ kubectl create namespace default --context=federation
namespace "default" created

$ kubectl config use-context federation-cluster-asia  
$ kubefed join federation --host-cluster-context=federation-cluster-eu
error: server does not support API version "federation/v1beta1"

I am not sure how I should edit the cluster as mentioned above:

$ kubectl get cluster --context=federation
No resources found.

Probably not related, but I had to use

gcloud config set container/use_client_certificate True
export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True

To connect to the clusters for the federation init to work.

Note: only just saw that this is the kops repo, so it might not be related to kops, because I have it with GKE.

@nmarshall-cst I'm on 1.7.0

Kubernetes: Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T22:55:19Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Kops: Version 1.7.0 (git-e04c29d)

BTW I'm running Kubernetes on AWS

Double check that your API server is running with federation/v1beta1=true set in the --runtime-config

kubectl -n kube-system describe pods -l k8s-app=kube-apiserver

You'll see it in Containers > kube-apiserver > Command
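The check above can be scripted. Here is a small sketch with a hypothetical helper that reads the apiserver command line on stdin and reports whether the flag is present:

```shell
# Hypothetical helper: reads the kube-apiserver command line on stdin and
# reports whether the federation/v1beta1 API group is enabled.
check_federation_api() {
  if grep -q 'runtime-config=[^ ]*federation/v1beta1=true'; then
    echo "federation/v1beta1 enabled"
  else
    echo "federation/v1beta1 NOT enabled"
  fi
}

# Live usage (requires a working kubectl context):
#   kubectl -n kube-system describe pods -l k8s-app=kube-apiserver | check_federation_api
```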

@david92rl I have the same issue as @nmarshall-cst. I added the config and started from scratch, but the issue still exists.

Result when running the above check:

Containers:
  kube-apiserver:
    Container ID:       docker://d8ae1e31c42bea283a1b8d5579f47e1e8ea8e5d4cd90bf6e0f18fb1e2f0d5f66
    Image:              gcr.io/google_containers/kube-apiserver:v1.7.5
    Image ID:           docker-pullable://gcr.io/google_containers/kube-apiserver@sha256:aa5674d9cfb2e7c445d10b25cda16f0db03455a1adf4550bdc9121dc7fd5b504
    Ports:              443/TCP, 8080/TCP
    Command:
      /bin/sh
      -c
      /usr/local/bin/kube-apiserver --address=127.0.0.1 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,ResourceQuota --allow-privileged=true --anonymous-auth=false --apiserver-count=3 --authorization-mode=RBAC --basic-auth-file=/srv/kubernetes/basic_auth.csv --client-ca-file=/srv/kubernetes/ca.crt --cloud-provider=aws --etcd-servers-overrides=/events#http://127.0.0.1:4002 --etcd-servers=http://127.0.0.1:4001 --insecure-port=8080 --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP --runtime-config=federation/v1beta1=true --secure-port=443 --storage-backend=etcd2 1>>/var/log/kube-apiserver.log 2>&1

My server & client versions:

kubefed version                                                                                                                                                                                             
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T08:56:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

kops version 1.7.0

I also run kops on AWS

In addition to @david92rl's suggestions:

kubectl get pods --namespace=federation-system --context=federation-master.example.com

Read the logs of the controller-manager. You may be missing role permissions, e.g.: error querying for DNS zones: AccessDenied: User
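For example, a small helper (names are hypothetical) that scans log output for that error:

```shell
# Hypothetical helper: scan controller-manager log output (on stdin) for
# the Route 53 permission error described above.
find_dns_access_errors() {
  grep 'error querying for DNS zones: AccessDenied' || echo "no AccessDenied errors found"
}

# Live usage (the deployment name follows the federation name given to kubefed init):
#   kubectl logs --namespace=federation-system --context=federation-master.example.com \
#     deploy/testenv-federation-controller-manager | find_dns_access_errors
```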

Create a policy like:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
              "route53:*"
            ],
            "Resource":"*"
        }
    ]
}

Attach the policy to your role. For production use, the actions and resources should be scoped more narrowly.

We really need a document about federation. Ran across this the other day https://kumorilabs.com/blog/k8s-10-setup-kubernetes-federation-different-aws-regions/

@chrislovecnm I just followed that tutorial. I didn't have to add the runtimeConfig (I guess because it's 1.7.6 instead of 1.7.2). However, I still had to add the role.
It's not a kops bug; it's a kubefed limitation.

By role you mean iam permissions?

Right, IAM policy. I will follow up with the exact required actions

To document in this thread you can add policies with kops as well.

https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md
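Per that doc, a sketch of how the Route 53 policy from earlier in the thread could be carried in the cluster spec via additionalPolicies (applied with kops edit cluster followed by kops update cluster --yes; narrow the actions for production):

```yaml
# Fragment for the kops cluster spec (see docs/iam_roles.md).
# Grants the master instances Route 53 access via an inline policy.
spec:
  additionalPolicies:
    master: |
      [
        {
          "Effect": "Allow",
          "Action": ["route53:*"],
          "Resource": ["*"]
        }
      ]
```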

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
