Zero-to-jupyterhub-k8s: Add support for additional cloud providers

Created on 14 Jun 2017  ·  81 comments  ·  Source: jupyterhub/zero-to-jupyterhub-k8s

If you're interested in support for this software on AWS, Jetstream, or other cloud providers, please let us know here... or even better, send us a Pull Request with your contributions to getting the code working on your desired cloud provider!

We so far have heard interest in supporting Jetstream using the OpenStack Magnum API, as well as using kubeadm.

We also have heard interest in supporting AWS. Here are some links provided to us by our AWS reps:

https://kubernetes.io/docs/getting-started-guides/aws/
https://aws.amazon.com/quickstart/architecture/heptio-kubernetes/

Labels: documentation, help wanted


All 81 comments

Thanks @aculich. For those that wish to help by submitting a PR, please limit changes that are vendor/cloud provider specific to its own section within https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/master/doc/source/create-k8s-cluster.rst file. We would like to keep the remainder of the documentation vendor agnostic. Thanks. Please let us know if you have questions.

@choldgraf and I tested Heptio based on pointers from our AWS rep, and @yuvipanda mentioned kops as a direction the open-source community is moving; however, it relies on having a DNS name already registered for its discovery process, which can get in the way of quick testing on an IP address.

note that we also had to disable RBAC (which is not desirable in the long-term) with our Heptio install: https://kubernetes.io/docs/admin/authorization/rbac/#permissive-rbac-permissions

There is more to do... and we'll ask for input from folks at the UCCSC AWS User Group meeting today.

Nice to see work happening with Heptio, @aculich and folks.

@rdodev, do you know who would be a good contact if we have additional questions? :sunny:

Hey @willingc happy to help and can be point person with any questions or issues relating to our AWS quickstart.

Thanks @rdodev. Good stuff happening at Heptio :smile:

FWIW I really need to get something like this working on AWS within a week or so...otherwise we'll need to switch to something else for the bootcamp in early September. @aculich do you have time to give it another go with me this week?

@rdodev would you have a chance to do a live-chat with @aculich and me as we try to get k8s running on AWS? I'm helping teach a bootcamp to a buncha neuroscientists in early September and was hoping to run a k8s-based jupyterhub on AWS!

@choldgraf I got the heptio tutorial https://aws.amazon.com/quickstart/architecture/heptio-kubernetes/ up and running the other day with no issues. I haven't had time to try with JupyterHub but kubectl and helm were working. Heptio's friday podcasts on YouTube are really good too. The first one basically walks you through the tutorial install.

Huh - that is the same one Aaron and I were using and we ran into a buncha problems in the end (that I of course don't remember now). I'll give it another shot soon though. Been wrestling with binder DNS records all morning :-)


FYI. I used the new VM option FWIW.

Great to see things are working as expected @willingc one thing worth highlighting is the fact that AWS QS clusters are not "production-grade" and are only meant for testing/staging. Would be glad to help productionize (sic) your environment if and when you folks are ready.

I've got things running up to the point of the helm install. I followed the heptio guide and got my kubernetes machines running. Helm + kubectl are also installed. Here's the error that I'm getting:

helm install jupyterhub/jupyterhub --version=v0.4 --name=kube --namespace=kube -f config.yaml

    Error: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "kube". (get namespaces kube)
helm version
    Client: &version.Version{SemVer:"v2.5.1", GitCommit:"7cf31e8d9a026287041bae077b09165be247ae66", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.5.1", GitCommit:"7cf31e8d9a026287041bae077b09165be247ae66", GitTreeState:"clean"}

Any ideas?

You should try a helm init again with the service account instructions in https://github.com/jupyterhub/zero-to-jupyterhub-k8s/pull/124
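For anyone landing here later, the service-account fix in that PR amounts to roughly the following (a sketch; the account name `tiller` follows the convention in the z2jh docs of the time):

```shell
# Create a service account for Tiller and grant it cluster-admin
# (names follow the zero-to-jupyterhub instructions; adjust as needed).
kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller

# Re-initialize Helm so Tiller runs under that service account.
helm init --service-account tiller --upgrade
```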


Oh you mean from that PR that I created and have already forgotten that I created? whoops ;-)

that fixes the namespace error...now helm is hanging on install:

helm install jupyterhub/jupyterhub --version=v0.4 --name=kube --namespace=kube -f config.yaml --debug
   [debug] Created tunnel using local port: '61697'

   [debug] SERVER: "localhost:61697"

   [debug] Original chart version: "v0.4"
   [debug] Fetched jupyterhub/jupyterhub to /home/choldgraf/.helm/cache/archive/jupyterhub-v0.4.0+fb6fc47.tgz

   [debug] CHART PATH: /home/choldgraf/.helm/cache/archive/jupyterhub-v0.4.0+fb6fc47.tgz

been stuck on that last one for like 10 minutes, ended with:

Error: timed out waiting for the condition

UPDATE: I got it working by running the permissive-RBAC command from https://kubernetes.io/docs/admin/authorization/rbac/#permissive-rbac-permissions

which @yuvipanda mentions makes the cluster insecure. I think there's a better solution coming soon, but I'm just putting this here for reference.
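For reference, the permissive-RBAC workaround on that docs page is a single clusterrolebinding that makes every service account a cluster admin, which is exactly why it's insecure:

```shell
# From the Kubernetes RBAC docs: grants cluster-admin to ALL service
# accounts. Fine for throwaway testing, unsafe for anything real.
kubectl create clusterrolebinding permissive-binding \
    --clusterrole=cluster-admin \
    --user=admin \
    --user=kubelet \
    --group=system:serviceaccounts
```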

OK I think I am close. Got jupyterhub deployed and everything with one snag:

It's not generating a public-facing IP address:

kubectl --namespace=kube get svc
   NAME           CLUSTER-IP      EXTERNAL-IP        PORT(S)        AGE
   hub            10.109.128.19   <none>             8081/TCP       3m
   proxy-api      10.96.110.230   <none>             8001/TCP       3m
   proxy-public   10.100.36.195   a72d589697ecd...   80:31656/TCP   3m

I'd assume that EXTERNAL-IP would have a proper IP address. I wonder if this is something about how my AWS instance is set up? Do I need to configure something special to allow public access?

The address under EXTERNAL-IP is a valid DNS name you can use. If it is cut off, try doing a describe svc proxy-public with kubectl to copy the full URL.
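If the column keeps getting truncated, a jsonpath query prints the full hostname directly (assuming the `kube` namespace used above):

```shell
# Print the full ELB hostname assigned to the proxy-public service.
kubectl --namespace=kube get svc proxy-public \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```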


boosh! a72d589697ecd11e7b8e202ffae2b2ec-945672095.us-west-2.elb.amazonaws.com

getting "PersistentVolumeClaim is not bound" errors...I think there's a fix for that in the guide IIRC

What is the output of

kubectl get storageclass -o yaml?


apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

So @yuvipanda and I chatted and it seems like this could be an issue for AWS. We need users to be able to have their own disks and it looks like this isn't something that comes by default.

@willingc when you got this up and running did you figure out a way to allow for people to have disks in their jupyterhub instance? @rdodev any thoughts on how one might enable this w/ the current setup?

@choldgraf I guess I'm not fully abreast of what the use-case architecture is for jupyterhub. Is it similar to tmpnb.org? If you have literature or diagrams, that would be greatly helpful.

hmmm, well there's lotsa docs describing JupyterHub and the tools it utilizes here:

https://zero-to-jupyterhub.readthedocs.io/en/latest/

As an example, a common use-case is a classroom setting. A teacher puts together a Docker image that contains all the requirements/dependencies/code/data etc needed for the class, and that image is served to students via JupyterHub. When students log in, kubernetes spins up a pod for them and attaches it to a persistent disk that contains the student's files (so that they can modify their notebooks and those changes will persist in time). It sounds like we're having trouble with the persistent-disk-attaching part.

@choldgraf great, thanks for the info. Let me look into it.

@choldgraf are the manifest files you've used in the master branch of the repo?

which repo? at this point I'm not actually working from any repo. just following the instructions post-kubernetes-install from here: https://zero-to-jupyterhub.readthedocs.io/en/latest/

(also just FYI I think that @yuvipanda will be of more help than I here, he's a lot better at debugging kube stuff)

@choldgraf yeah, would like to see the manifest files and how y'all are provisioning pods, volumes, etc. It's "blind" debugging right now :)

ah - I think this might be the best place to look actually:

https://github.com/jupyterhub/helm-chart/tree/master/jupyterhub

this is a repo with helm-charts to deploy a few tools in the jupyterhub ecosystem. That's probably what you wanted :-)

@choldgraf after a cursory look through the manifests seems this should be working. I'll spin up a cluster and try to replicate it -- might be a while though.

no problem - thanks for your help on a saturday morning :-)

@choldgraf sorry to bother you, but what version of kubernetes is the cluster running right now? 1.7+?

No worries at all!

kubectl version
   Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T07:00:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
   Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:08:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

It looks like the cluster he spun up didn't have a default storageclass (or any storage classes). The JupyterHub setup from that helm chart assumes there is a default storageclass, and that seems to be the current point of failure.

Also as Chris said - thank you so much for helping us out!


@yuvipanda I think you're spot on. Good catch. PVs will fail to provision because the PVC doesn't have a storageclass declared. https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1

Yup, exactly! That was an intentional decision I made when making the Helm Chart, since I figured most clusters should have a default provisioner. It let me keep cloud specific code off the chart, which was great!

@choldgraf consider this example for setting object class and give it a try:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#aws

@rdodev does the heptio installer not set up a storage class by default? But since it sets up the AWS Cloud Provider, us creating a storageclass will be good enough?

@choldgraf can you try:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

Put this into a file and do 'kubectl apply -f ' on it?
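Concretely, that would look something like this (the filename is hypothetical):

```shell
# Apply the StorageClass and confirm it is now the default;
# the default class shows "(default)" next to its name.
kubectl apply -f storageclass.yaml
kubectl get storageclass
```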

@yuvipanda that is correct AWS QS clusters do not come with a default StorageClass

@rdodev ah I see. is that an explicit decision that's unlikely to change in the future? Curious what the reasoning is!

@yuvipanda flexibility. We already have a few "opinions" codified there; I think the default storageclass was omitted to let folks expressly declare a storageclass. However, if you can make a good case for why this should change, we're always happy to hear it: https://github.com/heptio/aws-quickstart

(lemme just jump in here and say both you guys are awesome, thanks for helping out....we are getting close!)

I think we got it working!!!

btw: @rdodev do you know if there's a non-PDF version of that guide somewhere? It would make it easier for me to link to specific sections

@choldgraf excellent. You mean like the README here? https://github.com/heptio/aws-quickstart

@rdodev awesome!

My expectation on default storage classes mostly comes from https://kubernetes.io/docs/concepts/storage/persistent-volumes/#writing-portable-configuration, especially:

In the future, we expect most clusters to have DefaultStorageClass enabled, and to have some form of storage available. However, there may not be any storage class names which work on all clusters, so continue to not set one by default. At some point, the alpha annotation will cease to have meaning, but the unset storageClass field on the PVC will have the desired effect.

All the charts in the github.com/kubernetes/charts repo expect you to have a default storage class, for example (overrideable if needed), and I think that's a great practice to provide an experience that does the right thing in most cases by default and allows tweaking.
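In practice that contract means a chart's PVC simply leaves storageClassName unset, so the cluster's default provisioner kicks in. A minimal sketch (not the chart's actual template):

```
# A PVC that relies on the cluster's default StorageClass:
# storageClassName is deliberately omitted.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-example
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```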

I can write up a more thought out issue in the quickstart repo later if that'd be helpful!

@yuvipanda By all means, drop an issue and we'll consider it.

@rdodev done :) Thank you for your help here! And thanks @willingc for the connect!

See #129 for instructions I've added to z2jh...comments welcome!

Thanks @rdodev. You are helpful as always :-)

Nicely done @yuvipanda @aculich and @choldgraf.

Thanks for the feedback @willingc !

@rdodev I noticed that some of the amazon machine types aren't available in the drop-down list. Specifically I was looking for r4.2xlarge and couldn't find any of the r4 series in there. Is that an intentional heptio decision? Or an AWS thing?

@choldgraf since the main goal of AWS QS is evaluation and testing of K8s, we tried to keep the tested machine types to a reasonable subset of machines that are good for that purpose. Machines not in that list haven't been tested by us; however, you could modify the template, add the type manually and then launch the cluster manually.

ah ok - that makes sense! Along those lines...I just tried creating a cluster of seven r3.large's, and they failed to be created. It looks like 3 of the 7 didn't give a success message to AWS and so it rolled back the whole deployment. Have you guys encountered instability with certain machine types?

pinging you @rdodev in case you're only paying attention to parts of this thread in which you're mentioned ;-)

@choldgraf no, never seen consistent failures w/ any type of instance. Those types of errors are usually on AWS' side.

ok, I'll give it a shot again...

hmmm...I got the same failure to create + rollback. @aculich have you experienced any issues like this on AWS before?

Strange. Are you trying to launch into an existing VPC? What's the exact errors you're seeing?

nope - I'm creating a new one (the button on the left in the guide). It was hard to pin down a specific error message, but it seemed like a subset of the machines being requested didn't succeed (like 3 out of 7) so the whole thing failed and rolled back...

One theory is that this is related to some kind of limit on my AWS account...not sure how to test that out though. This works fine for all the tN machines

hmm - we were requesting r3.large, which isn't listed on that page, so not sure what kind of limits it has. :-/

@choldgraf "All Other Instance Types | 20" this is total per region so if you have any other deployed in a different AZ will count against quota.

Gotcha - yeah we were only requesting 7 so I guess this isn't the issue...hmmm, I can try and ask someone in a different part of the country to deploy w/ heptio and the same computational config

Let _me_ give it a try :D

:-)

Spinning up a cluster with 7 x r3.larges as we speak. Will update when done (or error).

@choldgraf [screenshots: CloudFormation stack parameters and successful creation result]

Region: Oregon (us-west-2)

damnit!

I mean.....that's great! :-)

hmmm, OK I can give it another shot with us-west-2b. This makes me wonder if it is something with my account...

If your account is a child/sub account it's possible other users under the same umbrella account have VMs running in that region and are invisible to you (thus bumping on the quota).

well either way, that's good news - let me send these instructions to another guy we're working with at UW and see if he can get the machines set up...I'm trying to do this so that we can use AWS + JupyterHub for a training camp in early September...so really it just needs to work for him :-)

@choldgraf so it worked, I presume? Please ping me if need be. Though I'm on Eastern time so probably won't check until tomorrow morning.

I still haven't got it working with r3 but it's working with the two machines... I'll let you know if my colleague can get it working. Thanks so much for your help! I'll report back w an update but either way I owe ya a :beer: or two!

hey @rdodev - I wonder if you're still around for a quick question!

First off - the AWS deployments are working quite well, I think...thanks so much for the great guide/template and all the help!

A question: somebody is asking about how to rescale their AWS cluster after deploying (specifically the "1-20" nodes). I looked through the guide but couldn't find a clear way to do this. Do you have any intuition for how to do this?

ping @arokem since he's interested in this

@choldgraf looking into this. Give me 1/2 hour or so to test solution.

The most graceful way is:

  1. Log into the AWS console and go to CloudFormation.
  2. Find the stack that you want to scale out (its name ends in a 12-character uppercase alphanumeric string; both stacks share the same prefix).
  3. Select the above-mentioned stack, then from the Actions menu, click on Update Stack.
  4. Click Next.
  5. In Parameters, change the value of Node Capacity to the desired value.
  6. Click Next twice.
  7. Confirm the change and click on Update.

//cc @choldgraf
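The same capacity change can be scripted with the AWS CLI. The stack name below is a placeholder, and the parameter key (`K8sNodeCapacity`) is my reading of the Heptio quickstart template, so verify both against your stack before running anything:

```shell
# Update only the node-capacity parameter, keeping the rest as-is.
# List your stack's actual parameter keys first:
#   aws cloudformation describe-stacks --stack-name <your-stack>
# Any other required parameters may need an extra
#   ParameterKey=<key>,UsePreviousValue=true
# entry each, since update-stack does not re-apply template defaults.
aws cloudformation update-stack \
    --stack-name my-k8s-stack \
    --use-previous-template \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameters ParameterKey=K8sNodeCapacity,ParameterValue=10
```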

Thanks! I will give this a try later today. I assume that other parameters can also be changed? For example, instance type, etc.?

@arokem it is possible, but that's a bit more complicated since changing instance type will nuke existing nodes and any data or workloads therein will be lost.

Hey all - as we now have more mature docs for a number of providers, I'm going to close this. If people would like to re-open, please feel free to do so! Though I think it'll be more useful to have issues for specific cloud providers we haven't supported yet, rather than one catch-all (especially since this one is quite long already!)

