Deleting Deployment should delete Replica Sets and Services
Dashboard version: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
Kubernetes Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Operating system: Linux k8s-master01 4.8.0-22-generic #24-Ubuntu SMP Sat Oct 8 09:15:00 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Node.js version:
Go version: 1.7.4
I'm assuming that, given the way K8s's container orchestration is meant to work, when a new Deployment is created it spawns all the underlying objects (ReplicaSets, Pods, and Services). So shouldn't removing the Deployment also remove everything it created?
This confused me about OpenShift as well: when you delete a deployment there, it doesn't delete the ReplicaSet either, whereas kubectl does. Deleting the ReplicaSet does seem to clean up the pods, though.
I'm less clear on services, though. kubectl won't delete services when you delete a Deployment. Why do you think services should be deleted as well? One of the problems with Kubernetes is that there is no notion of an "app" to delete. It's all just a bunch of loosely coupled components, so there's no good generic way to know what to delete.
@IanLewis I was relieved that deleting the ReplicaSet also deletes the pods. I once tried, just for fun, setting replicas to 100.
I actually think the Services could remain, so they can be reused, reattached, or duplicated. I've often found myself going through the process of kubectl --namespace ns exec pod /bin/bash -i and then running netstat -ant to figure out which ports that particular service has decided to listen on, then deleting that Deployment and recreating it with an external Service that can reach that listening port.
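For reference, that workflow looks roughly like this (the namespace and pod name here are just placeholders, and netstat has to exist inside the container):
$ kubectl --namespace my-ns get pods
$ kubectl --namespace my-ns exec -it my-pod-4234284026-0z8tq -- /bin/bash
root@my-pod:/# netstat -ant   # see which ports the app is actually listening on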
I think you described it perfectly: an "app". A Deployment should be that "app". A Deployment will deploy any Docker image with replicas set to 1. Most of the time the deployment will fail when it isn't given the required environment variables (e.g. kylemanna/openvpn).
But that's okay, because in a future version of kubernetes/dashboard:
After deployment, and after changing any of the options above, Kubernetes will just automatically re-deploy with those new changes.
Currently, if I want to make any changes through the UI, the best way is to remember what I want changed, delete the Deployment, ReplicaSet, and Services, and then re-deploy with the new settings. It's very tedious, and it loses a few UX points.
We are currently discussing ideas to make it easier to use, but defining what an "app" is is hard. Kubernetes itself doesn't really have the idea of an "app" because these things are complex and user-definable, objects can be reused across apps, etc. It's a lower-level framework, or set of components, that other things can build on top of.
As for the Deployment, the ReplicaSet should really get cleaned up by the Deployment controller in core Kubernetes. Right now kubectl is cheating and deleting it for you on the client side.
@maciaszczykm @floreks @IanLewis Does cascade delete work? I thought it did, which would mean this bug is fixed.
I'll verify.
@bryk Checked on latest master; it doesn't remove pods. After deleting a deployment, only the replica set was deleted automatically; all pods stayed in the cluster. Moreover, Dashboard crashes when trying to open a pod's page, because its creator no longer exists.
In the latest stable kubectl:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.0-alpha.0.851+6d92abdc0a2d35", GitCommit:"6d92abdc0a2d352fcb0e884ad6bf14c6d702bc0a", GitTreeState:"clean", BuildDate:"2017-03-06T11:38:24Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get all
NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.0.0.1     <none>        443/TCP   1d
$ kubectl create -f deployment.yaml
deployment "nginx-deployment" created
$ kubectl get all
NAME                                   READY     STATUS              RESTARTS   AGE
po/nginx-deployment-4234284026-0z8tq   0/1       ContainerCreating   0          2s
po/nginx-deployment-4234284026-5g4n7   1/1       Running             0          2s
po/nginx-deployment-4234284026-x727j   1/1       Running             0          2s

NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.0.0.1     <none>        443/TCP   1d

NAME                      KIND
deploy/nginx-deployment   Deployment.v1beta1.apps

NAME                             DESIRED   CURRENT   READY     AGE
rs/nginx-deployment-4234284026   3         3         3         2s
$ kubectl delete deployment nginx-deployment --cascade=true
deployment "nginx-deployment" deleted
$ kubectl get all
NAME                                   READY     STATUS    RESTARTS   AGE
po/nginx-deployment-4234284026-0z8tq   1/1       Running   0          11s
po/nginx-deployment-4234284026-5g4n7   1/1       Running   0          11s
po/nginx-deployment-4234284026-x727j   1/1       Running   0          11s

NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.0.0.1     <none>        443/TCP   1d

NAME                             DESIRED   CURRENT   READY     AGE
rs/nginx-deployment-4234284026   3         3         3         11s
I guess Dashboard has 2 bugs here.
Hmm... What K8s cluster version do you have? I thought that as of 1.5, cascade delete works for all objects.
@bryk
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.0-alpha.0.851+6d92abdc0a2d35", GitCommit:"6d92abdc0a2d352fcb0e884ad6bf14c6d702bc0a", GitTreeState:"clean", BuildDate:"2017-03-06T11:38:24Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
That's sad. Could we perhaps implement this on our side?
If cascade is enabled and it doesn't delete pods when deleting a deployment, then that's a server-side bug AFAICT. We should file a server bug if we can reproduce it.
FWIW this bug says 1.6 ¯\_(ツ)_/¯
https://github.com/kubernetes/kubernetes/issues/40014
AFAICT, when using the API, cascading delete requires an extra parameter. We should add a checkbox for cascading delete to the delete confirmation dialog that defaults to true, and pass that option to the API when the checkbox is checked.
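For reference, the extra parameter is carried in the DeleteOptions body of the DELETE request. A rough sketch (assuming a 1.6+ apiserver with the GC enabled; the namespace and deployment name are placeholders, and the request goes through kubectl proxy):
$ kubectl proxy &
$ curl -X DELETE -H "Content-Type: application/json" \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
    http://127.0.0.1:8001/apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment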

1.6 is out, so the Dashboard can rely on the GC now.
What is the progress on cascade delete?
@kargakis I just tried to cascade-delete a deployment created on a 1.6 cluster. It deleted the replica sets, but the pods stayed there.
Is this a bug or WAI?
I think we still need to add the flag to tell it to do a cascading delete.
It's a bug. @caesarxuchao has a PR open https://github.com/kubernetes/kubernetes/pull/44058
Yes, it's a known issue. Setting DeleteOptions.PropagationPolicy="Foreground" will delete both the replicasets and the pods.
Based on the latest master branch of client-go, I wrote a simple program to test DeleteOptions.PropagationPolicy="Foreground". Here is the testing result:
@caesarxuchao is the behavior against v1.5.7 as expected?
@Huang-Wei The latest code won't work with 1.5.x because API objects have changed between 1.5.x and 1.6.x. This object is in a different package now, and an older apiserver won't know how to deserialize it; you'll probably get some kind of version-mismatch or unrecognized-object error. Also, the client-go page has a compatibility matrix that we try to keep up to date, so some API objects may differ, but common stuff will work.
Thanks @floreks for the clarification. I'm still on the latest client-go, and it's a pain to change the version to v2.0.0. To adapt to k8s 1.5, I tried the solution given at https://github.com/kubernetes/client-go/issues/50#issuecomment-268689926:
It works, a little dumb though...
@caesarxuchao Do you know how to configure DeleteOptions to force pod deletion together with a Job? Currently, setting delete propagation to Foreground does not work for Jobs, and OrphanDependents is deprecated.
Same here for Jobs. There is no way to delete the related pods.
@joan38 #2176
Thanks @floreks