Origin: Project with provisioned service stuck in marked for deletion state

Created on 9 May 2018 · 6 comments · Source: openshift/origin

Upon deleting a project containing a provisioned service (via ASB), the provisioned service gets stuck in a marked for deletion state.

Version

oc v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

openshift v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657

Steps To Reproduce
  1. Provision an ASB item to a new project.
  2. Delete the project.
Current Result

All resources in the project are deleted except the provisioned service, which stays stuck indefinitely at 'The service was marked for deletion.'

Expected Result

The provisioned service is deleted, followed by the project.

Additional Information

Two errors seemed to occur. Initially, the ASB logs showed:

[2018-05-08T19:54:15.25Z] [WARNING] - unable to retrieve provision time credentials - an empty namespace may not be set when a resource name is provided
10.128.0.1 - - [08/May/2018:19:54:15 +0000] "DELETE /ansible-service-broker/v2/service_instances/78532cd6-0dba-4fd4-9307-d97f47253503?accepts_incomplete=true&plan_id=a1d243f4d7f22777c8d0d67cb4ac1503&service_id=1dd62d51c52cc2ac404d58abc0c8fa94 HTTP/1.1" 500 90
time="2018-05-08T19:54:15Z" level=error msg="Unable to load secret '78532cd6-0dba-4fd4-9307-d97f47253503' from namespace ''"
time="2018-05-08T19:54:15Z" level=error msg="unable to get secret data for 78532cd6-0dba-4fd4-9307-d97f47253503, in namespace: "
time="2018-05-08T19:54:15Z" level=error msg="unable to get extracted credentials - an empty namespace may not be set when a resource name is provided"
[2018-05-08T19:54:15.881Z] [INFO] - All Jobs for instance: 78532cd6-0dba-4fd4-9307-d97f47253503 in state: in progress -

This is the case for each of the two projects the issue is occurring on.

On double-checking just now, though, it looks like nothing is being logged by the ASB anymore; instead, the provisioned service's last status message is:

message: >-
  Error deprovisioning, ClusterServiceClass (K8S:
  "1dd62d51c52cc2ac404d58abc0c8fa94" ExternalName: "dh-vnc-desktop-apb")
  at ClusterServiceBroker "ansible-service-broker": Delete
  https://asb.openshift-ansible-service-broker.svc:1338/ansible-service-broker/v2/service_instances/78532cd6-0dba-4fd4-9307-d97f47253503?accepts_incomplete=true&plan_id=a1d243f4d7f22777c8d0d67cb4ac1503&service_id=1dd62d51c52cc2ac404d58abc0c8fa94:
  dial tcp 172.30.149.118:1338: connect: cannot assign requested address
reason: DeprovisionCallFailed
status: Unknown
type: Ready


Most helpful comment

@shaunmp this is fixed in 3.10. The workaround in 3.9 is:

for i in $(oc get projects | grep Terminating | awk '{print $1}'); do echo $i; oc get serviceinstance -n $i -o yaml | sed "/kubernetes-incubator/d" | oc apply -f - ; done

All 6 comments

@openshift/sig-ansible-service-broker

@shaunmp this is fixed in 3.10. The workaround in 3.9 is:

for i in $(oc get projects | grep Terminating | awk '{print $1}'); do echo $i; oc get serviceinstance -n $i -o yaml | sed "/kubernetes-incubator/d" | oc apply -f - ; done


Just saying that this did the job for me on 3.10, which means it isn't fixed in 3.10.

I believe the workaround is intended to remove a finalizer so the project may be cleaned up.
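To make the workaround's mechanics concrete, here is a minimal sketch of what its `sed "/kubernetes-incubator/d"` step does before the result is piped back into `oc apply -f -`. The manifest below is a trimmed illustration using names from this thread, not real cluster output:

```shell
# Trimmed ServiceInstance manifest carrying the stuck finalizer
# (an illustration, not real cluster output).
cat > /tmp/serviceinstance.yaml <<'EOF'
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: dh-vnc-desktop-apb
  finalizers:
    - kubernetes-incubator/service-catalog
EOF

# Drop the finalizer line, exactly as the workaround's pipeline does
# before re-applying the object with `oc apply -f -`.
sed "/kubernetes-incubator/d" /tmp/serviceinstance.yaml
```

Note the blunt line-delete leaves an empty `finalizers:` key behind, which `oc apply` treats as clearing the list. The same effect can be had for a single instance with `oc patch serviceinstance <name> -n <project> --type merge -p '{"metadata":{"finalizers":null}}'` (the name and project here are placeholders).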

There are a few scenarios that intentionally leave the finalizer in place (so even with the fix that went into 3.10, there are still scenarios where a finalizer is left on a ServiceInstance as desired behavior). For ServiceInstances, leaving the finalizer is intended to force an admin to ensure off-cluster resources are cleaned up.

The admin then removes the finalizer to let cleanup proceed.

Having the same problem on 3.11.

For example:

this serviceinstance cannot be deleted:

NAME                          CLASS                                       PLAN      STATUS         AGE
cakephp-mysql-example-2g4rm   ClusterServiceClass/cakephp-mysql-example   default   Provisioning   12m

Just by removing this from the YAML, the serviceinstance can be deleted:

finalizers:
  - kubernetes-incubator/service-catalog
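For context, that finalizer sits under `metadata` in the ServiceInstance manifest. A trimmed illustration using the instance name from this thread (not a complete manifest):

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: cakephp-mysql-example-2g4rm
  finalizers:
    - kubernetes-incubator/service-catalog   # deleting this entry unblocks deletion
```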

@lucky-sideburn ensure you have upgraded/reinstalled the service catalog and template service broker.

Our clusters had these issues all the way up to 3.11 because the upgrade path didn't upgrade either catalog or TSB.

I think it's enough to run the playbooks again, but I just uninstalled them using openshift_service_catalog_remove=true and template_service_broker_remove=true before reinstalling.

