In my current application, we have an indexing service that pulls selected data from the main database application, and indexes it into ElasticSearch. One of the things it does in the process is add a URL field to the indexed data that can be used to go directly to the corresponding endpoint in the main API server.
The only practical way I've found to do this so far is to put the actual domain name of the external endpoint into an environment variable for the indexing service.
What I'd like to be able to do instead is to put the _route_ name in the settings for the indexing service, and have it somehow query Kubernetes/OpenShift from inside the running pod to get the external endpoint information for that route. (The indexing service runs under Python 3.5, but a command-line-based query capability would be fine.)
So something like:
>>> import openshift3.endpoints as endpoints
>>> client = endpoints.Client()
>>> routes = client.oapi.v1.namespaces(namespace='myproject').routes.get(name='myapp')
>>> routes.items[0].spec.host
'myapp-myproject.127.0.0.1.xip.io'
>>> routes.items[0].spec.tls.termination
'edge'
This does require REST API access to be granted to the project's service accounts:
oc adm policy add-role-to-group view system:serviceaccounts:myproject
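For reference, once that role is in place a pod can query the REST API directly using the service-account credentials mounted into every pod. The sketch below assumes only the standard in-cluster mount paths and the `route.openshift.io/v1` API group; it uses the stdlib so it works on Python 3.5, and the split into a URL-building helper plus a fetch function is just for illustration:

```python
import json
import ssl
import urllib.request

# Standard location of the service-account credentials inside a pod.
SA_DIR = '/var/run/secrets/kubernetes.io/serviceaccount'

def route_url(namespace, name):
    # Build the API URL for a named Route. (OpenShift 3.x also serves
    # routes under /oapi/v1; the group API path is the newer form.)
    return ('https://kubernetes.default.svc'
            '/apis/route.openshift.io/v1/namespaces/%s/routes/%s'
            % (namespace, name))

def get_route(name):
    # Read the pod's own namespace and bearer token from the mounted secret.
    with open(SA_DIR + '/namespace') as f:
        namespace = f.read().strip()
    with open(SA_DIR + '/token') as f:
        token = f.read().strip()
    # Verify the API server against the cluster CA that is also mounted.
    ctx = ssl.create_default_context(cafile=SA_DIR + '/ca.crt')
    req = urllib.request.Request(
        route_url(namespace, name),
        headers={'Authorization': 'Bearer ' + token})
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.loads(resp.read().decode())
```

The returned dict has the same shape as the `oc get route -o json` output, so the host would be at `['spec']['host']`.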
The namespace could be passed in via the Downward API in an environment variable, although technically you could just get the list of projects and grab the only entry in it, since the service-account token mounted into the pod can only see its own project anyway.
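For reference, passing the namespace in via the Downward API looks like this in the pod spec (the variable name `POD_NAMESPACE` is an arbitrary choice):

```yaml
env:
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
```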
The route name you either pass in or, depending on how you name things, you can assume you can drop the last two segments of the pod name and use the result as the route name. Usually it will match the route name, as you make everything match.
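With the usual deployment naming convention (`<name>-<deployment number>-<random suffix>`), dropping the last two segments recovers the name; the pod names below are made up for illustration:

```python
def route_name_from_pod(pod_name):
    # Drop the deployment number and random suffix,
    # e.g. 'myapp-3-x9k2p' -> 'myapp'.
    return '-'.join(pod_name.split('-')[:-2])
```

Since the split is on every `-`, names that themselves contain hyphens still come out right, because only the last two segments are discarded.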
For my purposes, a get_default_route_endpoint feature that reported the external endpoint for a route that had the same name as the pod name with the last two segments removed would be perfect.
That way I wouldn't need any new configuration settings at all, I'd just need to make sure that the main external route used the conventional name.
From a maintainability perspective, folks wouldn't need to know the details of how it was looked up at runtime, they'd just need to know that yes, the service _does_ have a default external endpoint, and yes you _can_ query for the details of how to access it from the outside world.
Closer.
import os

import openshift3.endpoints as endpoints

def get_default_route_endpoint():
    client = endpoints.Client()
    project = client.api.v1.projects.get().items[0].metadata.name
    name = '-'.join(os.environ['HOSTNAME'].split('-')[:-2])
    route = client.oapi.v1.namespaces(namespace=project).routes.get(name=name).items[0]
    host = route.spec.host
    path = route.spec.path or '/'
    if route.spec.tls:
        return 'https://%s%s' % (host, path)
    return 'http://%s%s' % (host, path)
I couldn't test the path bit, as I need to fix a bug I found in the client first.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
As far as I know, it's still really unclear how to emit a stable external endpoint reference for use in logs, etc.
@openshift/sig-networking
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
In the same vein, with oc and jq:
$ oc get routes -o json | jq -r '.items[0].spec.host'
lunging-zebu-kong-proxy.cluster1.zoobab.com
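The same jq approach can also assemble the full external URL, picking the scheme from whether TLS is configured and falling back to `/` when no path is set (the route name `myapp` is a placeholder):

```shell
oc get route myapp -o json | \
  jq -r 'if .spec.tls then "https" else "http" end + "://" + .spec.host + (.spec.path // "/")'
```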