Running against external kube version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"3+", GitVersion:"v1.3.0-alpha.5-dirty", GitCommit:"914163247d9a16b46921e83f7dbedb572859b3e4", GitTreeState:"dirty", BuildDate:"2016-06-14T18:43:00Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.5", GitCommit:"b0deb2eb8f4037421077f77cb163dbb4c0a2a9f5", GitTreeState:"clean", BuildDate:"2016-08-11T20:21:58Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
SecurityContextConstraints does not exist in the Kubernetes API, I think. So OpenShift is trying to hit the external kube apiserver for something that is OpenShift-specific.
So reading through the code:
E0812 00:23:49.586940 1 reflector.go:216] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:87: Failed to list *api.SecurityContextConstraints: the server could not find the requested resource
I0812 00:23:49.886093 1 ensure.go:193] Ignoring default security context constraints when running on external Kubernetes.
So it should _not_ use security context constraints.
E0812 00:23:58.859631 1 reflector.go:216] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:87: Failed to list *api.SecurityContextConstraints: the server could not find the requested resource
I think something that should be ignoring the constraints is not.
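One way to confirm the diagnosis above - a hedged sketch, not from the thread: the apiserver's /api/v1 discovery endpoint returns an APIResourceList, and an OpenShift master would list securitycontextconstraints there while a plain kube apiserver would not. The host and flags below are placeholders.

```shell
# Succeeds if the discovery JSON on stdin lists the SCC resource.
# (Pattern tolerates both compact and pretty-printed JSON.)
has_scc() {
  grep -q '"name": *"securitycontextconstraints"'
}

# Against the live external apiserver (host/credentials are assumptions):
#   curl -sk https://<kube-master>:443/api/v1 | has_scc \
#     || echo "securitycontextconstraints not served; SCC lookups will 404"
```

If the fallback message prints, the reflector errors above are expected: the controller is listing a resource the external server simply does not have.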
@pweil- @danmcp Any updates on this? I can't deploy OpenShift because of this.
Updating this issue now, after upgrading to 1.4.0-alpha.0:
E1009 21:25:45.788504 1 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:62: Failed to list *storage.StorageClass: the server could not find the requested resource
E1009 21:25:45.804624 1 reflector.go:214] github.com/openshift/origin/pkg/build/controller/factory/factory.go:246: Failed to list *api.Pod: the server could not find the requested resource (get builds)
E1009 21:25:45.805077 1 reflector.go:214] github.com/openshift/origin/pkg/build/controller/factory/factory.go:285: Failed to list *api.ImageStream: the server could not find the requested resource (get imageStreams)
E1009 21:25:45.808053 1 reflector.go:214] github.com/openshift/origin/pkg/build/controller/factory/factory.go:191: Failed to list *api.Build: the server could not find the requested resource (get builds)
E1009 21:25:45.808676 1 reflector.go:214] github.com/openshift/origin/pkg/build/controller/factory/factory.go:324: Failed to list *api.BuildConfig: the server could not find the requested resource (get buildConfigs)
E1009 21:25:45.811129 1 reflector.go:214] github.com/openshift/origin/pkg/build/controller/factory/factory.go:288: Failed to list *api.BuildConfig: the server could not find the requested resource (get buildConfigs)
Here's my master-config.yaml:
admissionConfig:
  pluginConfig: null
apiLevels:
- v1
apiVersion: v1
assetConfig:
  extensionDevelopment: false
  extensionProperties: null
  extensionScripts: null
  extensionStylesheets: null
  extensions: null
  loggingPublicURL: ''
  logoutURL: ''
  masterPublicURL: https://[redacted]:443
  metricsPublicURL: ""
  publicURL: https://[redacted]:443/console/
  servingInfo:
    bindAddress: 0.0.0.0:443
    bindNetwork: tcp4
    certFile: master.server.crt
    clientCA: ''
    keyFile: master.server.key
    maxRequestsInFlight: 0
    namedCertificates: null
    requestTimeoutSeconds: 0
auditConfig:
  enabled: false
controllerConfig:
  serviceServingCert:
    signer:
      certFile: service-signer.crt
      keyFile: service-signer.key
controllerLeaseTTL: 0
controllers: '*'
corsAllowedOrigins:
- 127.0.0.1
- localhost
- [redacted]:443
disabledFeatures: null
dnsConfig:
  allowRecursiveQueries: true
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4
etcdClientInfo:
  ca: etcd.server.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - http://etcd:4001
etcdConfig: null
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false
imagePolicyConfig:
  disableScheduledImport: false
  maxImagesBulkImportedPerRepository: 5
  maxScheduledImageImportsPerMinute: 60
  scheduledImageImportMinimumIntervalSeconds: 900
jenkinsPipelineConfig:
  enabled: true
  parameters: null
  serviceName: jenkins
  templateName: jenkins
  templateNamespace: openshift
kind: MasterConfig
kubeletClientInfo:
  ca: ''
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
kubernetesMasterConfig: null
masterClients:
  externalKubernetesClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 400
    contentType: application/vnd.kubernetes.protobuf
    qps: 200
  externalKubernetesKubeConfig: external-master.kubeconfig
  openshiftLoopbackClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 600
    contentType: application/vnd.kubernetes.protobuf
    qps: 300
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://[redacted]:443
networkConfig:
  clusterNetworkCIDR: 10.128.0.0/14
  externalIPNetworkCIDRs: null
  hostSubnetLength: 9
  networkPluginName: ''
  serviceNetworkCIDR: 172.30.0.0/16
oauthConfig:
  alwaysShowProviderSelection: false
  assetPublicURL: https://[redacted]:443/console/
  grantConfig:
    method: auto
    serviceAccountMethod: prompt
  identityProviders:
  - name: github
    challenge: false
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: GitHubIdentityProvider
      clientID: [redacted]
      clientSecret: [redacted]
      organizations:
      - fuserobotics
  masterCA: ca-bundle.crt
  masterPublicURL: https://[redacted]:443
  masterURL: https://10.0.196.249:443
  sessionConfig:
    sessionMaxAgeSeconds: 300
    sessionName: ssn
    sessionSecretsFile: ''
  templates: null
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 300
pauseControllers: false
policyConfig:
  bootstrapPolicyFile: policy.json
  openshiftInfrastructureNamespace: openshift-infra
  openshiftSharedResourcesNamespace: openshift
  userAgentMatchingConfig:
    defaultRejectionMessage: ''
    deniedClients: null
    requiredClients: null
projectConfig:
  defaultNodeSelector: ''
  projectRequestMessage: ''
  projectRequestTemplate: ''
  securityAllocator:
    mcsAllocatorRange: s0:/2
    mcsLabelsPerProject: 5
    uidAllocatorRange: 1000000000-1999999999/10000
routingConfig:
  subdomain: router.default.svc.cluster.local
serviceAccountConfig:
  limitSecretReferences: false
  managedNames:
  - builder
  - deployer
  masterCA: ''
  privateKeyFile: ''
  publicKeyFiles:
  - serviceaccounts.public.key
servingInfo:
  bindAddress: 0.0.0.0:443
  bindNetwork: tcp4
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 500
  namedCertificates: null
  requestTimeoutSeconds: 3600
volumeConfig:
  dynamicProvisioningEnabled: true
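If I read the config right, the two settings above that put the master into external-Kubernetes mode are these (excerpted for emphasis, not a complete fragment):

```yaml
# kubernetesMasterConfig: null tells the master not to run its own embedded
# kube apiserver; externalKubernetesKubeConfig points it at the separately
# deployed cluster instead.
kubernetesMasterConfig: null
masterClients:
  externalKubernetesKubeConfig: external-master.kubeconfig
```

So every OpenShift controller in this process is talking to an apiserver that knows nothing about OpenShift resource types.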
cc @liggitt
More fundamental work is required to support that deployment model. That work is ongoing, but won't be completed this release.
@liggitt It worked in previous releases - what exactly broke it? Is there some kind of hack I could do to get this working in the meantime?
API groups were added upstream without supporting code to federate groups from different servers. That work is still ongoing upstream and will be needed to unify a single server surface with API groups from both Kube and OpenShift.
Great, thanks. I'll track upstream on that.
@liggitt Is there an issue on the Kubernetes repo I can follow regarding this? It's a nightmare trying to find something like this in the issues.
@liggitt Another ping on this... We can't use OpenShift right now because of this, since we deploy Kubernetes clusters separately. Has this been fixed? What work needs to be done to fix it? I'd be happy to look into doing it myself if you can point me to what needs to be done.
See the last comment on issue #8124: "external kubernetes mode has been removed. In the future, API aggregation and external initialization will allow OpenShift APIs to be surfaced and consumed by normal kubectl clients." It sounds like external Kubernetes clusters won't be supported for some time... I'm sad about that too.
Oh man that's stupid.
An immense amount of work has gone into the upstream Kubernetes Aggregator. We aim to deploy OpenShift Origin under this umbrella in the future, after that feature is ready. You will be free to use Kubernetes API federation as you want at that point, with OpenShift just being one of the underlying API Servers.
There are so many countless reasons why openshift should support Kubernetes externally...
And more... This is a pretty bad decision imo...
Which use-case is not handled with k8s API federation? I am confused about what, specifically, you think is a bad decision. To be clear, in the future it will be possible to deploy a kubernetes cluster, on top of that deploy an OpenShift API server, have oc and kubectl talk to the k8s aggregator and have API requests reach their federated endpoints as needed.
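For concreteness, the aggregation model described above would eventually let an OpenShift API group be registered with the kube aggregator via an APIService object. A minimal sketch, with assumed group, namespace, and service names - this is illustrative, not a shipped manifest:

```yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1.apps.openshift.io          # assumed OpenShift API group/version
spec:
  group: apps.openshift.io
  version: v1
  service:
    namespace: openshift-apiserver    # assumed namespace and service for the
    name: api                         # aggregated OpenShift API server
  groupPriorityMinimum: 9900
  versionPriority: 15
  # caBundle: <base64 CA bundle that signs the service's serving cert>
```

With such a registration in place, oc and kubectl would talk only to the aggregator: requests for apps.openshift.io resources get proxied to the OpenShift server, while core resources stay with the kube apiserver.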
I sent that message at basically the exact same moment you submitted yours - after reading into federation I think that approach looks good and appreciate the work going into it. I just wish we had an alternative in the mean time.
Thanks for the response!
To answer your question, removing a capability before its replacement is made available is a bad decision. Quite terrible, in fact. It had barely been a year since several developers created examples/instructions on how to deploy OpenShift to external Kubernetes before the feature was removed. They didn't even give it enough time to settle and prove its value.
Well this last comment was in April. It is now the future. Am I able to deploy an OpenShift API server onto the latest stable release of Kubernetes?
To be fair, I don't think this was ever a stable feature that has since been removed. The original examples were really proofs of concept that barely worked. Much of the work to make this possible is (I believe) still ongoing in upstream Kubernetes (such as third-party API objects), and there may be more work still to do - I'm not on the core OpenShift team, who would know and be able to give more detail here.
They did not barely work, they worked perfectly. There was a core design change in OpenShift that broke it - a design change where the developers decided it was too difficult to use a remote kubernetes server, and that it would be easier to just force everyone to run their own vendored version of it. With the lackluster promise of federation somewhere off in the future. It was sloppy and made our company refuse to ever use openshift again for anything, since we had started using it and suddenly been told our way was wrong.
I've wrestled with trying to get the ansible system to work for weeks now, and even with vanilla CentOS machines it's not easy. I'm very disappointed with the direction OpenShift has taken - it's clear that Red Hat cares more about making a service they can host themselves and control than something truly useful for people to deploy on their own.
Deploying kubernetes is easy, and there's a thousand ways and platform to do it. Placing openshift inside an existing cluster is the only way that makes sense at all to me. OpenShift needs the kubernetes master - why refuse to allow them to be separate?
Hi @paralin - sorry you feel that way. The original separation was very early in the project, before the vast majority of the OpenShift specific security changes were in place. We left the flag, but it was never able to really provide openshift behavior on top of a Kubernetes cluster. We've spent most of the last year or two bringing Kubernetes up to the point where we can consider running OpenShift on top of an existing cluster - RBAC, PSP, External API servers, CRD, External API aggregation, External admission control, Initializers, etc. I hope that in a few releases it'll be possible to have a full multi-tenant Kubernetes cluster that delivers everything that OpenShift does today by installing OpenShift on top, but that flag is just a fraction of the real work.
Ultimately, OpenShift is a distribution of Kubernetes, and the analogy to the RHEL kernel to the upstream linux kernel is still appropriate in this case. I'd very much like to have OpenShift on top of Kube be trivial to do. But practically, upstream Kubernetes is not ready yet to be the foundation for a fully multi-tenant platform for applications. The extensibility mechanisms, security protections, and feature set are very broad, and it's only now starting to become possible. We are definitely committed to moving in that direction, but it is going to take some time.
With regards to installing OpenShift - we are definitely trying to make the ansible install experience be simple. There's always wrinkles, but any feedback you have about it I'd be happy to take.
I respect that you guys must be careful to ensure that the experience, particularly around security, is strong and consistent. And I do agree that there were some serious kludges with the old implementation. I'm glad to hear you guys are moving in this direction. Tools like Spinnaker sort of gloss over the multi-tenant part by allowing everyone in the company access to everything, which ultimately can't be used for hosting shared platforms. This is why I'm still returning to OpenShift for our clusters here at Purdue.
I've managed to work around the problems I had with the ansible setup and it seems you're fixing an awful lot of stuff in the newest branch. I'm glad to see the progress there and will continue trying to use OpenShift for our bare metal clusters.
I'm looking forward to
Shit, fat fingered the button!
Looking forward to where this is going. Closing this issue since it's no longer relevant. Thanks for the reply.
@mfojtik can you make sure that we have a clear tracking card for openshift on kube that we can reference here in trello? I would hope for folks it's easy to follow along and see the general arc (even if some of the middle pieces may be under specified)
It's very likely we'll start carving off pieces first (work david and jordan and others have done to make features usable on kube). We also want to have openshift start node launch the kubelet binary directly (mapping config) as well as having the node dns, openshift-sdn, and kube-proxy components run separately. I would expect you'll start seeing those pieces start to happen soon.
@smarterclayton https://trello.com/c/xcSkBfbf/1034-openshift-running-on-kubernetes I will try to fill in description later and link the corresponding Trello cards as this seems more like an "epic card".
And yes, as Clayton mentioned, we are currently putting a lot of effort to granularize OpenShift infrastructure that includes carving out the API servers for OpenShift API groups, controller initialization refactoring, migration to generated clients, openshift/client-go and more.
All this should ultimately lead us to make it possible to run OpenShift on Kube and have OpenShift API aggregated with the Kube API (Clayton or @deads2k can correct me ;-)
Is there some way for me to subscribe to updates on that trello card?
@paralin are you able to hit the "Subscribe" button on the card as explained in this doc? You may need to be a member of the board but I am not sure.
that board is public, so I think anyone can join