Quarkus: Improved OpenShift Deployments

Created on 28 May 2020 · 17 Comments · Source: quarkusio/quarkus

Description
I like the Kubernetes and OpenShift extensions since they generate the yaml files, and if necessary you can change these settings easily, for example via a properties file. I think that's the right approach: make it simple for first-time Quarkus developers, but allow them to change things later when needed. The same goes for the generated Dockerfiles, so that you can, for example, switch to OpenJ9.
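For illustration, a minimal sketch of the kind of property overrides meant here, assuming the property names exposed by the kubernetes/openshift extensions (the values are placeholders):

```properties
# Sketch: adjust the generated OpenShift yaml from application.properties
# (property names assume the Quarkus kubernetes/openshift extensions; values are placeholders)
quarkus.kubernetes.deployment-target=openshift
quarkus.openshift.replicas=2
quarkus.openshift.labels.app=getting-started
```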

It's too bad, though, that S2I is the only way to deploy to OpenShift. In my case I used Java 11 locally, which didn't work with the standard S2I image that only supports Java 8, since I had compiled locally rather than on the server. I could implement my own image builder, but why should I have to? I have my Dockerfile and yaml files exactly how I want them. Even if there were a Java 11 S2I image, I'd prefer to use my own Dockerfile.

The alternatives have weaknesses for my requirements too: binary builds let you build on the server side and use existing Dockerfiles and yaml files, but I don't see how you can refer to existing (remote) Git repos. When you create builds from remote Git repos, you cannot apply your own yaml files. And I didn't get the Jib and Docker options to work on my Mac, even after I created a route to the registry.

I think it would be great to have solutions for two scenarios:

1) Developer wants to quickly push and test
The Quarkus extensions come very close to solving this already. Unfortunately S2I is the only option which means I cannot use my own Dockerfile and I'm forced to use an older Java version I haven't tested with.

2) Deployments to production environments
In this case the code is pushed to a Git repo first, tagged, etc. which means that mechanisms like binary builds cannot be used. Jenkins pipelines seem to be deprecated. I haven't seen documentation or best practices for how to do this.

Implementation ideas
For scenario (1): Could the Kubernetes/OpenShift extension be extended to use the Dockerfile in the project rather than using S2I?
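A rough sketch of what that could look like today with the container-image-docker extension, assuming the quarkus.docker.dockerfile-jvm-path property and the default Dockerfile location generated by Quarkus:

```properties
# Sketch: build with the project's own Dockerfile instead of S2I
# (assumes quarkus-container-image-docker is added as a dependency;
#  the path below is the default generated location and may differ per project)
quarkus.container-image.build=true
quarkus.docker.dockerfile-jvm-path=src/main/docker/Dockerfile.jvm
```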

For scenario (2) it seems like there is a discussion going on already: https://github.com/quarkusio/quarkus/issues/6147

area/kubernetes kind/enhancement

All 17 comments

thanks @nheidloff - good comments; did you try using "quarkus-container-image-docker" with the kubernetes extension? Then you should have the control you are after in #1 (if I understand correctly)
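For reference, a sketch of what that combination could look like in application.properties (registry and group are placeholders for your own registry and organization; quarkus.kubernetes.deploy is the property that triggers the deployment as part of the build):

```properties
# Sketch: quarkus-container-image-docker together with the kubernetes extension
# (registry/group are placeholders; adjust for your own registry and organization)
quarkus.container-image.build=true
quarkus.container-image.push=true
quarkus.container-image.registry=quay.io
quarkus.container-image.group=my-org
quarkus.kubernetes.deploy=true
```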

for #2, you can use Jenkins pipelines or Tekton or even GitHub Actions... we definitely should get these options documented and, where possible, set up for you as suggested in #6147

/cc @geoand @iocanel

+1 to everything @maxandersen said

Thanks @maxandersen.

Does the Docker extension work for OpenShift? I thought only S2I was supported for OpenShift. I get the following error:
Failed to execute goal io.quarkus:quarkus-maven-plugin:1.4.2.Final:build (default) on project kubernetes-quickstart: Failed to build quarkus application: io.quarkus.builder.BuildException: Build failure: Build failed due to errors
The documentation says that you can only access the OpenShift registry from Linux via podman. Are there other ways? I created a route to the registry but was still not able to connect to it via Docker.

Are the Jenkins pipelines still supported? The documentation says they are deprecated.

Is there a sample demonstrating a Tekton pipeline for Quarkus apps?

We don't have any Quarkus / Tekton integration yet, but that should arrive fairly soon - cc @iocanel

Some alternatives:

  1. Use quarkus-kubernetes with container-image-docker or even container-image-jib.
  2. Use S2I with a custom quarkus.s2i.base-jvm-image; it should work with something like openjdk/openjdk-11-rhel7:latest (see the sketch below).
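A minimal sketch of option 2, using the image name from the suggestion above (depending on the cluster, a full registry prefix such as registry.access.redhat.com may be required):

```properties
# Sketch: keep S2I but point it at a Java 11 builder image
# (image reference taken from the suggestion above; verify the exact name/tag in your registry)
quarkus.s2i.base-jvm-image=openjdk/openjdk-11-rhel7:latest
```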

Hi @iocanel, I've tried your (1), but without success, since I cannot access the container registry even after I created a route.

Your (2) is certainly an option, but it's not really what I want to do since I want to use my own Dockerfile and yaml files.

@nheidloff: Ok, then I'll start working on (1). Do you have any additional info on why (1) is not working? An error message, a stacktrace or anything else?

I don't have time to redo this right now, but it was some connectivity error. I'll try later (probably only tomorrow). Thanks a lot!

> The documentation says that you can only access the OpenShift registry from Linux via podman. Are there other ways? I created a route to the registry but was still not able to connect to it via Docker.

the documentation just mentions podman as one option. Docker or any other similar client should just work as soon as you have the route created.
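For reference, the commonly documented way to expose the internal registry on OpenShift 4 and log in with a plain docker client looks roughly like this (route name and commands follow the OpenShift docs; adjust for your cluster, and note that the local docker daemon must trust the route's TLS certificate):

```shell
# Ask the image registry operator to expose a default route (OpenShift 4.x)
oc patch configs.imageregistry.operator.openshift.io/cluster \
  --type=merge -p '{"spec":{"defaultRoute":true}}'

# Resolve the route host and log in with the current user's token
REGISTRY=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')
docker login -u "$(oc whoami)" -p "$(oc whoami -t)" "$REGISTRY"
```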

@nheidloff
This seems to relate to my latest issue here: https://github.com/quarkusio/quarkus/issues/10055
I prefer to use the container-image-docker and the kubernetes extension for OpenShift deployment (without S2I).
For me this is the easiest and fastest deployment, and the built container image can easily be deployed to other stages (from dev to test to prod).

I am wondering what's the best way to solve this...

When the user is not using container-image-s2i or somehow selects a different image building method:

  1. Use Deployments
  2. Use DeploymentConfig without triggers.
  3. Use a normal ImageStream pointing to a docker registry instead of using an output ImageStream and BuildConfig (a sketch follows below).

I am leaning towards 3 as it maximizes flexibility. I am not sure, though, about possible side effects, so I need your opinions.

@maxandersen @Ladicek @geoand @rmh78 @nheidloff ^
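A rough sketch of what option 3 could look like: an ImageStream whose tag tracks an image in an external registry instead of being the output of a BuildConfig (names and registry are placeholders):

```yaml
# Sketch: ImageStream tag tracking an image in an external registry
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: my-quarkus-app
spec:
  tags:
    - name: latest
      from:
        kind: DockerImage
        name: quay.io/my-org/my-quarkus-app:latest
      importPolicy:
        scheduled: true   # periodically re-import so image updates are picked up
```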

3) sounds reasonable to me as well, but I have personally never used an ImageStream like that, so I can't comment on side effects

So if there's no container-image-s2i, it doesn't make sense to create a build config (and its output image stream), right? (At least not in their present form, which is tailored to S2I binary builds.) That is irrespective of whether we use Deployments or DeploymentConfigs, right?

That said, I also think that using ImageStreams is idiomatic in the OpenShift world, so would also lean towards option 3.

As mentioned in #10055, I prefer a DeploymentConfig without triggers that points to an ImageStream, which in turn points to any Docker registry. I think it could make sense to optionally set a trigger on the ImageStream that causes a re-deployment on image change.

Thanks for the feedback, I'll try to provide a fix asap

if a user wants a plain Kubernetes deployment, he would just use the kubernetes extension, not the openshift one - is that the thinking here? (that adding openshift means "deploy nicely, but use OpenShift features that help if available")

This can be closed now.
