Ingress-nginx: Call for Discussion: FastCGI Support?

Created on 24 Aug 2018 · 19 comments · Source: kubernetes/ingress-nginx

Is this a request for help?
No.

What keywords did you search in NGINX Ingress controller issues before filing this one?
php, cgi


Is this a BUG REPORT or FEATURE REQUEST?
FEATURE REQUEST

If this call for discussion doesn't clash with the project's roadmap and intended functionality, I'd like to discuss implementing FastCGI support in the ingress controller.

The reason is that currently, at least for people using PHP-FPM, the topology looks like this:

nginx-ingress > nginx + php

Since nginx is perfectly capable of speaking FastCGI, the topology could be:

nginx-ingress > php

This would completely remove the need for running nginx inside application pods: the ingress would talk directly to the backend, removing one hop from the request path and reducing the cluster's resource footprint.

Thoughts?

Most helpful comment

We have a complete solution for this feature request; we just have to finish the docs and we'll open a PR.

All 19 comments

Speaking as someone who has to deal with multiple layers of nginx (including the ingress controller) to run PHP apps in k8s, I would readily welcome this.

Challenges I see:

  • Many applications are going to also need to deploy static assets, which means probably building an nginx image anyway
  • Even with a standalone backend for static assets, I don't believe there's a decent way to write an ingress rule that's roughly equivalent to try_files, which is the common approach for routing requests to php-fpm. In a well-organized application that's hopefully a small fixed number of path prefixes, but still not super-clean. You could of course do this in the application itself, but it's _way_ slower
  • All of the API docs around k8s' Ingress object suggest that it _really_ wants to be using the http protocol - it's right there in the name of the Ingress rules. The ideal case is that k8s.next Ingress has first-class support for fastcgi, but I don't feel that's likely to happen
  • Assuming the above doesn't happen, how does an ingress actually indicate that it should use fastcgi? I'd think an annotation, but that applies across the whole ingress. nginx.ingress.kubernetes.io/fastcgi-backend as a boolean screws up static assets (if they're on a different backend), and a string to indicate the backend could get messy very quickly. Being forced to create multiple ingresses rather than one with different routing rules is sub-optimal.

Despite all this, I'd be really happy to see acceptance of this - the extra hops quickly turn into a performance problem. For better or worse I have quite a lot of experience dealing with PHP-FPM (and by extension, fastcgi) so I can probably give a fair bit of feedback on this. It's only about three lines of nginx config to make it work, assuming the upstream stuff is solvable.

Many applications are going to also need to deploy static assets, which means probably building an nginx image anyway

Yes, this is the way

Assuming the above doesn't happen, how does an ingress actually indicate that it should use fastcgi? I'd think an annotation, but that applies across the whole ingress. nginx.ingress.kubernetes.io/fastcgi-backend as a boolean screws up static assets (if they're on a different backend), and a string to indicate the backend could get messy very quickly.

We already have this https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol
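
For reference, a rough sketch of how that annotation could express this (the FCGI value is what this issue is asking for and is not among the documented values at the time of this discussion; the host and service names are illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: php-app
  annotations:
    # Hypothetical value: the documented values here are HTTP, HTTPS, GRPC, GRPCS and AJP
    nginx.ingress.kubernetes.io/backend-protocol: "FCGI"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: php-fpm   # Service pointing straight at the PHP-FPM pods
              servicePort: 9000      # PHP-FPM's conventional FastCGI port

Because the annotation applies to the whole Ingress, static assets would need a separate backend or a second Ingress, which is the limitation discussed just below.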

Being forced to create multiple ingresses rather than one with different routing rules is sub-optimal.

Actually, this is a limitation of using annotations. We don't have a cleaner way to do this.

"Many applications are going to also need to deploy static assets, which means probably building an nginx image anyway"

Why would it be required to build an additional image just for the sake of packaging assets? Aren't init containers meant for this (copying assets in before the pod starts)?

We already have this https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol

Oh cool, I had no idea. I searched for a bunch of keywords on the page, but they were all related to cgi so I missed it.

Why would it be required to build an additional image just for the sake of packaging assets? Aren't init containers meant for this (copying assets in before the pod starts)?

Maybe it's not necessary. I have basically a two-line Dockerfile that copies public/ into a simple nginx image and serves static assets from there, so that's my thought process. There could certainly be a better way.

In such a small/basic deployment this probably doesn't apply. I'm talking about large, complex PHP-FPM deployments spanning tens or hundreds of replicas, where removing an unnecessary component like the nginx container can have real operational value.

Why would it be required to build an additional image just for the sake of packaging assets? Aren't init containers meant for this (copying assets in before the pod starts)?

The init container for copying the assets might not be workable for many deployments, because the nginx controller usually runs in a separate namespace from the application. Also, the nginx controller is shared by many applications, so it's not clear how it would work. I hate the idea of it, but an NFS mount on the controller might be a solution for some.

Anyway, the use case exists for "pure HTTP" applications that only need php-fpm and have no static assets for nginx to deliver, and it would be nice to be able to remove the extra reverse-proxy layer in those cases.

@owengo There are two possible interpretations of:

Many applications are going to also need to deploy static assets, which means probably building an nginx image anyway

I don't think the suggested solution was to use an init container in nginx-ingress to have it statically serve the data.

I think it was suggested as an alternative to building a container with the content baked in: you could run a pod that is just nginx and use an init container to git clone some content.
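
For example, a rough sketch of that pattern (the image names and the repository URL are illustrative assumptions, not something from this thread):

apiVersion: v1
kind: Pod
metadata:
  name: static-assets
spec:
  initContainers:
    - name: fetch-content
      image: alpine/git
      # Clone the site content into the shared volume before nginx starts
      args: ["clone", "--depth=1", "https://example.com/site-content.git", "/content"]
      volumeMounts:
        - name: content
          mountPath: /content
  containers:
    - name: nginx
      image: nginx:1.15
      volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html   # nginx's default document root
  volumes:
    - name: content
      emptyDir: {}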

I'm actually working on a CSI driver that would let you use a plain image as a volume too, to solve that kind of thing:
https://github.com/kubernetes-csi/drivers/issues/85
It may be possible in k8s 1.12 now that pod info is available, but it may require 1.13 before it works smoothly. I need to port the prototype to the new interfaces and see what happens.

I have also been looking into this as a solution for managing thousands of PHP-based applications on k8s. I think one option might be to build this out using custom templates.

+1

For microservice backends written in PHP, the need to bake in nginx really grows the image size. Having something like this would be very helpful and would truly separate deployment concerns.

We're using a shared volume between the php-fpm container and the nginx container. This way you don't have to bake nginx into your php-fpm image.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{CF_REPO_NAME}}
  labels:
    app: {{CF_REPO_NAME}}
spec:
  replicas: 1
  revisionHistoryLimit: 5
  template:
    metadata:
      name: {{CF_REPO_NAME}}
      labels:
        app: {{CF_REPO_NAME}}
        tier: web

    spec:
      containers:
        # [START app container]
        - name: {{CF_REPO_NAME}}
          image: {{BUILD_IMAGE}}
          imagePullPolicy: Always
          envFrom:
          - configMapRef:
              name: environment-variables
          ports:
          - name: web
            containerPort: 8080
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "/app/lifecycle-exec-command.sh"]
          livenessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 15
            timeoutSeconds: 15
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
          volumeMounts:
            - name: app-oauth-keys
              mountPath: /app/storage/oauth-keys
            - name: shared-files
              mountPath: /var/www/html
        # [END app container]

        # [START nginx container]
        - name: nginx
          image: nginx:1.7.9
          volumeMounts:
            - name: shared-files
              mountPath: /var/www/html
            - name: nginx-config-volume
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
        # [END nginx container]

      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: cloudsql
          emptyDir: {}
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: app-oauth-keys
          secret:
            secretName: app-oauth-keys

        # Create the shared files volume to be used by both containers for the code base
        - name: shared-files
          emptyDir: {}

        # Add the nginx ConfigMap we declared above as a volume for the pod
        - name: nginx-config-volume
          configMap:
            name: nginx-config
      # [END volumes]

lifecycle-exec-command.sh

#!/bin/bash
# php-fpm and nginx have to have access to a volume which contains the code.
# this volume is mounted at /var/www/html and, in the deployment.yaml can be
# seen as a volumeMount called 'shared-files'.

# this script copies the code to the shared volume and makes it readable by the
# standard user for php and nginx - www-data.

cp -r /app/. /var/www/html
chown -R www-data:www-data /var/www/html 
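
The nginx-config ConfigMap mounted by the nginx container above isn't shown in the comment; a minimal sketch of what it could contain, assuming php-fpm in the app container listens on the conventional 127.0.0.1:9000 and the code is served from /var/www/html (both assumptions; the deployment's probes check port 8080, so adjust fastcgi_pass to wherever your php-fpm actually listens):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    worker_processes 1;
    events { worker_connections 1024; }
    http {
      include /etc/nginx/mime.types;
      server {
        listen 80;
        root /var/www/html;               # the shared-files volume
        index index.php;
        location / {
          # Send anything that isn't a real file to the PHP front controller
          try_files $uri $uri/ /index.php?$query_string;
        }
        location ~ \.php$ {
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_pass 127.0.0.1:9000;    # php-fpm in the app container (same pod)
        }
      }
    }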

+1
It would be very helpful for my microservices written in PHP.

@mooperd have you been able to get things like this working: http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/

This would let you avoid the source-copy step and instead package the correct dependencies into your PHP containers.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale


/remove-lifecycle stale

We have a complete solution for this feature request; we just have to finish the docs and we'll open a PR.

@cdemers Let me know if I can be of any assistance regarding the docs or anything else.

This is a much needed feature. Thanks for the awesome work. :+1:

Amazing. Can't wait to test. Thanks all!
