Peertube: Domain change with traefik

Created on 13 Oct 2018 · 16 comments · Source: Chocobozzz/PeerTube

I'm trying to run PeerTube behind a traefik reverse proxy, and I'm getting this:

2018-10-13 15:43:08.295 warn: It seems PeerTube was started (and created some data) with another domain name. This means you will not be able to federate! Please use NODE_CONFIG_DIR=/config NODE_ENV=production npm run update-host to fix this.

I have PEERTUBE_WEBSERVER_HOSTNAME set to my external domain name, and am running traefik with Let's Encrypt ACME - the pod name (the hostname that node recognizes) will always differ from the real name.

Can the dockerfile runner be amended to run npm run update-host every time it starts, please? Or am I missing something else? Cheers

Component: Question

All 16 comments

I'm not sure I understand: your domain name changes every time you start the container?

Well, I run on kubernetes behind traefik proxy, so my ingress HTTP traffic looks like following:

Internetz -> traefik ingress proxy -> kubernetes -> pod (docker containers)

Now the reverse proxy, let's say, has a domain pinned to it like peertube.awesome.com, so it gets an HTTP Host header like this. The problem is that pods (docker containers on kube) each have their own unique, randomized hostname (based on a hash). So I can't figure out why it keeps restarting after printing the message above.

Ping @LecygneNoir who has a kubernetes setup too

Hello @sokoow

On kubernetes, I had this message the first time I started my pod, but only because I migrated the postgres database and video data from another instance (which indeed had a different hostname).

Once I ran update-host a single time, it was patched and the error disappeared, even when the pod restarts or is upgraded.

And on a fresh instance (for dev purposes) with no videos and a new database, I have never seen this error.
For my part I do not use traefik, but I don't think it's related. From what I know about PeerTube's internals, this error is printed at the very launch of the pod, before any request is sent by traefik, so I tend to think there is something wrong with the config, some existing torrent, or the database :)

Are you sure your database and config directories are persistent and not recreated at each launch?
Do you use helm charts to create the pods, or manual deployments?

Thanks for replying, nice to see another kube-head around :) For the moment I just have a bunch of loose yaml manifests, just to prove it works; they'll get templatized soon. Now, the fact that we are discussing this means the original docker image obviously never runs update-host, right? What about rebuilding it so it does on every run? It shouldn't break existing workflows, and it would make kubernetes deployments possible.
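For reference, this proposal amounts to a wrapper entrypoint along these lines - a sketch only, not the image's actual entrypoint; the update-host invocation is taken verbatim from the warning message quoted above, and `npm start` matches the start banner visible in the logs later in this thread:

```shell
#!/bin/sh
# Hypothetical wrapper entrypoint: run update-host before every start.
set -e
NODE_CONFIG_DIR=/config NODE_ENV=production npm run update-host
exec npm start
```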

You're welcome!

Mmmh, I am unsure that relaunching it at each start is a good idea, as the process can take a long time (around 5 hours for my instance with all its videos). Launching it at each start would consume a lot of resources and time :-/

In my setup, I never have to relaunch the update, even if I restart the pod or update it with new images. I have used kubernetes from beta11 through beta16 and now 1.0; the only time I had to launch the update was when I migrated data from my legacy instance to k8s, and after that PeerTube has never complained.

Did the update take time on your instance? Perhaps it has no time to complete before the ssh connection breaks.
Beware that the docker image has 2 persistent storages, one for the videos/logs/torrents and one for the config. If one is lost when you restart a pod, it could cause the problem.

Again, with a fresh installation I do not have the problem, so if you install a totally new instance, with a new database, do you still have this problem?

Are you migrating data from another instance? Or just changing the domain of your existing k8s instance? If we find the cause, I am pretty confident we can solve the problem, as for me it doesn't exist anymore ;-)

I'm not doing anything fancy, just deploying peertube with this:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name:  peertube
  labels:
    app: peertube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: peertube
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: peertube
    spec:
      containers:
      - name: redis
        image: "library/redis"
        imagePullPolicy: Always
        ports:
          - name: redis
            containerPort: 6379
            protocol: TCP
      - name: peertube
        image: "chocobozzz/peertube:production-stretch"
        imagePullPolicy: Always
        env:
        - name: PEERTUBE_DB_HOSTNAME
          value: xxx
        - name: PEERTUBE_DB_USERNAME
          value: xxx
        - name: PEERTUBE_DB_PASSWORD
          value: xxx
        - name: PEERTUBE_REDIS_HOSTNAME
          value: peertube-redis-svc
        - name: PEERTUBE_WEBSERVER_HOSTNAME
          value: peertube.xxx
        - name: PEERTUBE_WEBSERVER_PORT
          value: "80"
        - name: PEERTUBE_WEBSERVER_HTTPS
          value: "false"
        ports:
          - name: svc2pod
            containerPort: 9000
            protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: svc2pod
        readinessProbe:
          httpGet:
            path: /
            port: svc2pod
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        hostPath:
          path: /data/peertube
---
apiVersion: v1
kind: Service
metadata:
  name: peertube-redis-svc
spec:
  selector:
    app: peertube
  ports:
  - name: redis
    port: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: peertube-svc
spec:
  selector:
    app: peertube
  ports:
  - name: svc2pod
    port: 9000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: peertube
spec:
  rules:
  - host: peertube.xxx
    http:
      paths:
      - path: /
        backend:
          serviceName: peertube-svc
          servicePort: 9000

and in peertube logs I got:

$ kubectl logs peertube-597d977456-2c2xr peertube

> [email protected] start /app
> node dist/server

[peertube.puczat.pl:443] 2018-10-23 16:26:30.044 info: Database peertube is ready.
[peertube.puczat.pl:443] 2018-10-23 16:26:30.521 info: Using xxx:487 as SMTP server.
[peertube.puczat.pl:443] 2018-10-23 16:26:30.529 info: Testing SMTP server...
[peertube.puczat.pl:443] 2018-10-23 16:26:30.588 warn: It seems PeerTube was started (and created some data) with another domain name. This means you will not be able to federate! Please use NODE_CONFIG_DIR=/config NODE_ENV=production npm run update-host to fix this.

and it restarts all the time. Obviously kube is running it under the hostname peertube-597d977456-2c2xr, not peertube.xxx - might that be the problem?

Hello,

Normally, the pod's name should not be used as the server name by PeerTube, since PeerTube does not use the server hostname on a legacy (non-docker) installation either. It should only use the PEERTUBE_WEBSERVER_HOSTNAME env variable.

Your deployment seems correct except for one thing: if I read it correctly, you are missing the persistent volume for the configuration.

Indeed, the docker image expects 2 volumes, one for the data (which you have), and one for the config, mounted at /config.

Here is the volume part of my deployment:

        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /config
          name: config
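The matching entries under `volumes:` would then look something like this (the hostPath paths here are illustrative assumptions mirroring the data mount from the deployment above, not values from the thread):

```yaml
      volumes:
      - name: data
        hostPath:
          path: /data/peertube
      - name: config
        hostPath:
          # Illustrative path; any persistent location works.
          path: /data/peertube-config
```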

According to the docker entrypoint.sh, if this mount is missing, it copies the default config at start. I guess that even with the env variables set, something in that config resets the domain, or perhaps PeerTube considers it a fresh instance :thinking:

Could you try adding a persistent volume for /config and see if the problem occurs again?
Of course, you need to launch the NODE_CONFIG_DIR=/config NODE_ENV=production npm run update-host command from inside the pod once it's launched with its config volume :-)
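A one-shot way to do that from outside the pod might look like this (the pod name here is just an example; substitute the one reported by `kubectl get pods`):

```shell
# Run update-host once inside the running peertube container.
kubectl exec -it peertube-597d977456-2c2xr -c peertube -- \
  sh -c 'NODE_CONFIG_DIR=/config NODE_ENV=production npm run update-host'
```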

Tell me if it helps!

Closing, thanks @LecygneNoir for the help!

@LecygneNoir sorry off topic, but how are you handling storage with Kubernetes? It's a bit tricky because Peertube only supports local files. Any recommendations?

@Nutomic you could open an issue about this if you want to discuss it more, but to make it short, I am using two persistent volumes backed by ceph :-)

@LecygneNoir I see, so the data is stored on your own servers? I would definitely be interested if you could describe your setup in more detail.

I'd be interested too :)

Basically I have three identical servers (2x3TB, 16G RAM, Intel i7) which I linked using an OpenVPN setup for a private network (ring-based, to avoid a crash of the VPN server).

On top of this, I set up a k8s cluster (master, etcd and node on the same server; not ideal in terms of security, but acceptable given the cost) using 100GB of one of the 3TB disks.
I also set up a ceph cluster using the remaining disks, with two replicas, getting 16TB of storage.
I use the ceph provisioner for k8s to attach PVs on ceph.
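As a sketch, a claim against such a ceph-backed storage class might look like this (the storageClassName and size are assumptions for illustration, not values from the thread):

```yaml
# Illustrative PVC provisioned by a ceph storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: peertube-data
spec:
  storageClassName: ceph-rbd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
```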

After that, deploying postgres, redis and Peertube is just a matter of yaml :-) (personally, I use helm for deployment)

It's not perfect. I still have some performance problems due to the network configuration, crash detection could be improved, and some assumptions I chose to accept are risky (DNS round robin, CPU-heavy VPN encryption, etc.), but it's still workable :-)

I plan to write some blog articles (in French) about this infra once I have something that convinces me, but I am pretty hard to convince ^^'
And my users still face some very high and random latency when uploading, which needs more work. The basics are there though, and it works :-)

Wow, that's an expensive setup. Do you have all the data on HDDs? Maybe slow database access is the reason for the high latency (just a wild guess).

@Nutomic yes, all data is on HDD, through the ceph storage.

Honestly, ceph gives relatively good speed in terms of IO. I get up to 500Mbps write on the whole cluster for the pool of 6x7200rpm disks, BUT at the same time, the network part introduces random latencies ^^

For example, writing a big file sequentially (typically what PeerTube does when you upload a big video file): 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.3117 s, 464 MB/s

(yes, this is a quick test, I know :-p)
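That quick test can be reproduced with dd along these lines (sizes and paths are illustrative; conv=fdatasync forces a flush before dd reports throughput, which matters on a networked store like ceph):

```shell
# Sequential-write benchmark: write 1 GiB of zeros and flush before
# dd prints the elapsed time and throughput, then clean up.
dd if=/dev/zero of=/tmp/seqwrite.bin bs=1M count=1024 conv=fdatasync
rm -f /tmp/seqwrite.bin
```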

The databases are stored on a replicated postgres cluster, so reads should be quick (as data is available locally), but multiple files and/or concurrent accesses have their own perf problems, and that is indeed my current hypothesis.

From my observations, the lag appears after the upload, when PeerTube runs its internal processing: the file is uploaded, it needs to compute the ID, video information, etc., so the database part is my first candidate for investigation :D

But this is a totally different subject from this traefik issue ^^
