Mailcow-dockerized: I can't get my SSL certificates to work

Created on 24 Mar 2017 · 14 Comments · Source: mailcow/mailcow-dockerized

Everything works and it was very easy to install. Only getting the certificates to work is giving me some difficulty.

There is also this part of the documentation I can't understand:

mv data/assets/ssl/cert.{pem,pem.backup}
mv data/assets/ssl/key.{pem,pem.backup}
ln $(readlink -f /etc/letsencrypt/live/${MAILCOW_HOSTNAME}/fullchain.pem) data/assets/ssl/cert.pem
ln $(readlink -f /etc/letsencrypt/live/${MAILCOW_HOSTNAME}/privkey.pem) data/assets/ssl/key.pem

First we rename the cert.pem and key.pem files. And then we try to link them?
Could someone explain this to me?

support

All 14 comments

First you have to either generate a certificate yourself or order one from a certificate authority.

After that you rename the snakeoil certificates. These were generated by default; they are not trusted, but usable if you just need encrypted communication. Either cd into your mailcow directory or use the full path to the certs.

cd /dir/to/mailcow
mv data/assets/ssl/cert.{pem,pem.backup}
mv data/assets/ssl/key.{pem,pem.backup}

As you have already generated or bought valid certificates, you want to link them (either hard- or softlink) so mailcow can use them. If you are still operating out of the mailcow folder, use the commands below to hardlink the certificate (always the most recent one, because letsencrypt's live directory only contains softlinks into the /archive folder); otherwise you have to use the full path to your mailcow dir in the second argument.

You can drop the "source mailcow.conf" command and instead substitute the FQDN for ${MAILCOW_HOSTNAME}.

source mailcow.conf
ln $(readlink -f /etc/letsencrypt/live/${MAILCOW_HOSTNAME}/fullchain.pem) data/assets/ssl/cert.pem
ln $(readlink -f /etc/letsencrypt/live/${MAILCOW_HOSTNAME}/privkey.pem) data/assets/ssl/key.pem

If you are not working as root, you have to use sudo to create the links:

source mailcow.conf
sudo ln $(sudo readlink -f /etc/letsencrypt/live/${MAILCOW_HOSTNAME}/fullchain.pem) data/assets/ssl/cert.pem
sudo ln $(sudo readlink -f /etc/letsencrypt/live/${MAILCOW_HOSTNAME}/privkey.pem) data/assets/ssl/key.pem

Hope this helped

EDIT: Oh, and don't forget to restart the containers:

docker-compose restart postfix-mailcow dovecot-mailcow nginx-mailcow

Thanks for your answer. It helped me.

The problem was that my home folder is on a different partition than my root folder, so I couldn't make a hard link. I used symbolic links instead.
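
For reference, a minimal sketch of the symlink variant, assuming the same paths as in the commands above (hard links cannot cross filesystem boundaries, so ln fails with "Invalid cross-device link"; note, though, that a later comment in this thread points out the symlink target must also be resolvable inside the containers):

cd /dir/to/mailcow
source mailcow.conf
ln -s /etc/letsencrypt/live/${MAILCOW_HOSTNAME}/fullchain.pem data/assets/ssl/cert.pem
ln -s /etc/letsencrypt/live/${MAILCOW_HOSTNAME}/privkey.pem data/assets/ssl/key.pem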

Can anyone tell me why hardlinks are used for the certificates?
It seems rather inconvenient to me: update the cert, remove the link, create the link, and restart the containers.

There is no need to remove the link, because it's a link, not a copy. I don't really understand why the manual mentions this.

Each hardlink points to the same inode (i.e. the same data). If you change the data through one, the others reflect the same change. Nowadays it's rather uncommon to use hardlinks, because you have to delete every one of them if you really want to delete a file (EDIT: and for other reasons).

I would use a symbolic link instead to make life easier.

But it could be that I am overlooking something here.

EDIT: some containers (e.g. nginx) must be restarted to pick up the correct certificate. As far as I know, at least nginx must be restarted to serve the correct data.

EDIT: great explanation http://askubuntu.com/questions/108771/what-is-the-difference-between-a-hard-link-and-a-symbolic-link
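
A quick demonstration in an empty directory (hypothetical file names) of how the two link types behave:

echo test > original.txt
ln original.txt hard.txt       # hard link: a second name for the same inode
ln -s original.txt soft.txt    # symlink: a new inode that stores the path "original.txt"
ls -li                         # original.txt and hard.txt show the same inode number
rm original.txt
cat hard.txt                   # still prints "test" -- the data lives on
cat soft.txt                   # fails: the symlink now dangles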

Why is there no built-in ACME client in mailcow-dockerized? I would recommend using dehydrated: it's simple to configure, works like a charm, and is fully functional via cron. Inside mailcow-dockerized it could fetch new certs, and these could be shared between the containers using a shared volume, I guess.
The benefit would be valid certs out of the box and no hassle with the host/container barrier. I use dehydrated with nginx in a setup where nginx does not need to restart to get the certs: the challenge and verification process just goes through nginx, and afterwards an nginx reload is enough to put the new certs into use. Dehydrated re-reads its domains.txt whenever it refreshes the certs, so there is zero maintenance for dehydrated even if domains are added or removed.
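
To illustrate, a hedged sketch of such a hook following dehydrated's documented hook interface (the mailcow path and container names are assumptions taken from this thread):

#!/usr/bin/env bash
# dehydrated calls its hook script with the handler name as the first argument;
# for deploy_cert it passes: domain, keyfile, certfile, fullchainfile, chainfile, timestamp
case "$1" in
  deploy_cert)
    keyfile="$3" fullchainfile="$5"
    cp "$fullchainfile" /dir/to/mailcow/data/assets/ssl/cert.pem
    cp "$keyfile" /dir/to/mailcow/data/assets/ssl/key.pem
    cd /dir/to/mailcow
    docker-compose restart postfix-mailcow dovecot-mailcow nginx-mailcow
    ;;
esac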

Symbolic links to, for example, /etc/letsencrypt/live/mail.example.org/something.pem will result in Nginx failing to start:

 nginx: [emerg] BIO_new_file("/etc/ssl/mail/cert.pem") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/ssl/mail/cert.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)

What happens next is ERROR: for rspamd-mailcow Container "XY" is unhealthy due to failing health checks.

Also: mailcow does not come with an LE client, as this is something you decide on for yourself. You also need to accept LE's ToS and register with a valid email address. So this is nothing I plan to (re-)add to mailcow anytime soon. :-(

Weird because it is working perfectly fine for me for host/container nginx...

Why is that a problem, if people have to agree to the Terms of Service and register with an email? Of course it should be optional. But then it's just like DKIM: it relies on external config/services. On the old mailcow there was no barrier between the mailcow services and the host system, so it worked just fine to symlink the mailcow certs to the symlink of the LE client, which in turn pointed at the most recent cert. That does not work here, probably because the symlink points outside the folders accessible to the respective containers. So a hardlink is necessary. But I can't hardlink the symlink to the most recent cert, because that again would point somewhere outside the storage accessible to the containers. That's why the install tutorial uses readlink -f, I guess. So we have a hardlink to the most recent cert file, which is then brought inside the containers via overlayfs, and we have to restart and recreate the containers to get a new cert inside.
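
To make the readlink -f step concrete (hostname hypothetical): the files under /etc/letsencrypt/live/ are themselves symlinks into the archive folder, so they are resolved to the real file before hard-linking:

readlink -f /etc/letsencrypt/live/mail.example.org/fullchain.pem
# -> /etc/letsencrypt/archive/mail.example.org/fullchain1.pem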

If there were an integrated LE client, the containers could all renew the certs on their own after the initial setup, without any hassle for the admin and even without a restart, as a reload would be enough. What I am trying to say is that the cut in needed maintenance definitely makes it worth thinking about integrating an LE client, or a more elegant way of sharing certs between host and containers. And as far as I know, there is probably a huge overlap between people who use LE and people who use mailcow ;)

  1. Most people seem to use a reverse proxy.
  2. The hard link "problem" is hardly a problem. Every LE client should be able to run a post hook (see the sketch further below).
  3. Not everyone uses LE.
  4. These are not virtual machines. You cannot (or, well, should not) run a background service in a Docker container (sucks that I am doing this in SOGo actually).
  5. A container cannot be reloaded .. well, you can run something like docker-compose exec nginx-mailcow nginx -s reload, but it is easier to just restart them. :-)

I can extend the docs to explain the post hook stuff. I just don't see the problem here, sorry. :-(
The Nginx configuration is pre-configured for an easy implementation.
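
As a hedged sketch of the post hook idea, assuming certbot as the LE client and the paths used earlier in this thread: a renewal hook could refresh the hard links (they go stale after a renewal, because the live symlinks then point at new archive files) and restart the containers.

certbot renew --post-hook "/usr/local/bin/mailcow-renew-hook.sh"

Where mailcow-renew-hook.sh (hypothetical name) contains:

#!/usr/bin/env bash
cd /dir/to/mailcow
source mailcow.conf
# remove the stale hard links, then re-link the freshly renewed cert files
rm -f data/assets/ssl/cert.pem data/assets/ssl/key.pem
ln $(readlink -f /etc/letsencrypt/live/${MAILCOW_HOSTNAME}/fullchain.pem) data/assets/ssl/cert.pem
ln $(readlink -f /etc/letsencrypt/live/${MAILCOW_HOSTNAME}/privkey.pem) data/assets/ssl/key.pem
docker-compose restart postfix-mailcow dovecot-mailcow nginx-mailcow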

Maybe you can leverage this guy as an optional choice in the docker-compose?

https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion
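
For context, a hedged sketch following that project's README: the companion runs next to jwilder/nginx-proxy and obtains certs for any proxied container started with the right environment variables (hostnames and addresses here are hypothetical):

docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  --volumes-from nginx-proxy \
  jrcs/letsencrypt-nginx-proxy-companion
# proxied containers then set, e.g.:
#   VIRTUAL_HOST=mail.example.org
#   LETSENCRYPT_HOST=mail.example.org
#   LETSENCRYPT_EMAIL=postmaster@example.org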

@andryyy
Re 1: I do indeed, so the WebUI is accessible with valid certs. But the default iOS mail client, for example, doesn't accept invalid certs anymore; at least that was a complaint from one of my users. So the certs are used for communication between client and IMAP server too. I guess nginx could reverse-proxy this as well. But the communication between my mail server and other mail servers is also encrypted using these certs, I guess? To cut it down: is it possible to reverse-proxy the complete communication that relies on these certs, not only the WebUI?

Re 2: that's true, though. And even if a client can't run a post hook, if you use e.g. cron to run it, you could also use cron to restart the containers.

Re 3: Yes, but no. As already stated, the overlap between people who run a mail server in a semi-professional manner and people who care about encryption but don't want to spend more money than necessary on it is probably quite high.

Re 4: Out of interest, why is it considered bad to run additional daemons inside Docker containers?

Re 5: True too.

Please don't misunderstand me; I fully respect your choice of what to implement and what not. I just did not fully understand the basis for this choice.

@Braintelligence
That sounds like a cool idea.

@wucke13 I use this together with nginx-proxy to serve web apps by setting environment variables and ports accordingly. It has worked very well so far.
You'd have to tweak the settings, though, to make it work with non-HTTP stuff, using symlinks and whatnot.

I guess this issue is resolved.
Please feel free to reopen this if further assistance is needed.

@wucke13 here is a rundown on point 4: https://devops.stackexchange.com/a/451
