Let's Encrypt has a limit of 100 names per certificate. This means that a mail server at mail.example.org with autoconfig.example.org and autodiscover.example.org could only have a maximum of 48 additional domains added with autoconfig.* and autodiscover.* subjectAltNames.
The situation will improve somewhat when Let's Encrypt supports wildcards in January 2018; then there could be a maximum of 100 domains per cert.
However, I have a mail server with 200+ domains that I'm in the process of migrating to Mailcow, and I'd like to help add support for running Mailcow with multiple certs as a workaround for this issue.
Basically, you'd need to automatically modify data/conf/nginx/site.conf to contain one server section for the first 48 domains plus the mailcow hostname, and additional server sections for 50 further domains each. The ACME script would need to be modified to request certificates containing only the SANs for one server section each. Wildcard certificates "only" save you a factor of two, so you might just as well do it now without wildcard certificates, since you'd need a total of 400+ SANs anyway.
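To illustrate the partitioning, here is a quick sketch of chunking a domain list into groups of 50, one group per server section (50 domains times autoconfig.* plus autodiscover.* fills a 100-SAN certificate). The 120 generated placeholder names stand in for the real list; this is not how the mailcow scripts currently work:

```shell
# Sketch only: chunk a flat domain list into groups of 50, one group
# per nginx server section / certificate. The 120 generated names
# stand in for the real list from the mailcow database.
DOMAINS=$(seq -f "domain%g.example" 1 120)
echo "$DOMAINS" | xargs -n 50 | while read -r chunk; do
  echo "server section: $(echo "$chunk" | wc -w) domains"
done
```

With 120 domains this prints three server sections (50, 50 and 20 domains).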
Unfortunately, Nginx config is currently static in Mailcow and cannot easily be made such that it reads from the database.
My suggestion would be to simply use a reverse proxy to deal with the large number of domains. Modify the ACME script (data/Dockerfiles/acme/docker-entrypoint.sh) to set $VALIDATED_CONFIG_DOMAINS to be empty if there are more than 48 domains. Then, configure Apache or Nginx as a reverse proxy with one site that serves the mailcow hostname and anything in the $ADDITIONAL_SAN variable. Configure further sites for 50 domains each, each of them with autoconfig and autodiscover entries in the ServerAlias/server_name field. Now you can use Certbot to automatically request the certificates for the autoconfig and autodiscover domains and have the ACME container deal with the mailcow hostname's certificate only.
Actually, there is also another solution that wouldn't require so many certificates:
So to solve your problem, set the autodiscover DNS records as described above and modify data/Dockerfiles/acme/docker-entrypoint.sh to not add any domain-specific SANs if there are more than 48 domains. To do that, add something like the following after line 141:
if [ "${#ALL_VALIDATED[@]}" -gt 100 ]; then
    ALL_VALIDATED=($(echo ${ADDITIONAL_VALIDATED_SAN[*]} ${VALIDATED_MAILCOW_HOSTNAME} | xargs -n1 | sort -u | xargs))
fi
Some clients are fine with only an SRV record, some _need_ an A/AAAA record. If you know you are fine with only an SRV record, this is good.
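For reference, the autodiscover SRV record being discussed would look something like this in a zone file (the hostname, TTL, and weight are placeholders; priority 0, weight 1 and port 443 follow the usual autodiscover convention):

```
_autodiscover._tcp.example.org. 3600 IN SRV 0 1 443 mail.example.org.
```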
Autoconfig will be supported by more clients in the future; I don't know if some already need HTTPS.
With a Mailcow server with lots of domains (one or two hundred), I'd suggest that the best option might be to have an Nginx server configuration for the domain name used for the Mailcow admin interface, which shares a TLS cert with Postfix and Dovecot, and which is kept separate from the certs needed for the Nginx server configurations for autoconfig.* and autodiscover.*.
This would have the advantage that when adding (or removing) a domain on the server you wouldn't have to restart Postfix and Dovecot, as their cert would remain unchanged; only Nginx would need restarting. That can be done without causing problems for clients, since HTTP is stateless and the restart is quick, so it shouldn't cause issues for the Mailcow admin interface or SOGo. (There also isn't really an issue restarting Postfix, but Dovecot restarts really need to be avoided, as some (all?) IMAP clients need to reconnect; Mutt needs restarting.)
Furthermore, to make it scale in a way that avoids the Let's Encrypt limit of 100 subjectAltNames, how about having one cert per domain, e.g. containing autoconfig.example.org and autodiscover.example.org? This would mean that when domains are added to or removed from the server it is only necessary to remove or get new certs for the domains that are being changed, rather than getting new certs for up to 50 domains.
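A sketch of what per-domain issuance might look like with acme.sh (the domain list, webroot path, and use of acme.sh itself are assumptions for illustration, not how the mailcow acme container currently works); the loop just prints the commands it would run:

```shell
# Hypothetical per-domain issuance: one cert per mail domain, covering
# only that domain's autoconfig/autodiscover names. Commands are
# echoed rather than executed so they can be reviewed first.
DOMAINS="example.org example.net"
for d in $DOMAINS; do
  echo acme.sh --issue -w /var/www/html \
    -d "autoconfig.${d}" -d "autodiscover.${d}"
done
```

Adding or removing a domain then only touches that domain's cert, leaving all the others (and the Postfix/Dovecot cert) alone.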
Have I explained that in an understandable way, and does it make sense?
If people think that the approach I've suggested above (separate certs for the autoconfig and autodiscover sub-domains) makes sense then, with some help, I could have a go at getting this working...
Could the enhancement label be added to this ticket? I still haven't seen a better idea than having one cert per domain, e.g. containing autoconfig.example.org and autodiscover.example.org, for servers that have more than 49 domains.
I had a productive chat with @andryyy about this on IRC last night and we agreed that the best way to do this would be to use an HTTP reverse proxy in front of Mailcow, on a different IP address, for all the autoconfig.* and autodiscover.* sub-domains. The only change that would then be needed in Mailcow would be the ability to disable adding the autoconfig.* and autodiscover.* sub-domains to the main Mailcow Let's Encrypt cert.
With this approach the only relevant Let's Encrypt rate limit would be the IP rate limit:
You can create a maximum of 10 Accounts per IP Address per 3 hours. You can create a maximum of 500 Accounts per IP Range within an IPv6 /48 per 3 hours.
I'm writing an Ansible Playbook to configure a minimal Nginx server for this role and I'll link to it from here when it is working.
This clearly isn't ideal, but I have written a basic Ansible playbook to set up an Nginx server to act as a reverse proxy. It installs a Bash script which can be used to add domains (I might switch to using Ansible for this) and it seems to work, but I think the ability to stop Mailcow from trying to get certs for autodiscover.* / autoconfig.* would still be needed?
So I have this all working now, but I have also discussed it further with @andryyy on IRC, and hopefully the functionality will be incorporated into the acme container at some point in the future so that the workaround I have implemented won't be needed. The Nginx config I'm generating looks like this:
# example.org
server {
    listen 80;
    server_name autoconfig.example.org;
    root /var/www/html;
    location /.well-known/acme-challenge/ {
        allow all;
    }
    location / {
        proxy_pass http://example.com/;
    }
}
server {
    listen 80;
    server_name autodiscover.example.org;
    root /var/www/html;
    location /.well-known/acme-challenge/ {
        allow all;
    }
    location / {
        return 301 https://autodiscover.example.org$request_uri;
    }
}
server {
    listen 443;
    server_name autoconfig.example.org autodiscover.example.org;
    ssl on;
    ssl_certificate /etc/ssl/le/example.org.chain.pem;
    ssl_certificate_key /etc/ssl/le/example.org.key.pem;
    location / {
        proxy_pass http://example.com/;
    }
}
A word of warning to anyone else who implements this: if you use an MUA which doesn't react well to an IMAP server being restarted (for example the version of NeoMutt in Debian Stretch) then the resulting daily restarting of all the containers that use certs is rather annoying. I'm guessing that when the auto*.* sub-domains don't point to the Mailcow IP address this causes the acme container to attempt to get a new cert every day, and this results in the restarts?
Hm, we talked about the many downsides and work this will cause. There is much more scripting to it (including checks) that needs to be done.
The easiest option is using a reverse proxy.
The reverse proxy I set up with the Nginx config above appears to simply return redirects rather than content, for example:
lynx -head -dump "https://autoconfig.webarch.co.uk/.well-known/autoconfig/mail/config-v1.1.xml" | grep Location
Location: https://webarch.email/.well-known/autoconfig/mail/config-v1.1.xml
Does anyone have an idea why this might be the case? I guess it is something in the main Nginx site.conf?
Also, it has occurred to me that since the reverse proxy is on the same subnet as the Mailcow server, and we have whitelisted the subnet for fail2ban, this reverse proxy provides a convenient way to bypass fail2ban?
So, it was my mistakes with the Nginx config that were causing the problems. I think the following is now a correctly working example that proxies requests for autoconfig.webarch.net and autodiscover.webarch.net to webarch.email, but only for the things that need to be proxied; everything else is redirected to config.webarch.email. Have I missed anything here?
# Redirect all port 80 requests to 443 apart from ones needed by Let's Encrypt
server {
    listen 80;
    server_name autoconfig.webarch.net;
    root /var/www/html;
    location ^~ /.well-known/acme-challenge/ {
        allow all;
        default_type "text/plain";
    }
    location / {
        return 301 https://autoconfig.webarch.net$request_uri;
    }
}
# Redirect all port 80 requests to 443 apart from ones needed by Let's Encrypt
server {
    listen 80;
    server_name autodiscover.webarch.net;
    root /var/www/html;
    location ^~ /.well-known/acme-challenge/ {
        allow all;
        default_type "text/plain";
    }
    location / {
        return 301 https://autodiscover.webarch.net$request_uri;
    }
}
# Redirect everything apart from URLs starting with /.well-known/autoconfig and proxy those ones
server {
    listen 443;
    server_name autoconfig.webarch.net;
    ssl on;
    ssl_certificate /etc/ssl/le/webarch.net.chain.pem;
    ssl_certificate_key /etc/ssl/le/webarch.net.key.pem;
    location /.well-known/autoconfig {
        proxy_pass https://webarch.email/;
    }
    location / {
        return 301 https://config.webarch.email/;
    }
}
# Redirect everything apart from URLs starting with /Autodiscover and proxy those ones
server {
    listen 443;
    server_name autodiscover.webarch.net;
    ssl on;
    ssl_certificate /etc/ssl/le/webarch.net.chain.pem;
    ssl_certificate_key /etc/ssl/le/webarch.net.key.pem;
    location /autodiscover {
        proxy_pass https://webarch.email/;
    }
    location /Autodiscover {
        proxy_pass https://webarch.email/;
    }
    location / {
        return 301 https://config.webarch.email/;
    }
}
I'm stuck: the autoconfig stuff above works OK, but not the autodiscover; for example this is a 404:
curl -I https://autodiscover.webarch.co.uk/autodiscover/autodiscover.xml
HTTP/2 404
server: nginx/1.10.3
date: Thu, 22 Feb 2018 14:57:19 GMT
content-type: text/html; charset=utf-8
content-length: 162
I have tried all sorts of things, including suggestions like those here, but no joy. Does anyone know how to set up a reverse Nginx HTTPS proxy to an HTTPS Nginx server that passes through HTTP authentication?
My last attempt:
# Note: more_set_input_headers / more_set_headers require the
# headers-more-nginx-module
location /Autodiscover/ {
    proxy_http_version 1.1;
    proxy_pass_request_headers on;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    more_set_input_headers 'Authorization: $http_authorization';
    proxy_set_header Accept-Encoding "";
    proxy_pass https://webarch.email/;
    proxy_redirect default;
    more_set_headers -s 401 'WWW-Authenticate: Basic realm="$http_host"';
}
location /autodiscover/ {
    proxy_http_version 1.1;
    proxy_pass_request_headers on;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    more_set_input_headers 'Authorization: $http_authorization';
    proxy_set_header Accept-Encoding "";
    proxy_pass https://webarch.email/;
    proxy_redirect default;
    more_set_headers -s 401 'WWW-Authenticate: Basic realm="$http_host"';
}
This seems to work, though I haven't yet tested it with a POST:
location /autodiscover/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass https://webarch.email$request_uri;
}
I'm not sure why $request_uri is needed with proxy_pass for /autodiscover but doesn't appear to be for /.well-known/autoconfig. I suspect it is because proxy_pass with a URI part (the trailing slash in https://webarch.email/) replaces the matched location prefix, so /autodiscover/autodiscover.xml would be forwarded upstream as /autodiscover.xml; appending $request_uri forwards the full original path instead.
It works with a POST for example with a post.xml file containing:
<Autodiscover
xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006">
<Request>
<EMailAddress>[email protected]</EMailAddress>
<AcceptableResponseSchema>
http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a
</AcceptableResponseSchema>
</Request>
</Autodiscover>
And the request:
curl --data @post.xml -u "[email protected]:PASSWORD" https://autodiscover.webarch.co.uk/autodiscover/autodiscover.xml
So sorry for all the noise in this thread, but I appear to have got there in the end... :-)
I have a separate virtual server (separate from the Mailcow virtual server) acting as a reverse proxy for the auto*.* sub-domains, as a workaround for the Let's Encrypt rate limit of 100 names per certificate and the consequent Mailcow limit of 49 domains per server if you are using the acme container for the server certificate. I also have the situation where the whole subnet that the auto*.* reverse proxy is on is whitelisted in the Mailcow fail2ban config, because an Icinga monitoring server kept causing the subnet to be blocked.
I would expect that other people using reverse proxies on remote servers also have to whitelist their reverse proxy IP addresses, due to the potential for malicious actions causing the reverse proxy to be blocked by fail2ban on the Mailcow server (for example a series of login requests designed to fail).
However, when the reverse proxy is whitelisted it can then be used for brute force attempts against the Mailcow server, using HTTP authentication to an autodiscover.* sub-domain, without triggering a fail2ban block. I have tested this using curl, for example:
curl -I -u "[email protected]:TESTFAIL" https://autodiscover.example.org/autodiscover/autodiscover.xml
To mitigate this attack vector on the reverse proxy, you can configure Nginx to write a simple fail2ban.log file; in /etc/nginx/nginx.conf, under the logging settings:
log_format fail2ban '$remote_addr $status [$time_local]';
access_log /var/log/nginx/fail2ban.log fail2ban;
Then create a /etc/fail2ban/filter.d/nginx.conf file containing:
[Definition]
failregex = ^<HOST> 401
ignoreregex =
And a /etc/fail2ban/jail.local file containing:
[DEFAULT]
ignoreip = 127.0.0.1
findtime = 600
bantime = 86400
banaction = iptables-multiport
logpath = /var/log/auth.log
[nginx]
enabled = true
filter = nginx
port = http,https
maxretry = 5
logpath = /var/log/nginx/fail2ban.log
And then, when brute force attempts are made against the reverse proxy, the 401s returned by the Mailcow server cause the reverse proxy to block the remote IP addresses.
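To sanity-check the failregex offline, you can fabricate a line in the log_format above and match it with an equivalent pattern (fail2ban's <HOST> tag corresponds roughly to the IP field here; the sample address is just a documentation IP):

```shell
# Fabricate one request line in the custom fail2ban log_format
# ('$remote_addr $status [$time_local]') and check that a 401 from a
# given IP would match a pattern equivalent to the failregex above.
line='203.0.113.7 401 [26/Feb/2018:14:37:00 +0000]'
echo "$line" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+ 401' && echo matched
```

On the proxy itself, `fail2ban-regex /var/log/nginx/fail2ban.log /etc/fail2ban/filter.d/nginx.conf` can test the real filter against the real log.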
I have an Ansible Playbook to configure the reverse proxy I'm running, and people are free to copy this config, but some specific things would probably need changing (for example the DNS resolvers and the default index.html page for the server).
Add the reverse proxy to "set_real_ip_from" and _don't_ whitelist your RP. Default is to trust internal networks, your RP is external, so you need to add it.
@andryyy thanks, I have added a set_real_ip_from line to the list and restarted the Nginx and fail2ban containers, but I'm unable to remove the subnet from the whitelist, as we have an Icinga monitoring server on the same subnet which checks that the IMAP, SMTP and HTTPS services are available, and this was causing the Mailcow server to block the whole subnet all the time.
You can set IPs instead of subnetworks; your monitoring is probably not on the same server as your reverse proxy, right?
You may also want to use authenticated monitoring or just connect to the port.
I don’t see any issue with mailcow here. :-/
On 26.02.2018 at 14:37, Chris Croome notifications@github.com wrote:
I tried simply adding the IP address of the Icinga monitoring server and removing the subnet; however, then when making an autodiscover.* request with incorrect credentials to the reverse proxy, it did trigger the Mailcow fail2ban:
curl -I -u "[email protected]:FAKEPASSWD" https://autodiscover.webarch.co.uk/autodiscover/autodiscover.xml
fail2ban-mailcow_1 | 81.95.52.45 matched rule id 6
fail2ban-mailcow_1 | 9 more attempts in the next 600 seconds until 81.95.52.0/24 is banned
So I don't see an alternative to having the reverse proxy whitelisted. If the reverse proxy isn't whitelisted, then anyone can send 10 incorrect requests to it in order to trigger the Mailcow server to block it and therefore prevent it from working, or am I missing something here?
The things I posted today, above, all relate to configuring fail2ban on the reverse proxy so that it can't be used for brute force attempts, so I agree that it isn't directly related to Mailcow, and since I now have a solution to the Let's Encrypt subjectAltName limit I'd be happy for this ticket to be closed.
I've run into the same issue today: more than 100 domains.
How would this prevent mailcow / acme.sh from requesting a certificate for a domain that has been set up?
Edit
Got it.
Wouldn't it be useful for the following to be part of the acme container itself?
So to solve your problem, set the autodiscover DNS records as described above and modify data/Dockerfiles/acme/docker-entrypoint.sh to not add any domain-specific SANs if there are more than 48 domains. To do that, add something like the following after line 141:
if [ "${#ALL_VALIDATED[@]}" -gt 100 ]; then
    ALL_VALIDATED=($(echo ${ADDITIONAL_VALIDATED_SAN[*]} ${VALIDATED_MAILCOW_HOSTNAME} | xargs -n1 | sort -u | xargs))
fi
Currently it would break the mailcow application if it goes over 48 domains (with HSTS, some users wouldn't be able to access the page anymore).
Something like: if there are more than 48 domains, just try to get the certificate for the mailcow hostname (and 48 domains that have a mailbox).
while read domains; do
    SQL_DOMAIN_ARR+=("${domains}")
done < <(mysql -h mysql-mailcow -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT domain FROM domain WHERE backupmx=0 UNION SELECT alias_domain FROM alias_domain" -Bs)
Why is it important to include the alias domains in the certificate request?
Isn't it just important for 'mailbox domains' that they have a valid autodiscover / autoconfig config?
The query for the LE request could be like this:
select
    d.domain
from
    domain d
    left join (
        select domain, count(username) as mailboxes
        from mailbox
        group by domain
    ) m on m.domain = d.domain
where
    m.mailboxes > 0
That will only list domains that have a mailbox.
FIDDLE:
https://www.db-fiddle.com/f/3B7ioKLawy19H2LSTbv7xB/0
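If this were wired into the acme container, the entrypoint loop quoted earlier might use the mailbox-only query instead. This is only a sketch (the container name, variables and table layout are taken from the snippets above; the command is echoed rather than executed, since it needs the mailcow database):

```shell
# Hypothetical replacement for the domain query in
# data/Dockerfiles/acme/docker-entrypoint.sh: select only domains with
# at least one mailbox. Echoed, not run.
QUERY="SELECT d.domain FROM domain d LEFT JOIN (SELECT domain, COUNT(username) AS mailboxes FROM mailbox GROUP BY domain) m ON m.domain = d.domain WHERE m.mailboxes > 0"
echo "mysql -h mysql-mailcow -u \${DBUSER} -p\${DBPASS} \${DBNAME} -e \"${QUERY}\" -Bs"
```

The alias_domain UNION from the original query is simply dropped, per the discussion above.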
If you have more than 48 domains on a Mailcow server and want auto*.* Let's Encrypt certs, then the only thing you can do currently is to implement a reverse proxy for the auto*.* domains and certs; there is an Ansible Playbook to configure this, and the key file is this Bash script for adding domains.
Thanks!
But why exactly is there a need to get certs for alias domains and domains that don't have a mailbox?
@develth, I think you are right. They are not needed for alias domains.
Alias domains can be dropped, yes.
@mkuron - I want to drop autoconfig from ACME with the next push. Do you agree? I already updated the reverse proxy configuration examples.
Yes, I think that is fine. Thunderbird uses an unencrypted connection for autoconfig. Just keep the SAN for autodiscover.
But the additional SAN will still work and create certs?
Yes
Thunderbird uses an unencrypted connection for autoconfig.
I have just checked this using tcpflow and can confirm that this is the case; however, it is also the case that if there is an HTTP redirect to port 443 then Thunderbird follows it and works fine, so there isn't an issue with serving the data via HTTPS.
Of course there isn't much to gain in terms of privacy with an HTTP redirect, since the email address is contained in the GET request. This has been discussed on Bugzilla, see Bug 986967 (autoconfig mechanism should use HTTPS URLs), but I'm afraid it is currently set to WONTFIX because:
ISPs need http, because they can't put up certificates for every single domain customer, and only for autoconfig.
So I can understand not having a cert for the autoconfig.* sub-domains but it still doesn't feel right to me not to encrypt everything we can... so I'd still favour having one cert for each domain for the autoconfig.* and autodiscover.* sub-domains as suggested near the top of this thread.
Dropping certs for autoconfig.* only gives you so much more room for additional domains; once a Mailcow server has 100 or more domains, a reverse proxy is still going to be needed for the autodiscover.* domains, and this is the situation I'm in with one server.
As mentioned above, I have set up a small virtual server just as a reverse proxy for autoconfig.* and autodiscover.*, and it currently has 109 domains on it. Currently this is configured using Ansible and Bash; ideally I think this could be done using a Docker container and be part of the Mailcow project.
There is no reason to encrypt autoconfig.
The whole one-LE-account-per-domain just does not have a high priority. Most of these setups are resellers, who can sponsor this to support mailcow development. ;-)
As long as there is a workaround by using a reverse proxy, the priority is low. This is a very time-consuming feature request.
I think that with this merge this issue can be closed; thanks for all the work @mhofer117 and others have done on this :slightly_smiling_face:.