I've followed the instructions in the documentation to set a password on the Rspamd web interface. When I go to the Rspamd web interface, it asks me for a password; however, I can enter any password I want and I will be logged in. Any thoughts? Can anyone reproduce this?
This is not happening for me - is this a fresh install, or one you've had for a while? Latest git + docker images?
That's very likely due to a misconfigured reverse proxy.
The RP passes its own internal IP to Rspamd, which then recognizes the IP as local and grants access. You could check the Nginx logs while calling the URL in a browser.
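For reference, a reverse proxy normally has to hand the original client address to the backend explicitly via forwarding headers; otherwise the backend only ever sees the proxy's internal Docker IP. A minimal sketch of such a proxy block (the upstream address and port are placeholders, adjust to your setup):

```nginx
# Minimal sketch: forward the real client address to the backend.
location / {
    proxy_pass http://127.0.0.1:8080/;
    proxy_set_header Host $http_host;
    # Without these headers the backend sees only the proxy's
    # internal address and may treat every request as "local".
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

Note that the backend must also be configured to trust these headers from the proxy's address (Nginx's realip module); forwarding them alone is not enough.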
Indeed, this could be the reason. My reverse proxy also runs in a Docker container on the same mailcow-dockerized network, so I suppose the request arrives from an internal IP range. How would you suggest I proceed?
Could you fix it?
We need to see your configs if you still need help. :)
I have the same issue. I temporarily fixed it by deleting the following lines from data/conf/rspamd/override.d/worker-controller.inc:
secure_ip = "192.168.0.0/16";
secure_ip = "172.16.0.0/12";
secure_ip = "10.0.0.0/8";
I'm closing this for now because there aren't enough details to troubleshoot this issue.
Hey guys, I've just had this issue too and fixed it right away after reading this.
My Mailcow server is running behind an nginx-proxy, which proxies to the mailcow one.
When Docker creates default networks, their ranges are within 172.16.0.0/12.
The main settings for rspamd should be changed to
secure_ip = "192.168.0.0/16";
secure_ip = "172.22.1.0/24";
secure_ip = "10.0.0.0/8";
secure_ip = "127.0.0.1";
secure_ip = "::1";
secure_ip = "fd4d:6169:6c63:6f77::/64"
because Mailcow's network config is always within the ranges
subnet: 172.22.1.0/24
subnet: fd4d:6169:6c63:6f77::/64
(unless it has been changed manually)
This should be an easy fix, and no one will run into the problem later.
Best regards,
It only behaves like this because you don't pass the real IP to mailcow correctly. The docs explain how to set up the proxy the right way.
Removing the secure IPs will stop auto-learning of spam/ham when moving a mail to/from the Junk folder, because Dovecot is then no longer trusted.
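If the broad ranges are removed, auto-learning could in principle be preserved by trusting only the Dovecot container rather than whole private ranges. A sketch, with a purely hypothetical container address:

```ucl
# Keep localhost trusted for in-container calls.
secure_ip = "127.0.0.1";
secure_ip = "::1";
# Hypothetical: the dovecot-mailcow container's address on the
# mailcow network; adjust to whatever `docker inspect` reports.
secure_ip = "172.22.1.250";
```

This only works reliably as long as the container keeps that address, which is part of why static IPs come up later in this thread.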
I rechecked my proxy and adjusted it to use just the configuration from the docs, but I can't get Let's Encrypt to work now. I can see the challenges passed through the proxy to 127.0.0.1:8080, but the mailcow-acme container always returns an error:
docker logs nginx -f
mail.example.org 22.222.222.22 - - [24/Sep/2017:12:26:38 +0000] "GET /.well-known/acme-challenge/7YADn75d-QY-KBNn9faKxot5IoAI0sNZSBwh4ahq-xg HTTP/1.1" 502 173 "http://mail.example.org/.well-known/acme-challenge/7YADn75d-QY-KBNn9faKxot5IoAI0sNZSBwh4ahq-xg" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
smtp.example.org 22.222.222.22 - - [24/Sep/2017:12:26:38 +0000] "GET /.well-known/acme-challenge/_-E0BBJaLdkfoKeENJcOhxQAIj3Zfh67Wue46ogaRrs HTTP/1.1" 301 185 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
2017/09/24 12:26:39 [error] 6#6: *19 connect() failed (111: Connection refused) while connecting to upstream, client: 22.222.222.22, server: mx01.example.org, request: "GET /.well-known/acme-challenge/_-E0BBJaLdkfoKeENJcOhxQAIj3Zfh67Wue46ogaRrs HTTP/1.1", upstream: "http://127.0.0.1:8080/.well-known/acme-challenge/_-E0BBJaLdkfoKeENJcOhxQAIj3Zfh67Wue46ogaRrs", host: "smtp.example.org", referrer: "http://smtp.example.org/.well-known/acme-challenge/_-E0BBJaLdkfoKeENJcOhxQAIj3Zfh67Wue46ogaRrs"
smtp.example.org 22.222.222.22 - - [24/Sep/2017:12:26:39 +0000] "GET /.well-known/acme-challenge/_-E0BBJaLdkfoKeENJcOhxQAIj3Zfh67Wue46ogaRrs HTTP/1.1" 502 173 "http://smtp.example.org/.well-known/acme-challenge/_-E0BBJaLdkfoKeENJcOhxQAIj3Zfh67Wue46ogaRrs" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
docker logs mailcowdockerized_acme-mailcow_1 -f
acme-client: transfer buffer: [{ "type": "http-01", "status": "invalid", "error": { "type": "urn:acme:error:unauthorized", "detail": "Invalid response from http://mx01.example.org/.well-known/acme-challenge/-y4Ssj5gCEjpHSMQ0jfKbK01ECHiESO6sSJmvr8lHJg: \"\u003chtml\u003e\r\n\u003chead\u003e\u003ctitle\u003e502 Bad Gateway\u003c/title\u003e\u003c/head\u003e\r\n\u003cbody bgcolor=\"white\"\u003e\r\n\u003ccenter\u003e\u003ch1\u003e502 Bad Gateway\u003c/h1\u003e\u003c/center\u003e\r\n\u003chr\u003e\u003ccen\"", "status": 403 }, "uri": "https://acme-v01.api.letsencrypt.org/acme/challenge/p4_7GJ2jaVlc0he8ryxA5V_sGOsnSni3jXVPJ0r70z4/2054964548", "token": "-y4Ssj5gCEjpHSMQ0jfKbK01ECHiESO6sSJmvr8lHJg", "keyAuthorization": "-y4Ssj5gCEjpHSMQ0jfKbK01ECHiESO6sSJmvr8lHJg.-9hwg_tugIONTtQ8ZTarTkcB1KHnsM1iV16rWmcH8IM", "validationRecord": [ { "url": "https://mx01.example.org/.well-known/acme-challenge/-y4Ssj5gCEjpHSMQ0jfKbK01ECHiESO6sSJmvr8lHJg", "hostname": "mx01.example.org", "port": "443", "addressesResolved": [ "111.111.111.111" ], "addressUsed": "111.111.111.111", "addressesTried": [] }, { "url": "http://mx01.example.org/.well-known/acme-challenge/-y4Ssj5gCEjpHSMQ0jfKbK01ECHiESO6sSJmvr8lHJg", "hostname": "mx01.example.org", "port": "80", "addressesResolved": [ "111.111.111.111" ], "addressUsed": "111.111.111.111", "addressesTried": [] } ] }] (1464 bytes)
acme-client: bad exit: netproc(111): 1
Verified hashes.
Retrying in 30 minutes...
nginx proxy config:
server {
    server_name mx01.example.org autodiscover.example.org autoconfig.example.org mail.example.org smtp.example.org imap.example.org;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 301 https://$host$request_uri;
}

server {
    server_name mx01.example.org autodiscover.example.org autoconfig.example.org mail.example.org smtp.example.org imap.example.org;
    listen 443;
    access_log /var/log/nginx/access.log vhost;
    charset utf-8;
    override_charset on;
    ssl on;
    ssl_certificate /etc/nginx/custom/cert.pem;
    ssl_certificate_key /etc/nginx/custom/key.pem;
    ssl_dhparam /etc/nginx/custom/dhparams.pem;
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains";
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;
    add_header Strict-Transport-Security max-age=15768000;

    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 100m;
    }
}
This is highly annoying.
I have to think about removing the secure_ips each and every time I do the manual merge.
Why don’t you configure your RP to pass the real IP?
relevant haproxy config:
mode http
timeout connect 5s
timeout check 5s
timeout client 12s
timeout server 1m
option forwardfor
option http-server-close
option http-keep-alive
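Worth noting: HAProxy's `option forwardfor` only adds an X-Forwarded-For header; the backend Nginx still has to be told to trust and use it. A minimal sketch for the mailcow side, assuming a hypothetical proxy address of 172.22.1.2 as seen by Nginx:

```nginx
# Trust X-Forwarded-For only when the request arrives from the
# reverse proxy. 172.22.1.2 is a placeholder for the proxy's
# address on this network.
set_real_ip_from 172.22.1.2;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
```

With this, $remote_addr in the backend reflects the original client, so downstream services such as Rspamd no longer see the proxy's internal address.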
It should pass the real IP.
Maybe the old nginx configuration did not accept it from the reverse proxy IP. I sometimes fiddle with it, which makes it kind of error-prone, unfortunately.
Regardless, I changed the secure_ip in worker-controller.inc to
secure_ip = "172.22.1.0/16";
secure_ip = "127.0.0.1";
secure_ip = "::1";
secure_ip = "fd4d:6169:6c63:6f77::/64
That should not break mailcow, and it no longer blindly trusts anything and everything on the LAN, right?
I'll also look more closely at the nginx-mailcow configuration; it also should not simply accept everything from the LAN as truth. We only have one reverse proxy.
I do think my request is valid: why blindly trust the entire LAN, or actually any RFC 1918 address? Just add an environment variable for reverse proxy IPs for use in nginx and you're done. Add dovecot-mailcow to secure_ip in rspamd, and it's all good?
No X-Forwarded-For?
what?
We trust them because the interface is not exposed and is only meant to be reachable inside the mailcow network. The internal network can be any private network.
No, it is not that easy; there is more to it. We also want to get rid of static IPs.
If your Reverse Proxy does not forward the client IP, you should check your setup.
I will add a second worker for Dovecot. But that does not fix your reverse proxy problem at all.
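For reference, Rspamd allows multiple controller workers, so a second one can be bound to a Unix socket reserved for Dovecot while the HTTP listener stays password-protected with no broad secure_ip ranges. A rough sketch; the socket path, mode, and password hash are assumptions:

```ucl
# First controller: HTTP listener for the web interface,
# password-protected, no broad secure_ip ranges.
worker "controller" {
    bind_socket = "*:11334";
    password = "$2$...";
}

# Second controller: local Unix socket for Dovecot's learn
# calls; access is gated by filesystem permissions instead
# of IP-based trust.
worker "controller" {
    bind_socket = "/var/lib/rspamd/dovecot-controller.sock mode=0660";
}
```

This sidesteps the static-IP problem entirely, since socket permissions rather than addresses decide who is trusted.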
Isn't getting rid of static IPs as easy as using the hostnames? I saw in your latest commit that you already did that for most of the images.
secure_ip doesn't work with hostnames. :-/
That's understandable and disappointing, I guess.
Maybe it's fixable by generating the secure_ip configuration at start-up of the image and including the configuration file in rspamd?
The same way you now automatically generate the configuration files for Dovecot when someone starts the container. Or is that what you meant with 'adding a second worker for dovecot'?
My reverse proxy does add the IP; I think I may have removed something important in the nginx configuration.
"set_real_ip_from" may be missing.
I changed everything to use a newly created socket in dev.