Hi everybody
When testing my website with Lighthouse, I always get this message in the SEO section:
"robots.txt is not valid
Lighthouse was unable to download your robots.txt file"
But robots.txt IS valid: it is a plain text file, it resides in the root directory, the access rights are OK, AND it is found and verified as correct by every other performance analyzer tool I have tried.
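For reference, even a minimal file like this is valid (the empty Disallow line means "allow everything"; I'm not showing my actual file here, just the general shape):

```
User-agent: *
Disallow:
```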
The site is https://heilendehände.de (which means "healing hands") and, as you can see, it is an IDN with an umlaut in the name. The internally translated ACE string is http://xn--heilendehnde-ocb.de.
Maybe not finding robots.txt has something to do with this IDN, but that's just an assumption on my part.
Any idea?
Thanks in advance
Bernhard
Thanks for filing @einerdinger!
It seems like it's actually failing because of your content security policy.

We try to fetch the robots.txt in the context of your page, so we occasionally run into limitations like this. Fixing it would require us to fetch the robots.txt in another context. There are a few other cases like this where we could also speed things up, so it's a good move for us to do in the future 👍
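For context, the check boils down to an in-page fetch along these lines (a sketch, not our exact code). Because it runs in the page's own context, the page's Content-Security-Policy applies to it:

```ts
// Sketch of the kind of in-page fetch Lighthouse performs (not its exact code).
// Since this runs in the page's context, the page's CSP applies: a connect-src
// directive that doesn't permit same-origin requests will block this call.
async function fetchRobotsTxt(): Promise<{ status: number; content: string }> {
  const response = await fetch(new URL('/robots.txt', location.href).href);
  return { status: response.status, content: await response.text() };
}
```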
Hi Patrick
and thanks for the lightning-fast answer. CSP is a good point, and I will add the connect-src directive, but I need to know which URL to allow. I can see a red alert message, but it disappears too quickly.
best regards
Bernhard
It's added as a content script, so there's no particular URL. I'm not 100% sure whether it's classified as unsafe-inline or not; 'self' might already do the trick. I haven't tested it out yet, sorry! :)
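For example, a header of this shape should work (a sketch; every directive besides connect-src is a placeholder for whatever your existing policy already contains):

```
Content-Security-Policy: default-src 'self'; connect-src 'self'
```

connect-src is the relevant directive here because it governs fetch() and XMLHttpRequest calls made from the page.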
Hi again
connect-src 'self' did the job!
Thanks again
@einerdinger, FYI there is a 1+ year old bug open for this: #4386