Google started to report my root URL as blocked by robots.txt - but it's not

by woop   Last Updated September 17, 2018 16:04 PM

My website has been running for multiple years without problems; however, I recently noticed that my root URL on Google does not show any description/title.

The Webmaster Tool reports this error:

Crawl allowed? No: blocked by robots.txt
Page fetch: Failed: Blocked by robots.txt

Here is my robots.txt, which is pretty simple and allows all requests:

User-agent: *
Allow: /

I confirmed I don't have any HTML tag blocking indexing; my robots meta tag is <meta name="robots" content="index, follow" />

Not sure why I'm getting this error message. The robots.txt tester (https://www.google.com/webmasters/tools/robots-testing-tool) reports no error, yet I've been having this problem for more than 3 weeks now.
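
For reference, the same allow/deny check can be reproduced locally with Python's standard urllib.robotparser, which fetches and parses the live robots.txt. This is only a sketch; https://example.com below is a placeholder for my actual domain.

from urllib.robotparser import RobotFileParser

SITE = "https://example.com"  # placeholder for the real domain

rp = RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()  # fetch and parse the live robots.txt

# With "User-agent: *" / "Allow: /", both of these should print True.
print(rp.can_fetch("Googlebot", SITE + "/"))
print(rp.can_fetch("*", SITE + "/"))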

Also, the root URL returns HTTP 200:

HTTP/1.1 200 OK
Date: Sat, 15 Sep 2018 18:41:34 GMT
Content-Type: text/html
Connection: keep-alive
Last-Modified: Sat, 15 Sep 2018 18:20:13 GMT
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Server: cloudflare
CF-RAY: 45ad3a6cab3192d6-SJC
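
As an additional diagnostic, the sketch below fetches the root URL with a Googlebot-style User-Agent and prints the status code and the X-Robots-Tag response header (an indexing block can also be sent as a header rather than a meta tag). Again, https://example.com is a placeholder for my actual domain.

import urllib.request

SITE = "https://example.com"  # placeholder for the real domain

# A Googlebot-style User-Agent, in case the CDN or origin serves
# crawlers a different response than browsers.
GOOGLEBOT_UA = (
    "Mozilla/5.0 (compatible; Googlebot/2.1; "
    "+http://www.google.com/bot.html)"
)

req = urllib.request.Request(SITE + "/", headers={"User-Agent": GOOGLEBOT_UA})
with urllib.request.urlopen(req) as resp:
    print(resp.status)                       # expect 200
    print(resp.headers.get("X-Robots-Tag"))  # expect None (no header-level block)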

Any idea what could be the problem? This is only happening for my root URL.


