The request is disallowed by a Robots.txt rule
The request to the URL is disallowed by a Robots.txt rule. Robots.txt rules tell search engine crawlers which locations on a Web site should not be crawled.
If the blocked URL is supposed to be accessible for search engines to crawl and index, modify the rules in the Robots.txt file so that search engines can access the URL.
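As an illustration, the check a crawler performs can be sketched with Python's standard `urllib.robotparser`; the rules and URLs below are hypothetical examples, not taken from any real site:

```python
# Sketch: deciding whether a URL is blocked by Robots.txt rules,
# using Python's standard urllib.robotparser module.
from urllib.robotparser import RobotFileParser

# Hypothetical Robots.txt content: block everything under /private/.
rules = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A URL under /private/ is disallowed for any crawler ("*");
# a URL outside it remains crawlable.
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("*", "https://example.com/public/page.html"))   # True
```

Removing or narrowing the `Disallow` line (or adding a more specific `Allow` rule, which Bing and Google both support) would make the blocked URL crawlable again.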
These violations are reported by the Bing SEO Toolkit and are the same violations
that Bing finds when crawling the Web; it is a safe bet that Google would find
much the same violations.
"exactly the same as the crawler of the search engine will see it"
Alessandro Catorcini,
Lead Program Manager, Bing API
Date Recorded: 2011-11-07