Hello all, my robots.txt is just
Code:
User-agent: *
Allow: /
But when looking at go4expert.com's robots.txt I see it blocks many valid URLs. Can anyone explain what is being blocked there and why?
robots.txt is not designed to tell robots what they CAN view; its point is to tell them which pages they are NOT allowed to crawl. Search engines crawl all pages by default, so you can remove the "Allow: /" line; it is redundant, and Allow is not part of the original robots.txt standard (though major crawlers do support it). Go4Expert.com also disallows pages where we can assume some spamming is done, such as member profile pages.
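For example, a minimal sketch of a Disallow-based robots.txt (the /members/ and /private/ paths are hypothetical placeholders, not Go4Expert's actual rules); anything not listed stays crawlable by default:
Code:
User-agent: *
Disallow: /members/
Disallow: /private/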
I think rather than using "Allow" you should use "Disallow"; that is the directive you need if you want to hide some pages from crawlers.
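If you want to check how a crawler would interpret your rules before publishing them, here is a quick sketch using Python's standard urllib.robotparser (the example.com URLs and the /members/ path are assumptions for illustration, not any site's real rules):
Code:
import urllib.robotparser

# Hypothetical rules for illustration only
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /members/",
])

# Paths under /members/ are blocked; everything else is crawlable by default
print(rp.can_fetch("*", "https://example.com/members/john"))    # False
print(rp.can_fetch("*", "https://example.com/articles/intro"))  # True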