robots.txt is a way to tell search engines whether or not they are allowed to crawl a page and show it in the search results. Search engine bots are automated, and before they access your site, they check the robots.txt file to see whether they are allowed to crawl a given page. Often, people do not want to block search engines from their whole website; instead, they want to specify a few pages that should not appear in the search results. Therefore, in today's article, we will show you how to enable a custom robots.txt file in Blogger.
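To see how this works from a crawler's point of view, here is a minimal Python sketch using the standard library's urllib.robotparser. It is only an illustration: the rule mirrors the example later in this post, and yourblog.blogspot.com is a placeholder address, not a real site.

from urllib import robotparser

# A well-behaved bot parses the site's robots.txt rules...
rules = """User-agent: *
Disallow: /about
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(rules)

# ...and asks for permission before fetching each URL.
print(parser.can_fetch("Googlebot", "https://yourblog.blogspot.com/about"))  # False
print(parser.can_fetch("Googlebot", "https://yourblog.blogspot.com/2024/01/post.html"))  # True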
First of all, go to Blogger.com >> your site >> Settings >> Search preferences >> Crawlers and indexing. There you will see two options: Custom robots.txt and Custom robots header tags. These two options give you the flexibility to customize your robots.txt file.
- Custom robots.txt: This option lets you edit your entire robots.txt file. You simply type in the rules that tell spiders which content they may or may not crawl. You can always undo your changes and revert to the default file.
- Custom robots header tags: This option is a bit more complicated. It does not let you write your own rules; instead, it offers a fixed set of checkboxes. If you are not familiar with robots header tags, it is best to leave this option alone (see the note after this list for what the checkboxes actually do).
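For reference, the Custom robots header tags checkboxes do not touch robots.txt at all; they control the robots meta tag that Blogger prints in each page's HTML head. As an illustration only (the exact markup Blogger generates may differ slightly), ticking noindex and nofollow for a page results in a tag along these lines:

<meta content='noindex, nofollow' name='robots'/>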
In the text area, type the rules for the content you want to exclude from being crawled. After editing the file to suit your needs, press the save button to finish. If you later want to revert to the default robots.txt file, simply select "No" instead of "Yes".
For example, to stop all crawlers from accessing your about, contact, and services pages:
User-agent: *
Disallow: /about
Disallow: /contact
Disallow: /services
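Once you press save, Blogger serves the updated file at the root of your blog, so you can open https://yourblog.blogspot.com/robots.txt in a browser to check it. As a quick automated check, here is a small Python sketch (again, the blog address is a placeholder for your own):

from urllib import robotparser

# Fetch and parse the live robots.txt from the blog's root.
parser = robotparser.RobotFileParser("https://yourblog.blogspot.com/robots.txt")
parser.read()

# With the rules above saved, the /about page should no longer be crawlable.
print(parser.can_fetch("*", "https://yourblog.blogspot.com/about"))  # False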
We hope this tip helps you. If you are having any issues with crawling, feel free to leave your questions in the comments below and our experts will try to help you solve them.
Note: If this tutorial worked for you (and it should work), please leave a comment below. Thanks.