How to Avoid Common Mistakes in robots.txt


Robots.txt is one of the most critical tools in the SEO arsenal: it lays down crawling guidelines for your site.

It tells crawlers, or bots, which sections of the site may or may not be crawled. The file looks simple, but a single mistake can wreak massive havoc.

Let’s look more into robots.txt and what mistakes you should avoid. 

What is a robots.txt file?

Robots.txt is a text file that webmasters create to tell web robots (usually search engine crawlers) how to crawl pages on their website.

It is a component of the Robots Exclusion Protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content to users.

Make sure you use this feature correctly: improper use of robots.txt can block vital pages of your website and undermine your SEO efforts.

Common Mistakes To Avoid in robots.txt

  • Incorrect File Location

Perhaps the most fundamental mistake people make is placing the file in the wrong position.

The robots.txt file must be saved in the root directory of the site. If it is placed in a subdirectory, crawlers will never discover it when they visit the website.
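To illustrate, crawlers only request the file from the site root; the URLs below are hypothetical:

```txt
https://www.example.com/robots.txt        <- correct: root directory, crawlers find it here
https://www.example.com/blog/robots.txt   <- wrong: crawlers never look in subdirectories
```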

  • Missing Sitemap URL

People sometimes fail to include the location of the sitemap in robots.txt, and this is not ideal.

Specifying the sitemap's location in the robots file helps crawlers find and explore the sitemap directly.

When the sitemap is declared there, Googlebot does not have to spend time searching for it. Making it easy for web spiders to crawl your site works to your advantage.
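A minimal sketch of a sitemap declaration in robots.txt, assuming a hypothetical example.com domain (the Sitemap line must use the full URL):

```txt
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```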

  • Trailing Slash Usage 

One of the most common mistakes is mishandling the trailing slash when blocking or allowing a URL in robots.txt.

An unnecessary trailing slash makes the rule malfunction: it will not actually disallow crawlers from the section of your website you intended to block.
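As an example, assuming a hypothetical /category section, the two rules below behave differently:

```txt
# Blocks only the directory /category/ and the URLs under it;
# the page at /category (no slash) can still be crawled.
Disallow: /category/

# Blocks /category itself and every URL beginning with /category,
# including /category/ and /category-sale.
Disallow: /category
```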

  • Incorrect Use of Wildcards

Wildcards are special characters or symbols used in robots.txt. Two different wildcards are supported: $ and *.

The '$' character indicates the end of the URL. The '*' character denotes 'all', or '0 or more instances of any valid character'.

Understand thoroughly how wildcards work before using them; even a slight mistake can cause severe crawling issues on your website.
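As a sketch, the rules below (for a hypothetical site) show both wildcards in use:

```txt
User-agent: *
# '*' matches any sequence of characters:
# this blocks /search?q=shoes, /products/search?q=red, and so on.
Disallow: /*?q=

# '$' anchors the match to the end of the URL:
# this blocks every URL ending in .pdf, but not /file.pdf?download=1.
Disallow: /*.pdf$
```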

  • Usage of NoIndex in robots.txt

This directive is obsolete, and most sites have dropped it. In an official announcement, Google stated that noindex directives in robots.txt would cease to work from 1st September 2019. If this directive is still in your file, we suggest you remove it.
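For reference, the deprecated rule looked like the first line below; the supported replacement is a robots meta tag (or X-Robots-Tag HTTP header) on the page itself. The path shown is hypothetical:

```txt
# Deprecated - ignored by Google since 1st September 2019:
Noindex: /private-page

# Supported alternative: add this tag to the page's HTML <head>:
# <meta name="robots" content="noindex">
```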

Robots.txt is part of technical SEO, so it is not a task for newbies. If you have more queries, consult SEO companies in Pune to learn more.


These are a few common mistakes people usually make in robots.txt. If you are unsure how to fix them, hire the services of a top SEO company in India.



Savit Interactive is a top digital marketing company in India that has offered SEO, SEM, PPC, and website design services since 2004.