
robots.txt

5 min read · Updated 15 March 2026

What Is a robots.txt File?

Robots.txt is a text file that tells web crawlers how to crawl pages on your website. You can use it to allow or disallow crawling site-wide, for specific directories, or for individual pages. Note that robots.txt controls crawling, not indexing: if a page appears in your sitemap but should be kept out of search results, add a 'noindex' meta tag to that page rather than blocking it in robots.txt, because a crawler that is blocked from a page can never see its 'noindex' instruction.

Where to Access the robots.txt Function

Navigate to More > SEO Manager > robots.txt tab from the top menu when logged in. A single text field is available for entering or editing your commands.

robots.txt Tutorial

Commands to Use

The robots.txt file uses records with User-agent and Disallow directives. The User-agent line specifies which bot the rule applies to, and the Disallow line specifies which paths to block.
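For example, a file with two records, one per bot, might look like the following sketch (the /drafts/ path and the BadBot name are hypothetical, used only for illustration):

```text
# Block all crawlers from the drafts area
User-agent: *
Disallow: /drafts/

# Block one specific bot from the entire site
User-agent: BadBot
Disallow: /
```

Records are separated by blank lines, and each record starts with one or more User-agent lines followed by its Disallow rules.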

Common Commands

  1. Exclude all robots from the entire site:
     User-agent: *
     Disallow: /

  2. Allow all robots complete access (leave the Disallow value empty):
     User-agent: *
     Disallow:

  3. Exclude all robots from part of the site:
     User-agent: *
     Disallow: /private/

  4. Exclude a single robot:
     User-agent: BadBot
     Disallow: /

  5. Allow a single robot: set Disallow: / for all bots (User-agent: *), then add a separate record for the allowed bot with an empty Disallow.

  6. Exclude all files except one: use explicit Disallow rules for each directory or page you want blocked.

Note that User-agent and Disallow must each appear on their own line.
Misconfiguring your robots.txt file can accidentally block search engines from crawling your entire site. Always double-check your rules before saving, and verify them with the robots.txt report in Google Search Console.
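One way to sanity-check your rules before saving is Python's standard-library robots.txt parser. A minimal sketch, reusing the hypothetical /private/ path and BadBot name from the examples above:

```python
from urllib import robotparser

# Hypothetical rules mirroring the examples above.
RULES = """\
User-agent: *
Disallow: /private/

User-agent: BadBot
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(RULES.splitlines())

# An ordinary crawler may fetch public pages but not /private/.
print(parser.can_fetch("*", "https://example.com/index.html"))      # True
print(parser.can_fetch("*", "https://example.com/private/a.html"))  # False

# BadBot is blocked from the entire site.
print(parser.can_fetch("BadBot", "https://example.com/index.html")) # False
```

If a URL you expect to be crawlable comes back False, a Disallow rule is matching more broadly than intended.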