Easily manage your site's robots.txt file through the site settings page.
Robots.txt is a text file that webmasters and SEOs use to tell web bots how to crawl the pages of a website. You can give crawl instructions for a single page, a subdirectory, or the entire site, and tell search engines how to treat links (such as follow or nofollow). However, if a page you want to exclude is included in your sitemap, it is usually better to block it with a robots meta tag (such as noindex or nofollow) on the page itself rather than through robots.txt. These crawl instructions allow or disallow the behavior of certain (or all) user agents, which are most often, but not only, search engine bots.
While logged in, go to the top menu: More > SEO Manager > robots.txt tab. There is a single field where you enter your new or edited commands.
The "/robots.txt" file is a text file, with one or more records. Usually contains a single record looking like this:
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /~resources/
The three Disallow lines tell robots not to crawl or index anything under those three directories.
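To see how a compliant crawler interprets a record like this, here is a minimal sketch using Python's standard urllib.robotparser module; the example.com domain, the "AnyBot" user agent name, and the file paths are illustrative assumptions, not part of the original example.

from urllib.robotparser import RobotFileParser

# The example record from above, assumed to be served from https://example.com/robots.txt.
rules = """\
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /~resources/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Paths under the disallowed directories are blocked for every user agent...
print(parser.can_fetch("AnyBot", "https://example.com/cgi-bin/script.cgi"))  # False
print(parser.can_fetch("AnyBot", "https://example.com/tmp/cache.txt"))       # False

# ...while everything else remains crawlable.
print(parser.can_fetch("AnyBot", "https://example.com/about.html"))          # True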
To exclude all robots from the entire site:
User-agent: *
Disallow: /
To allow all robots complete access:
User-agent: *
Disallow:
Or you can simply leave your robots.txt file empty, though this is not considered best practice.
To exclude all robots from specific parts of the site:
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /resources/
To exclude a single robot:
User-agent: BadBot
Disallow: /
To allow a single robot and exclude all others:
User-agent: Google
Disallow:

User-agent: *
Disallow: /
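As a rough illustration of how per-agent records are matched, the following sketch again uses Python's urllib.robotparser; the example.com URL and the "OtherBot" name are assumptions made for the demo.

from urllib.robotparser import RobotFileParser

# The "allow one robot, exclude all others" record from above,
# assumed to be served from https://example.com/robots.txt.
rules = """\
User-agent: Google
Disallow:

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A crawler identifying itself as "Google" matches its own record and may fetch anything.
print(parser.can_fetch("Google", "https://example.com/index.html"))    # True

# Any other user agent falls back to the "*" record and is blocked site-wide.
print(parser.can_fetch("OtherBot", "https://example.com/index.html"))  # False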
There is no "Allow" field. The easiest way to deal with this is to put all files to be disallowed into a separate directory, e.g. "misc", and leave the single file outside of that directory.
User-agent: *
Disallow: /~resources/misc/
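To check that this leaves the one remaining file crawlable, here is a short sketch with Python's urllib.robotparser; the example.com domain and the old.html filename are assumed purely for illustration.

from urllib.robotparser import RobotFileParser

# The "misc" subdirectory approach from above, assumed to live at https://example.com/robots.txt.
rules = """\
User-agent: *
Disallow: /~resources/misc/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The single file left outside "misc" stays crawlable...
print(parser.can_fetch("AnyBot", "https://example.com/~resources/index.html"))     # True

# ...while everything moved into "misc" is blocked.
print(parser.can_fetch("AnyBot", "https://example.com/~resources/misc/old.html"))  # False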
Alternatively, you can explicitly list each page you want to disallow:
User-agent: *
Disallow: /~resources/index.html
Disallow: /~resources/business.html
Learn more here.