Websites use the robots exclusion standard (robots.txt) to tell web crawlers and other web robots which areas of the website should not be processed or scanned. Search engines often use web robots to categorize websites.
To exclude all web crawlers from indexing any part of your site, you can use the following content in your robots.txt file:
User-agent: *
Disallow: /
User-agent: * refers to all web crawlers.
Disallow: / tells these web crawlers not to index any page or file on the site.
To exclude specific directories and/or files on your site, add a Disallow line for each directory or file in robots.txt:
Disallow: /search/
Disallow: /login/
Disallow: /file.aspx
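Putting these directives together, a complete robots.txt that blocks all crawlers from only those paths (the /search/ and /login/ directories and the file.aspx page are shown here purely as examples) would look like this:
User-agent: *
Disallow: /search/
Disallow: /login/
Disallow: /file.aspx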
You can specify which web crawlers and bots Axero recognizes using the Crawlers system property. This setting assists with SEO and site indexing by identifying key crawlers.
The value is a pipe-delimited list of crawler identifiers, for example:
SEOChat::Bot|Gecko XML-Sitemaps|Googlebot|msnbot
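As a rough sketch only (this is not Axero's internal code, and the function and variable names are hypothetical), a pipe-delimited value like the one above is typically split on | and matched against the request's User-Agent header, for example in Python:

# Hypothetical illustration: matching a pipe-delimited crawler list against
# a request's User-Agent header. Not Axero's actual implementation.
CRAWLERS_PROPERTY = "SEOChat::Bot|Gecko XML-Sitemaps|Googlebot|msnbot"

def is_recognized_crawler(user_agent: str, crawlers_value: str = CRAWLERS_PROPERTY) -> bool:
    # Split the property value on "|" and check whether any token
    # appears in the User-Agent string (case-insensitive).
    tokens = [t.strip() for t in crawlers_value.split("|") if t.strip()]
    ua = user_agent.lower()
    return any(token.lower() in ua for token in tokens)

# Example: a Googlebot request would be treated as a recognized crawler visit.
print(is_recognized_crawler("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # True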