The robots exclusion standard (robots.txt) is used by websites to tell web crawlers and other web robots which areas of the website should not be processed or scanned. Web robots are often used by search engines to categorize websites.
To exclude all web crawlers from indexing any part of your site, you can use the following content in your robots.txt file:
User-agent: *
Disallow: /
User-agent: * refers to all web crawlers. Disallow: / tells these web crawlers not to index any page or file on the site.
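As a quick way to verify the effect, here is a minimal sketch using Python's standard-library urllib.robotparser, assuming the rules above are served for a site at https://example.com (a placeholder domain):

from urllib.robotparser import RobotFileParser

# Parse the blanket-disallow rules shown above.
parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Disallow: /",
])

# Under these rules, no crawler may fetch any path on the site.
print(parser.can_fetch("*", "https://example.com/"))             # False
print(parser.can_fetch("Googlebot", "https://example.com/page")) # False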
To exclude specific directories and/or files on your site, add a Disallow line for each path beneath a User-agent line in robots.txt:
User-agent: *
Disallow: /search/
Disallow: /login/
Disallow: /file.aspx
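To sanity-check which URLs these rules block, the same standard-library parser can be used; example.com is again a placeholder domain:

from urllib.robotparser import RobotFileParser

# Parse the directory and file exclusions shown above.
parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Disallow: /search/",
    "Disallow: /login/",
    "Disallow: /file.aspx",
])

# Paths under the listed prefixes are blocked; everything else stays crawlable.
print(parser.can_fetch("*", "https://example.com/search/results"))  # False
print(parser.can_fetch("*", "https://example.com/file.aspx"))       # False
print(parser.can_fetch("*", "https://example.com/about"))           # True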