The robots exclusion standard, also known as the robots exclusion protocol or simply robots.txt, is a standard used by websites to communicate with web crawlers and other web robots. The standard specifies how to inform a web robot about which areas of the website should not be processed or scanned. Robots are often used by search engines to categorize websites. Not all robots cooperate with the standard; email harvesters, spambots, malware, and robots that scan for security vulnerabilities may even start with the portions of the website where they have been told to stay out. The standard is different from, but can be used in conjunction with, Sitemaps, a robot inclusion standard for websites.
For example, to exclude two directories and a file on your site, add the following lines to the text box on the robots.txt page:
Disallow: /search/
Disallow: /login/
Disallow: /file.aspx
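Rules like these can also be checked programmatically. As a minimal sketch, Python's standard-library urllib.robotparser can evaluate them; note that the `User-agent: *` line and the example.com URLs below are illustrative assumptions added for the demo, not part of the site's actual robots.txt:

```python
from urllib import robotparser

# Hypothetical robots.txt content; the User-agent line is assumed
# here because the parser groups Disallow rules under an agent.
RULES = """\
User-agent: *
Disallow: /search/
Disallow: /login/
Disallow: /file.aspx
"""

rp = robotparser.RobotFileParser()
# Parse the rules directly instead of fetching them from a live site.
rp.parse(RULES.splitlines())

# URLs under a disallowed prefix are blocked; everything else is allowed.
print(rp.can_fetch("*", "http://example.com/search/results"))  # False
print(rp.can_fetch("*", "http://example.com/about"))           # True
```

The same `can_fetch` call is what a well-behaved crawler performs before requesting each URL.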