Robots File Optimization: Website Configuration Instructions

Relying on my own abilities, I began taking charge of some new projects. Although the keyword rankings for these projects have progressed fairly well, I ran into many problems along the way, and discovering them made me realize how much SEO work depends on details: it is the details that decide success or failure. Hands-on project work in particular taught me the importance of the robots file in website optimization. So fellow SEO practitioners, and especially newcomers to the field, must master when and how to use the robots file with skill.

The robots file, strictly speaking, is the robots.txt file. How should it be understood? In fact, robots.txt is not a command but a convention. When a search engine comes to crawl a site's pages, it first fetches the robots.txt file, which exists to tell search engines which pages may be crawled and which may not. When a search engine crawls a page on a site, it first visits the robots file in the site's root directory: if the file exists, the crawler follows the rules defined in it; if not, the search engine assumes it may crawl every page. As I understand it, the robots file tells search engines which pages to visit and which pages not to access.
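
To make this concrete, here is a minimal Python sketch using the standard library's urllib.robotparser, which performs the same check a crawler does; the domain example.com and the paths queried are hypothetical placeholders, not taken from this article:

    from urllib import robotparser

    # Point the parser at the robots file in the site's root directory
    # (example.com is a hypothetical placeholder domain).
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetch and parse the file, just as a crawler would

    # can_fetch() answers the crawler's question: may this user agent
    # crawl this URL under the site's robots rules?
    print(rp.can_fetch("*", "https://example.com/index.html"))   # allowed unless disallowed
    print(rp.can_fetch("*", "https://example.com/admin/login"))  # False if /admin/ is disallowed

Note that if no robots.txt is found at all, the parser treats every URL as allowed, matching the behavior described above.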

So how is a robots.txt file actually written? While working on the Rong Li site I ran into exactly this problem: the old site was built with dynamic pages, and after it was converted to static pages, many of the original URLs no longer existed. Search engines could not crawl them and racked up crawl errors, as many as two thousand. It was therefore necessary to use the robots file to cover these vanished pages so that search engines would stop trying to crawl them, and that is where writing robots.txt comes in. The robots file is very simple to write and rests on two basic directives: User-agent, which names the crawler the following rules apply to, and Disallow, which blocks crawling of the listed paths; a third directive, Allow, explicitly permits crawling. A robots file can therefore be written as follows (a pattern-based variant for the dead dynamic URLs is sketched after the example):

User-agent: * (the * is a wildcard, meaning the rules below apply to all search engine crawlers)
Disallow: /admin/ (forbids crawling of the admin directory and everything under it)
Disallow: /require/ (forbids crawling of the require directory and everything under it)
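
For a case like the dead dynamic URLs described above, the rules can also match URL patterns rather than whole directories. The sketch below is hedged: the * and $ wildcards are an extension to the original robots convention that major crawlers such as Googlebot and Baiduspider honor, and the .asp suffix is only an assumed example of what the old dynamic pages might have used:

    User-agent: *
    # Hypothetical patterns for old dynamic URLs gone after the static migration:
    Disallow: /*?*
    Disallow: /*.asp$
    # The new static pages stay crawlable:
    Allow: /

After editing, it is worth validating the file with the robots testing tools that search engine webmaster consoles provide, rather than waiting for crawl errors to pile up again.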
