Many websites include pages marked with content="nofollow", which tells search engine robots not to follow the links on that page.
This is one of the most useful directives of the "Robots" meta tag. Pages often contain content that, for various reasons, should not be crawled by search engines, or links that a robot should not follow. This is where the Robots meta tag comes in.
Its content attribute controls how a robot treats the page: "noindex" prohibits indexing, "index" allows it, "follow" allows the robot to follow links from the page, and "nofollow" prohibits following them. This is the easiest way to regulate the behavior of search engine robots on your website.
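As a minimal sketch, the tag goes in the page's head section; the combination below blocks both indexing and link following, but any mix of the four directives is possible:

```html
<!-- Placed inside <head>. Directive values are comma-separated. -->
<meta name="robots" content="noindex, nofollow">
```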
To prohibit a robot from following links, simply add the "Robots" meta tag with the necessary attributes; the robot will then not follow any links from a page marked content="nofollow". By default, every page is treated as INDEX (open to indexing) and FOLLOW (links may be followed). You can also set all actions at once: ALL means the page is completely open to indexing and link following, while NONE prohibits both. The content parameter is case insensitive. Keep in mind that duplicate or conflicting directives should not be included in the same meta tag. In this way you can control robot behavior directly from a page's HTML code and protect your content and internal links for a variety of purposes.
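To illustrate the shorthand values described above, each tag below is equivalent to the pair of directives noted in its comment (remember: use only one of these on a page, never conflicting ones together):

```html
<!-- Same as the default "index, follow": page fully open to robots. -->
<meta name="robots" content="all">

<!-- Same as "noindex, nofollow": no indexing, no link following. -->
<meta name="robots" content="none">
```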
Pages with content="nofollow" and other directives will help you with this. It is very important to make sure this tag appears only on the pages where you need it: if you place it on a page you are promoting, that page will not rank.
Monitor which pages carry the content="nofollow" tag, and how many there are, with Sonar.Network.