Google updated its list of official crawlers, adding the name and details of a relatively unknown crawler that publishers have been seeing now and then but for which no documentation previously existed.
Although Google has now published official documentation for this crawler, the information provided seems to call for further clarification.
Special Case Crawlers
Google has several kinds of crawlers (also known as bots and spiders).
The different kinds of crawlers are:
- Common Crawlers
These bots are mostly used for indexing different kinds of content. But some common crawlers are also used for search testing tools, internal Google product team use, and crawling related to AI.
- User-triggered fetchers
These are bots that are triggered by users, for purposes such as fetching feeds or site verification.
- Special-case crawlers
These are for special cases, such as checking mobile ads webpage quality or fetching pages for push notification messages via Google APIs. These bots do not obey the global user agent directives in robots.txt that are signaled with the asterisk (*).
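To illustrate the distinction, here is a minimal robots.txt sketch. The global `*` group binds common crawlers like Googlebot, but a special-case crawler such as AdsBot-Google only follows rules that name it explicitly:

```
# Global group: honored by common crawlers such as Googlebot,
# but ignored by special-case crawlers.
User-agent: *
Disallow: /private/

# A special-case crawler must be named explicitly for a rule to apply.
User-agent: AdsBot-Google
Disallow: /private/
```

AdsBot-Google is used here as an example because Google documents it as a special-case crawler that ignores the global user agent group unless specifically addressed.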
The new crawler documentation is for the Google-Safety user agent. The crawler is not new but the documentation is new.
The documentation, found in the Special-case crawlers section, explains that the Google-Safety crawler is used by Google's processes for finding malware.
Unique among special-case crawlers, the Google-Safety crawler completely ignores all robots.txt directives.
The new documentation for the Google-Safety crawler states:
“The Google-Safety user agent handles abuse-specific crawling, such as malware discovery for publicly posted links on Google properties.
This user agent ignores robots.txt rules.”
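In practical terms, this means even a rule that names the crawler explicitly has no effect. A robots.txt entry like the following, which would block a rule-abiding crawler entirely, would not stop Google-Safety:

```
# Has no effect: the Google-Safety user agent ignores robots.txt rules.
User-agent: Google-Safety
Disallow: /
```

The `Google-Safety` token here comes from the documentation quoted above; blocking this crawler via robots.txt is, by Google's own description, not possible.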
The full agent string for the crawler:
Read the new documentation for the Google-Safety user agent on the Google Search Central page for crawlers in the section devoted to Special-case crawlers.
Overview of Google crawlers and fetchers (user agents) – Special-case crawlers