...

List of Google’s Special-Case Crawlers

Besides the common crawlers like Googlebot for web, images, and videos, Google also runs special-case crawlers. These are designed for specific purposes such as site verification, structured data testing, or Ads quality checks. They don’t crawl your whole site regularly, but they show up at key times to complete a task.

Why Special-Case Crawlers Matter

These crawlers may not affect daily indexing, but they can impact how your site is verified, how structured data is tested, or how ads perform. If you block them by mistake in your robots.txt, some Google services may not work correctly.

Google’s Special-Case Crawlers

Here are the most common ones you might see:

| Crawler Name | User-Agent String | Purpose |
| --- | --- | --- |
| APIs-Google | APIs-Google (+https://developers.google.com/webmasters/APIs-Google.html) | Used by Google APIs to deliver push notification messages. |
| FeedFetcher | FeedFetcher-Google | Fetches RSS/Atom feeds for Google services like Google News or Podcasts. |
| Google-Read-Aloud | Google-Read-Aloud | Fetches content for text-to-speech services (e.g., Google Assistant reading articles aloud). |
| Duplex on the Web | DuplexWeb-Google | Simulated user interactions (like booking a service) for the now-deprecated Duplex on the web service. |
| Google Site Verification | Google-Site-Verification | Used when verifying site ownership in Google Search Console. |
| AdsBot (Mobile & Desktop) | AdsBot-Google-Mobile / AdsBot-Google | Checks landing page quality for Google Ads. |
| Other Testing Tools | Google-InspectionTool | Crawls when you run tests in Google tools such as the Rich Results Test or URL Inspection in Search Console. |

Key Things to Remember

  • Special-case crawlers usually appear only when triggered (e.g., when verifying a site or running a structured data test).

  • They don’t index your site like Googlebot, but they do ensure features and tools work properly.

  • Blocking them in robots.txt could break important Google services (like site verification or ads checks), so it is worth testing your rules, as in the sketch below.
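To see whether your current rules would lock one of these crawlers out, you can test your robots.txt against a specific user-agent token. Below is a minimal sketch using Python's built-in urllib.robotparser; the site URL and the tested path are placeholders, so swap in your own.

```python
# Sketch: check whether a site's robots.txt would block specific
# Google special-case crawlers. "https://www.example.com" is a placeholder.
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"          # assumption: replace with your domain
CRAWLERS = ["AdsBot-Google", "Google-InspectionTool", "FeedFetcher-Google"]

parser = RobotFileParser(SITE + "/robots.txt")
parser.read()                             # fetches and parses robots.txt

for agent in CRAWLERS:
    allowed = parser.can_fetch(agent, SITE + "/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'} on {SITE}/")

# Caveat: robotparser applies standard group matching, so it does not model
# AdsBot's documented behavior of ignoring the generic "User-agent: *" group
# unless AdsBot is named explicitly; treat its result as approximate.
```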

If you’re not sure which crawler is which, check our guide on the List of Google’s Common Crawlers.

How to Verify a Special-Case Crawler

Sometimes you’ll see unfamiliar bots in your server logs and wonder whether they’re genuinely from Google or just imposters pretending to be. Relying only on the user-agent string (like AdsBot-Google or FeedFetcher-Google) isn’t enough, because spammers can copy those names.

That’s why Google recommends verifying crawlers by their IP address. Here’s how you can do it:

Step 1: Find the crawler’s IP address

  • Check your server logs to see the IP address of the bot request (a quick way to pull these out is sketched below).
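As a rough illustration, the sketch below collects the source IPs of requests whose user-agent string mentions a given crawler. It assumes a combined-format access log at /var/log/nginx/access.log; both the path and the format are assumptions, so adjust them for your server.

```python
# Sketch: collect the source IPs of requests that claim to be a given
# Google crawler. Assumes the combined log format, where each line starts
# with the client IP and the user agent is quoted near the end.
import re

LOG_PATH = "/var/log/nginx/access.log"    # assumption: adjust to your server
CRAWLER = "AdsBot-Google"                 # user-agent substring to look for

ips = set()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if CRAWLER not in line:
            continue
        match = re.match(r"^(\S+)", line)  # first field = client IP
        if match:
            ips.add(match.group(1))

for ip in sorted(ips):
    print(ip)   # each of these still needs the DNS checks below
```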

Step 2: Do a reverse DNS lookup

  • Run a reverse DNS lookup on that IP.

  • The result should point back to a domain ending in googlebot.com or google.com (a minimal lookup is sketched below).
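A minimal way to run the reverse (PTR) lookup in Python is socket.gethostbyaddr. The IP below is just the one from the worked example further down.

```python
# Sketch: reverse DNS (PTR) lookup for a suspected Google crawler IP.
import socket

ip = "66.249.66.1"                        # IP taken from your server logs

try:
    hostname, _, _ = socket.gethostbyaddr(ip)
except socket.herror:
    hostname = None                       # no PTR record at all

print("Reverse DNS:", hostname)

# A genuine Google crawler should resolve to a host under googlebot.com
# or google.com, e.g. crawl-66-249-66-1.googlebot.com.
valid_suffixes = (".googlebot.com", ".google.com")
print("Google domain:", bool(hostname) and hostname.endswith(valid_suffixes))
```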

Step 3: Confirm with a forward DNS lookup

  • Take that domain name and look it up again to see if it resolves back to the same IP address.

  • If it matches, the crawler is a genuine Google crawler (a combined check is sketched below).
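Putting both lookups together, a small helper like the hypothetical verify_google_crawler below runs the reverse lookup, checks the domain, then resolves the hostname forward again and compares the result with the original IP.

```python
# Sketch: full reverse + forward DNS verification of a crawler IP.
import socket

GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def verify_google_crawler(ip: str) -> bool:
    """Return True if the IP reverse-resolves to a Google domain and that
    hostname forward-resolves back to the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)             # step 2: reverse lookup
    except socket.herror:
        return False
    if not hostname.endswith(GOOGLE_SUFFIXES):
        return False
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)   # step 3: forward lookup
    except socket.gaierror:
        return False
    return ip in addresses                                    # must resolve back to the same IP

# The worked example below: 66.249.66.1 should come back as genuine.
print(verify_google_crawler("66.249.66.1"))
```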

Example (simplified)

  1. You see a visit from IP 66.249.66.1.

  2. Reverse DNS lookup → shows crawl-66-249-66-1.googlebot.com.

  3. Forward DNS lookup on that hostname → gives back 66.249.66.1.

Verified as a real Google crawler.
