By the end of this lesson, you’ll understand:
- What Google coverage statuses mean and why they matter
- The difference between errors, warnings, excluded, and valid pages
- How to read and interpret your Coverage report
- Step-by-step solutions for common coverage issues
- How to monitor your site’s indexing health regularly
What Are Coverage Statuses?
Coverage statuses show whether Google can find, crawl, and index your website pages. They appear in Google Search Console’s Coverage report.
Simple Explanation
Think of Google as a librarian organizing books. Coverage statuses tell you:
- Which books (pages) are properly shelved and available to readers (indexed)
- Which books have problems and can’t be shelved (errors)
- Which books might have minor issues (warnings)
- Which books you told the librarian not to shelve (excluded)
Understanding these statuses helps you ensure your content appears in Google search results.
Why Coverage Statuses Matter
Finding Problems: Coverage statuses reveal indexing issues preventing your pages from ranking.
Traffic Impact: Pages with errors don’t appear in search results, meaning zero organic traffic from those pages.
Site Health: Coverage report acts as your site’s health dashboard, showing overall indexing status.
Quick Fixes: Identifying exact issues lets you fix problems fast before they hurt rankings.
Understanding the Coverage Report
Access your Coverage report in Google Search Console.
How to Access Coverage Report
Steps:
- Go to search.google.com/search-console
- Select your property (website)
- Click “Coverage” in left sidebar (or “Pages” in newer interface)
- View coverage summary
Coverage Report Overview
The report shows four status categories:
Error (Red): Pages Google tried to index but couldn’t due to problems.
Valid with warnings (Orange): Pages indexed successfully but with minor issues.
Excluded (Gray): Pages Google discovered but chose not to index (intentionally or unintentionally).
Valid (Green): Pages successfully indexed and appearing in search results.
Reading the Graph
Top Graph: Shows trend over time. You want:
- Green (valid) line going up
- Red (error) line going down
- Gray (excluded) line stable or decreasing
Sudden Changes: Big spikes or drops indicate issues needing attention.
Healthy Site: Most pages in “Valid” category, few errors, intentional exclusions only.
Error Status (Red)
Errors prevent pages from being indexed. These require immediate attention.
Server Error (5xx)
What it means: Your server returned an error when Google tried to access the page.
Common causes:
- Server overloaded or down
- Database connection problems
- Server misconfiguration
- Hosting issues
How to fix:
Step 1: Check if site is down
Visit your website. Can you access it?
Step 2: Check server logs
Review error logs for 500, 502, 503, or 504 errors.
Step 3: Contact hosting provider
If server errors persist, contact your host. They can:
- Increase server resources
- Fix configuration issues
- Resolve database problems
Step 4: Test and request reindexing
Once fixed:
- Use URL Inspection tool
- Click “Test live URL”
- If it passes, click “Request indexing”
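As a rough aid for step 2, a small script can group the status codes you find in server logs the way the Coverage report treats them. This is a sketch; the function and its groupings are my own, not part of Search Console:

```python
# Group HTTP status codes the way the Coverage report discussion treats
# them, useful when scanning server logs for 5xx spikes.
def status_category(code):
    if 200 <= code < 300:
        return "ok"
    if code in (301, 302, 303, 307, 308):
        return "redirect"
    if code in (404, 410):
        return "not found"
    if 500 <= code < 600:
        return "server error"  # 500, 502, 503, and 504 all land here
    return "other"

print(status_category(503))  # server error
```

A one-line classifier like this makes it easy to count how many log entries fall into the “server error” bucket before and after a fix.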
Redirect Error
What it means: Redirect chain too long or redirect loop detected.
Common causes:
- Multiple redirects in sequence (A→B→C→D)
- Redirect loop (A→B, B→A)
- Incorrect redirect setup
How to fix:
For redirect chains:
Bad:
page-a → page-b → page-c → page-d
Good:
page-a → page-d (direct redirect)
For redirect loops: Remove conflicting redirects. Keep only one direction.
Testing: Use redirect checker tool to verify:
- Only one redirect hop
- No loops
- Correct destination
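If you have already collected each URL’s redirect target (from a crawl or your server logs), the chain and loop checks above can be sketched in pure Python. The `check_redirects` helper and the sample URLs are hypothetical:

```python
# Minimal sketch of redirect chain/loop detection, assuming you already
# have a mapping of URL -> redirect target (None or absent means final).
def check_redirects(redirects, start, max_hops=5):
    """Return ("ok", chain), ("chain-too-long", chain), or ("loop", chain)."""
    chain = [start]
    while redirects.get(chain[-1]):
        nxt = redirects[chain[-1]]
        if nxt in chain:                      # we've seen this URL: loop
            return ("loop", chain + [nxt])
        chain.append(nxt)
        if len(chain) - 1 > max_hops:         # too many hops in sequence
            return ("chain-too-long", chain)
    return ("ok", chain)

print(check_redirects({"page-a": "page-d"}, "page-a"))
print(check_redirects({"page-a": "page-b", "page-b": "page-a"}, "page-a")[0])
```

A single direct hop comes back as “ok”, while A→B→A is flagged as a loop before the crawler-style walk can run forever.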
Submitted URL Returned 404
What it means: You submitted this URL in your sitemap, but it returns a 404 (page not found).
Common causes:
- Page was deleted
- URL typo in sitemap
- Page moved without redirect
- Permissions blocking access
How to fix:
If page should exist:
- Check if page actually deleted
- Restore page if accidentally removed
- Fix URL typo in sitemap
- Verify file uploaded correctly
If page should be gone: Remove URL from sitemap and submit updated sitemap.
For moved pages: Set up 301 redirect to new location.
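To audit a sitemap for dead URLs, the first step is extracting its `<loc>` entries. This standard-library sketch uses an illustrative inline sitemap; in practice you would fetch your real sitemap and then request each URL to check its status code:

```python
# Extract the <loc> entries from an XML sitemap so each URL can be
# checked for 404s. The sitemap string below is illustrative.
import xml.etree.ElementTree as ET

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(NS + "loc")]

sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/about</loc></url>
</urlset>"""

print(sitemap_urls(sitemap))
```

Any URL in this list that returns 404 either needs to be restored, redirected, or removed from the sitemap.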
Submitted URL Blocked by robots.txt
What it means: URL in your sitemap is blocked by robots.txt file.
Why it’s a problem: Conflicting signals confuse Google. You’re saying “index this” (sitemap) and “don’t crawl this” (robots.txt) simultaneously.
How to fix:
Option 1: Remove from robots.txt
If you want the page indexed:
- Edit robots.txt
- Remove disallow rule for that URL
- Save and upload
- Request reindexing
Option 2: Remove from sitemap
If you don’t want it indexed: Remove the URL from the sitemap and submit the updated sitemap.
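You can check this conflict locally with Python’s standard-library robots.txt parser before editing anything on the server. The rules and URLs here are illustrative:

```python
# Check whether Googlebot may crawl a URL under a given robots.txt,
# using the standard library. The rules below are illustrative.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/private/page"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))     # True
```

If a sitemap URL comes back `False` here, you have exactly the conflicting-signal situation described above.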
Soft 404
What it means: Page shows content suggesting it’s missing, but returns 200 status code instead of 404.
Examples:
- “Page not found” content with 200 status
- Nearly empty page
- Generic error message page
Why it happens: Custom 404 pages not configured properly.
How to fix:
For truly missing pages: Return a proper 404 status code:
<?php
http_response_code(404); // PHP 5.4+; sends "404 Not Found"
?>
For existing pages: Add more content so the page isn’t considered “empty.”
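A soft 404 can be spotted programmatically with a simple heuristic: a 200 response whose body looks like an error page or is nearly empty. This is a sketch; the phrase list and thresholds are illustrative, not what Google actually uses:

```python
# Heuristic soft-404 detector: a 200 response whose body reads like an
# error page, or is nearly empty. Phrases and threshold are illustrative.
NOT_FOUND_PHRASES = ("page not found", "doesn't exist", "no longer available")

def looks_like_soft_404(status, body, min_length=200):
    if status != 200:
        return False                      # a real 404/410 is fine
    text = body.lower()
    if any(phrase in text for phrase in NOT_FOUND_PHRASES):
        return True                       # "error" content served as 200
    return len(text.strip()) < min_length # nearly empty page

print(looks_like_soft_404(200, "Error: page not found"))  # True
```

Running a check like this over your URL list flags pages that should either return a real 404 or be filled with substantive content.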
Submitted URL Marked Noindex
What it means: URL in sitemap has noindex tag preventing indexing.
Conflicting signals: Sitemap says “index this” but page says “don’t index this.”
How to fix:
If you want it indexed: Remove the noindex tag from the page:
<!-- Remove this: -->
<meta name="robots" content="noindex">
If you don’t want it indexed: Remove it from the sitemap. Keep the noindex tag.
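Scanning pages for a stray noindex tag can be automated with the standard library’s HTML parser. The class name and sample HTML here are illustrative:

```python
# Scan a page's HTML for a robots meta tag containing "noindex",
# using only the standard library. Sample HTML is illustrative.
from html.parser import HTMLParser

class NoindexFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            if "noindex" in (a.get("content") or "").lower():
                self.noindex = True

finder = NoindexFinder()
finder.feed('<head><meta name="robots" content="noindex"></head>')
print(finder.noindex)  # True
```

Fed the HTML of each submitted URL, this flags exactly the sitemap/noindex conflicts this status describes.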
Crawled – Currently Not Indexed
What it means: Google crawled the page but decided not to index it.
Common reasons:
- Low-quality content
- Thin content (very short)
- Duplicate content
- Page has little value
- Too similar to other pages
How to fix:
Improve content quality:
- Add more detailed information (aim for 500+ words)
- Make content unique
- Add value users can’t find elsewhere
- Include images, examples, data
Consolidate duplicates: If multiple similar pages exist, combine into one comprehensive page.
Check technical issues:
- Ensure page loads properly
- Fix any errors
- Verify mobile-friendliness
Be patient: Sometimes Google needs time. If content is good, request reindexing and wait 2-4 weeks.
Valid with Warnings (Orange)
These pages are indexed but have issues worth fixing.
Indexed, Though Blocked by robots.txt
What it means: Page is in Google’s index, but robots.txt blocks crawling.
Why it’s problematic: Google can’t recrawl to update content or see changes.
How to fix:
To keep indexed with full access: Remove robots.txt block for this page.
To remove from index:
- Temporarily remove robots.txt block
- Add noindex tag to page
- Wait for Google to recrawl and drop the page from the index
- Then re-add robots.txt block if desired
Indexed Without Content
What it means: Page indexed but Google couldn’t see main content.
Causes:
- JavaScript rendering issues
- Content loaded dynamically
- Blocked resources (CSS, JS)
- Access restrictions
How to fix:
Enable JavaScript rendering: Use URL Inspection tool’s “View crawled page” to see what Google sees.
Unblock resources: Allow Googlebot to access CSS and JavaScript:
# In robots.txt, allow CSS and JS
User-agent: Googlebot
Allow: /*.css$
Allow: /*.js$
Server-side rendering: For dynamic content, implement server-side rendering or pre-rendering.
Excluded Status (Gray)
Pages Google found but chose not to index. Not always bad.
Excluded by Noindex Tag
What it means: Page has noindex tag, so Google intentionally didn’t index it.
When it’s expected:
- Admin pages
- Thank you pages
- Login/account pages
- Cart and checkout pages
- Search result pages
When it’s a problem: Important content pages accidentally have noindex.
How to fix (if unintentional):
Check page source for:
<meta name="robots" content="noindex">
If found and you want it indexed, remove the tag.
Common causes:
- Forgot to remove after development
- Plugin setting enabled noindex
- Theme default setting
Duplicate, Google Chose Different Canonical
What it means: You have duplicate pages, and Google picked one to index (not the one you wanted).
Example:
Your page: yoursite.com/product-blue
Canonical Google chose: yoursite.com/products/blue-item
How to fix:
Option 1: Set proper canonical tag
On all duplicate versions, point to your preferred URL:
<link rel="canonical" href="https://yoursite.com/product-blue">
Option 2: Use 301 redirects
Redirect all duplicate URLs to the one main version.
Option 3: Improve preferred page
Make your preferred version clearly the best:
- Add more content
- Get more backlinks
- Improve user experience
Crawled – Currently Not Indexed
What it means: Google visited the page but decided not to add it to index.
Why excluded:
- Content quality too low
- Too similar to other pages
- Page provides little unique value
- Site has quality issues overall
How to improve:
Enhance content:
- Expand to 1,000+ words
- Add unique insights
- Include examples and data
- Make it comprehensive
Differentiate: Make content clearly different from similar pages.
Build authority:
- Get backlinks to the page
- Promote on social media
- Generate engagement
Discovered – Currently Not Indexed
What it means: Google knows about the page (through links or sitemap) but hasn’t crawled it yet.
Why it happens:
- New page recently added
- Low priority page
- Crawl budget limitations
- Page deep in site structure
How to fix:
Request indexing:
- Go to URL Inspection tool
- Enter URL
- Click “Request indexing”
Improve internal linking: Link to page from important pages like homepage or main categories.
Add to sitemap: Include in XML sitemap and submit to Search Console.
Wait: New pages can take days to weeks for initial crawl. Be patient.
Page with Redirect
What it means: Page redirects to another URL.
When it’s expected:
- Intentional redirects (301, 302)
- Moving content to new URLs
- Domain migrations
When it’s a problem: Accidental redirects or redirect chains.
Action needed:
For intentional redirects: This is normal. No fix needed.
For redirect chains: Simplify to single redirect from source to final destination.
For accidental redirects: Remove redirect if page should be directly accessible.
Alternate Page with Proper Canonical Tag
What it means: Page correctly points to another version as canonical. This is good.
Example:
Mobile version: m.yoursite.com/page
Canonical tag points to: yoursite.com/page
Google indexes the canonical version, not the alternate. This is expected behavior.
No action needed: This is working correctly.
Duplicate Without User-Selected Canonical
What it means: Multiple identical pages exist, but you didn’t specify which is primary.
How to fix:
Add canonical tag to all versions pointing to preferred URL:
<link rel="canonical" href="https://yoursite.com/preferred-version">
Or use 301 redirects to consolidate to one URL.
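To verify the canonical tag actually present on each duplicate, you can extract it with the standard library’s HTML parser. The class name and sample HTML are illustrative:

```python
# Pull the canonical URL out of a page's HTML so it can be compared
# against the URL you expect. Sample HTML is illustrative.
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonical = a.get("href")

finder = CanonicalFinder()
finder.feed('<head><link rel="canonical" '
            'href="https://yoursite.com/preferred-version"></head>')
print(finder.canonical)
```

If the extracted value differs across duplicate versions, or is missing, that explains why Google had to pick a canonical on its own.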
Valid Status (Green)
These pages are successfully indexed. This is what you want.
Submitted and Indexed
What it means: Page in your sitemap and successfully indexed. Perfect status.
Meaning: Page can appear in search results for relevant queries.
Monitoring: Track to ensure number of indexed pages stays stable or grows.
Indexed, Not Submitted in Sitemap
What it means: Google found and indexed the page through links, even though it isn’t in your sitemap.
Is it bad? Not necessarily, but adding it to your sitemap is better for:
- Faster discovery of updates
- Clear communication with Google
- Better crawl budget management
Action: Add important pages to sitemap.
How to Fix Coverage Issues
Follow this process for any coverage problem.
Step 1: Identify the Issue
In Coverage Report:
- Click on error or excluded category
- Click on specific issue type
- Click “View examples”
- See which URLs affected
Step 2: Investigate Root Cause
Use URL Inspection Tool:
- Copy affected URL
- Paste into URL Inspection tool
- Review:
- Crawl status
- Coverage status
- Mobile usability
- Indexing allowed/blocked
Check the page:
- Visit URL in browser
- View page source
- Check for noindex tags
- Verify content loads
Step 3: Implement Fix
Based on issue type, apply appropriate fix from sections above.
Step 4: Validate Fix
After fixing:
- Return to Coverage report
- Find the issue
- Click “Validate fix”
- Google will recheck affected URLs
Validation process:
- Takes days to weeks
- Google checks URLs progressively
- Shows validation status
- Confirms when issue resolved
Step 5: Monitor Results
Check weekly:
- Validation progress
- New issues
- Error trends
- Index coverage changes
Goal: All validations pass, errors reduce to zero.
Preventing Coverage Issues
Stop problems before they start.
Regular Monitoring
Weekly checks:
- Review Coverage report
- Check for new errors
- Monitor excluded pages
- Track indexing trends
Monthly audits:
- Full site crawl
- Sitemap verification
- Robots.txt review
- Canonical tag check
Quality Content
Content standards:
- Minimum 300-500 words per page
- Unique, valuable information
- Proper formatting and structure
- Regular updates
Avoid:
- Thin content pages
- Duplicate content
- Auto-generated pages
- Low-value pages
Technical Best Practices
Site structure:
- Clear hierarchy
- Good internal linking
- Shallow site depth (3 clicks max to any page)
- Clean URL structure
Configuration:
- Correct robots.txt
- Proper use of noindex
- Valid sitemaps
- Appropriate canonical tags
Testing Before Launch
Pre-launch checklist:
✓ Remove noindex from important pages
✓ Verify robots.txt allows crawling
✓ Test all pages load correctly
✓ Check mobile responsiveness
✓ Ensure proper canonical tags
✓ Submit sitemap
Monitoring Best Practices
Stay on top of your site’s indexing health.
Set Up Email Alerts
In Search Console:
- Go to Settings
- Enable email notifications
- Choose alert types:
- Coverage issues
- Manual actions
- Security issues
Benefits: Get notified immediately when problems occur.
Track Key Metrics
Important numbers:
- Total valid pages
- Total errors
- Error rate (errors/total pages)
- Excluded pages (review if intentional)
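The error-rate metric above is simple arithmetic; a tiny sketch makes the baseline calculation concrete. The counts are illustrative, not real Search Console data:

```python
# Compute the error rate from Coverage report counts.
# The numbers below are illustrative.
valid, errors, excluded = 1240, 35, 410

total = valid + errors + excluded
error_rate = errors / total

print(f"Total pages tracked: {total}")
print(f"Error rate: {error_rate:.1%}")
```

Recording this number monthly gives you the baseline against which spikes or improvements are measured.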
Create baseline: Document current status to measure changes.
Regular Reporting
Create monthly report:
- Valid pages trend
- New errors found
- Errors fixed
- Excluded pages status
- Actions taken
