The Google Search Console status “Crawled – currently not indexed” means that Google knows about a URL and has crawled and examined it, but has decided not to index it.

According to Google’s documentation, the Crawled – currently not indexed status means:

The page was crawled by Google, but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling.

Source: Google

Google’s definition doesn’t say much about what happened or what you should do next. It only states that Googlebot visited your page but, for whatever reason, chose not to index it.
According to our research, Crawled – currently not indexed is the most frequently reported issue in the Index Coverage report, which means you have probably already run into it or will at some point.

The issue should be resolved as quickly as possible. After all, if your page isn’t indexed, it won’t appear in search results and won’t receive any organic traffic from Google.

This article discusses the potential causes of the Crawled – currently not indexed status, along with solutions.

These are some potential causes of this problem:

  • Indexing delays.
  • Low-quality content.
  • Deindexing due to insufficient quality.
  • Duplicate content.

The status can be found in Google Search Console’s URL Inspection Tool and Index Coverage report.

Index Coverage report

Pages with the Crawled – currently not indexed status fall under the “Excluded” category, which indicates that Google does not consider the lack of indexing to be an error.

After selecting the Crawled – currently not indexed status, you’ll get a list of the affected URLs. Review it and focus on fixing the problem on the pages that matter most to you.

The report can also be exported, but only up to 1,000 URLs at a time. If more pages are affected, you can increase the number of exported URLs by filtering the report by sitemap. For instance, if you have two sitemaps containing 1,000 URLs each, you can export them separately.

URL Inspection Tool

Google Search Console’s URL Inspection Tool can also let you know about URLs that have been crawled but are not yet indexed.

The top section of the tool tells you whether a URL can appear on Google. If the inspected URL falls under the Excluded category in the Index Coverage report, the URL Inspection Tool will state: “The page is not in the index, but not because of an error”.

Further down, the tool provides more detailed information about the inspected URL’s current Coverage state, such as Crawled – currently not indexed.

Your page might actually be indexed

The first thing you should do after seeing the Crawled – currently not indexed status is check whether your page is actually not indexed.

It’s not unusual to see a page categorised in the Index Coverage report as Crawled – currently not indexed even when the URL Inspection tool shows that the page is indeed indexed.

Using the URL Inspection tool, you can examine information about a particular URL, including:

  • Indexing issues and structured data errors,
  • Mobile usability,
  • Loaded resources (e.g., JavaScript).

You can also request indexing for a URL or view a rendered version of the page.
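
If you need to check many URLs, inspecting them one by one in the interface quickly becomes tedious. Below is a minimal sketch of how you might query a URL’s coverage state programmatically through the Search Console URL Inspection API with the google-api-python-client library. It assumes you have already set up a service account with access to your property; the key file name, property URL, and page URL are placeholders.

```python
# Minimal sketch: read a URL's coverage state via the Search Console
# URL Inspection API. Requires: pip install google-api-python-client google-auth
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Read-only Search Console scope (assumed sufficient for inspection).
SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]

# Hypothetical service-account key file authorised for the property.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
service = build("searchconsole", "v1", credentials=creds)

response = service.urlInspection().index().inspect(
    body={
        "inspectionUrl": "https://www.example.com/some-page/",  # page to check
        "siteUrl": "https://www.example.com/",                  # your property
    }
).execute()

# The coverage state mirrors the Index Coverage report,
# e.g. "Crawled - currently not indexed".
index_status = response.get("inspectionResult", {}).get("indexStatusResult", {})
print(index_status.get("coverageState"))
```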

During one of Google’s SEO Office Hours, John Mueller addressed the discrepancies between the Index Coverage report and the URL Inspection tool.

As John noted, it might just be a matter of data synchronisation and delay between these two tools, and as time passes, the Index Coverage report may update to reflect the current situation.

It’s not always just a delay, though. Sometimes it’s a bug with reporting.

Causes and remedies for the Crawled – currently not indexed status

Let’s investigate the issue further to determine what triggers the status to emerge and what you can do to resolve it.

Google doesn’t explicitly say why your page was crawled but not indexed, but there are a few potential explanations, such as:

  • Indexing delay,
  • Page doesn’t meet quality expectations,
  • Page got deindexed,
  • Website architecture issues,
  • Duplicate content.

Indexing delay

Google often crawls a page but takes some time to index it.
Because of the sheer size of the Internet, Google has to prioritise which pages get indexed first.

You might need to wait a little longer for Google to index your material if you recently launched your page and it hasn’t been indexed yet.

Indexing delay remedies

In the short term, you have little control over how your page is crawled and indexed, but there are several things you can do to benefit your website over time:

  • Develop an indexing strategy so Google prioritises the right pages on your site. To do this, decide which pages should be indexed and how best to let Google know about them.
  • Make sure the pages you care about have internal links. This makes it easier for Google to find them and understand their context.
  • Create an optimised sitemap: a simple file listing your important URLs that helps Google locate those pages more quickly (a minimal example follows below).
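
For reference, here is a minimal sketch of generating such a sitemap with nothing but the Python standard library; the URLs are placeholders for your own important pages.

```python
# Minimal sketch: build a sitemap.xml listing the pages you want indexed.
import xml.etree.ElementTree as ET

# Placeholder URLs; in practice, pull these from your CMS or database.
IMPORTANT_URLS = [
    "https://www.example.com/",
    "https://www.example.com/category/widgets/",
    "https://www.example.com/blog/how-to-choose-a-widget/",
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for page in IMPORTANT_URLS:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = page

# Write the file with an XML declaration so search engines can parse it.
ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```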

Page doesn’t meet quality expectations

Google is unable to index every page on the Internet. Because of its limited storage capacity, it must screen out low-quality content.

Google wants to deliver the highest-quality pages that best satisfy user intent. This means that if a page is of poor quality, Google will likely skip it to free up room for higher-quality content. We can also expect quality standards to keep tightening.

Page quality remedies

As the website owner, make sure your page offers high-quality content. Check whether it is likely to meet your users’ needs and, if not, improve it. Google provides a set of questions you can use to assess the value of your content. Here are a few of them:

  • Does the content provide original information, reporting, research or analysis?
  • Does the content provide insightful analysis or interesting information that is beyond the obvious?
  • Is this the sort of page you’d want to bookmark, share with a friend, or recommend?
  • If the content draws on other sources, does it avoid simply copying or rewriting those sources and instead provide substantial additional value and originality?

You can also consult Google’s Search Quality Rater Guidelines for advice on creating high-quality content. Although the document is primarily intended for Search Quality Raters who evaluate websites, webmasters can use it to gain insights into how to improve their own sites.

User-generated content

In terms of quality, user-generated content might be a problem.

Let’s say you run a forum and someone posts a question there. Insightful replies may appear later, but if none were present at the time of crawling, Google may treat the page as low-quality content.

Page got deindexed

A URL may also show the Crawled – currently not indexed status because Google decided to deindex it after it had previously been indexed.

If you’re wondering why, it’s possible that the pages that disappeared from the index were simply replaced by higher-quality content.

You should also keep an eye out for algorithm updates. It’s possible that a new update rolled out and affected your page.

Unfortunately, a Google bug can also be the cause of deindexing. For instance, Google once deindexed Search Engine Land after mistakenly flagging the site as hacked.

Page got deindexed remedies

Fixing deindexed pages comes down largely to page quality. Always make sure your page is up to date and offers the best possible content. Don’t assume a page no longer needs attention once it has been indexed; keep monitoring it and make adjustments or improvements as needed.

Once you’ve addressed the problems, you can request indexing for those URLs in Google Search Console so Google notices the changes more quickly.

Website architecture issue

When asked about potential causes of the Crawled – currently not indexed status, John Mueller said that poor website structure could also be a factor.

Imagine that you have a high-quality page, but Google only noticed it because you included it in your sitemap.

Since the page has no internal links, Google might crawl and examine it but conclude that it is less valuable than other pages. There is no semantic or structural context to help evaluate it, which could lead Google to prioritise other pages and leave this one out of the index after crawling it.

Website architecture issue remedies

A well-designed website helps increase your chances of being indexed.
It lets search engine bots find your content and better understand the relationships between pages. That’s why it’s essential to have a solid website architecture and to make sure every page you want indexed has internal links pointing to it.
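
For a quick sanity check, the rough sketch below fetches a handful of your pages, collects the links they contain, and flags any URL from your sitemap that nothing links to. It uses only the Python standard library; the URL list is a placeholder, and a real audit would crawl the whole site and normalise trailing slashes and tracking parameters.

```python
# Rough sketch: flag sitemap URLs that no crawled page links to internally.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

# Placeholder list of pages taken from your sitemap.
SITEMAP_URLS = [
    "https://www.example.com/",
    "https://www.example.com/blog/how-to-choose-a-widget/",
    "https://www.example.com/orphaned-landing-page/",
]

class LinkCollector(HTMLParser):
    """Collects absolute link targets from <a href> tags."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.add(urljoin(self.base_url, href).split("#")[0])

found_links = set()
for page in SITEMAP_URLS:
    try:
        html = urlopen(page, timeout=10).read().decode("utf-8", errors="replace")
    except OSError:
        continue  # skip pages that fail to load
    collector = LinkCollector(page)
    collector.feed(html)
    found_links |= collector.links

for page in SITEMAP_URLS:
    if page not in found_links:
        print(f"No internal link found pointing to: {page}")
```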

Duplicate content

Google wants to provide people with interesting and valuable information. If it discovers during crawling that several of your pages are similar or nearly identical, it may index only one of them.

The other page typically receives a “Duplicate” status in the Index Coverage report, but that isn’t always the case; occasionally Google assigns Crawled – currently not indexed instead.

It’s unclear why Google would choose Crawled – currently not indexed over a dedicated duplicate status. One explanation is that the status may change later, once Google decides how to handle the page.

How can you check whether Google indexed a duplicate page instead of yours?
  • Copy a random section of text from the non-indexed page.
  • Paste it into Google Search, wrapped in quotation marks.
  • Examine the results. If a different URL containing the copied text appears, Google may have skipped your page because it chose another URL to index.
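
If you run this check often, a tiny helper can build the exact-match search URL for you. The snippet below is just a convenience sketch; the copied sentence is a placeholder.

```python
# Convenience sketch: build an exact-match Google search URL for a copied snippet.
from urllib.parse import quote_plus

snippet = "a distinctive sentence copied from the non-indexed page"  # placeholder
search_url = "https://www.google.com/search?q=" + quote_plus(f'"{snippet}"')
print(search_url)  # open this in a browser and see which URL Google returns
```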

Duplicate content remedies

First and foremost, make sure your pages are original. If needed, rewrite or expand them with unique content.

Unfortunately, some duplicate content can’t be avoided (e.g., you have separate mobile and desktop versions). You can’t force what shows up in search results, but you can give Google hints about which version is the original.

If you see a lot of duplicate content indexed, examine the following elements:

  • Canonical tags: use them to tell search engines which version is the original (see the sketch below for a quick way to check them).
  • Internal links: make sure they point to your canonical content; this signals to Google which page matters more.
  • XML sitemaps: make sure your sitemap contains only canonical URLs.
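
To spot-check canonical tags, you can fetch a page and read its rel="canonical" link programmatically. The sketch below uses only the Python standard library; the page URL is a placeholder, and it assumes the canonical tag sits in the raw HTML rather than being injected by JavaScript.

```python
# Minimal sketch: print the canonical URL declared by a page.
from html.parser import HTMLParser
from urllib.request import urlopen

class CanonicalFinder(HTMLParser):
    """Stores the href of the first <link rel="canonical"> tag found."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonical = self.canonical or attrs.get("href")

page = "https://www.example.com/some-page/"  # placeholder URL
html = urlopen(page, timeout=10).read().decode("utf-8", errors="replace")
finder = CanonicalFinder()
finder.feed(html)
print(f"{page} declares canonical: {finder.canonical}")
```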

Conclusion

Here are the main takeaways from this article for dealing with the Crawled – currently not indexed status:

  • Fill your pages with interesting and useful content. When you’re done, request indexing for those URLs in Google Search Console so Google picks up the changes sooner.
  • Review the internal links on your website and make sure they point to the important pages.
  • Select the pages that should and shouldn’t be indexed to assist Google in giving priority to the most important URLs.

Although Crawled – currently not indexed is typically associated with page quality, it can actually point to a wide range of issues, including duplicate content and poor website architecture.
