To get your content in front of the eyes of potential customers, you have to optimise it for search. There’s no point in creating valuable content on your website if Google isn’t going to recognise it. That’s why it’s so important to make your content as search engine friendly as possible. Search engines use a number of different algorithms to determine what content to show users and what content to ignore.
It’s no secret that Google is the king of search engines. Google processes over half of all web searches in the United States and is responsible for ranking website pages in its search results. Google uses a variety of signals to determine the relevance of a page in its search results, including the number of backlinks pointing to that page.
However, there are cases where Google will not index a website or its individual pages, regardless of those ranking signals.
If you’ve ever taken a look at your website’s Index Coverage report in Google Search Console, you may have come across the “Discovered — currently not indexed” status. What does this mean for your website, and how can you fix it? In this article, we’ll explain what the “Discovered — currently not indexed” status means, and we’ll provide some tips on how to get your website out of this state.
Google’s documentation defines the “Discovered — currently not indexed” status in Google’s Index Coverage report as:
Discovered – currently not indexed: The page was found by Google, but not crawled yet. Typically, Google tried to crawl the URL, but the site was overloaded; therefore Google had to reschedule the crawl. This is why the last crawl date is empty on the report.
John Mueller has said the cause can be accidentally auto-generating too many URLs, a poor internal linking structure, or a sign that reducing the number of pages would make the overall site stronger.
Using AI, I found that the main reasons for the “Discovered – currently not indexed” message are these:
- Non-original or thin content
- Duplicated or similar content
- Crawl issues
- Poor internal linking structure
- A new site
How to fix “Discovered – currently not indexed”?
The first potential problem is your robots.txt file.
Robots.txt files are a great way to manage your website’s content and structure. This file tells Google which pages on your website it should and shouldn’t crawl. But if you’re not using robots.txt correctly, you could be blocking your website from being indexed by Google.
If you use a robots.txt file to restrict which pages can be crawled, Googlebot will not crawl those pages, which can keep them out of the index and make them harder for searchers to find.
SEO tip: Check the Disallow rules in your robots.txt file. You can check blocked URL resources with this tool. Also pay attention to the issues covered in the following sections.
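As a quick sanity check, Python's standard library can tell you whether Googlebot is allowed to fetch a given path under your robots.txt rules. The rules and URLs below are hypothetical; in practice you would point the parser at your live file with set_url() and read():

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules for illustration only
RULES = """\
User-agent: *
Disallow: /private/
Disallow: /tmp/
"""

parser = RobotFileParser()
parser.parse(RULES.splitlines())

# Check whether Googlebot may crawl specific paths
for path in ("/blog/post-1", "/private/draft"):
    allowed = parser.can_fetch("Googlebot", f"https://example.com{path}")
    print(path, "crawlable" if allowed else "blocked")
```

Any path reported as blocked here will never be crawled, no matter how many internal links point to it.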
Duplicated or non-original content
When you write or publish content on the Internet, you expect your version to be the one your intended audience sees. But if your content is duplicated across multiple websites, or you have published non-original (thin) content, you run the risk of being caught by Google's duplicate content penalty.
Google’s algorithms try to determine the “original” version of a given web page or website, and then penalize the “duplicate” version of the page. This can have a significant impact on a website’s search engine rankings, organic traffic, and site reputation.
John Mueller said that if the content is essentially the same, folding the URLs together is to be expected. With hreflang, the URLs can still be swapped out in the search results; they are just not indexed or reported on individually.
SEO tip: This is one of several ways to fix the “Discovered – currently not indexed” issue. It is better to implement <meta name="robots" content="noindex, follow" /> on such URLs. This step matters for such pages because duplicated content can also trigger DMCA abuse reports against your website.
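To verify that a noindex directive actually made it into a page's markup, you can scan the HTML with Python's standard library. The sample page below is hypothetical:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.robots_content = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name", "").lower() == "robots":
                self.robots_content.append(attrs.get("content", ""))

def is_noindex(html: str) -> bool:
    """Return True if the page carries a robots noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in c.lower() for c in parser.robots_content)

# Hypothetical page markup
page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(is_noindex(page))  # True
```

Running this over the duplicate URLs in question confirms the tag is in place before you wait on Google to recrawl.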
Poor internal linking structure
One of the most important aspects of a great website is a strong internal link structure: the more internal links point to a page, the more likely Google is to index it and bring new visitors to your website.
However, if your website has a poor internal link structure, that could be why Google isn’t indexing your site.
SEO tip: Check your URLs with SEO tools like Netpeak Spider or JetOctopus. Look for orphaned URLs with no internal links pointing to them, and build a solid internal link structure.
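The idea behind such an orphan-page audit can be sketched in a few lines. Given a map of internal links (the one below is hypothetical; a crawler would produce the real thing), any page that no other page links to is an orphan:

```python
# Hypothetical internal link graph: page -> pages it links to
site = {
    "/": ["/blog/", "/about/"],
    "/blog/": ["/blog/post-1/", "/"],
    "/about/": ["/"],
    "/blog/post-1/": ["/blog/"],
    "/blog/post-2/": [],        # no page links here: an orphan
}

# Every page that receives at least one internal link
linked = {target for links in site.values() for target in links}

# Pages with zero inbound internal links (the homepage needs none)
orphans = sorted(set(site) - linked - {"/"})
print(orphans)  # ['/blog/post-2/']
```

Each orphan you find should get at least one contextual link from a relevant, already-indexed page.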
Your website is new
If your website is relatively new, some of your pages may be marked as “Discovered – currently not indexed.” It can take Google a little while to find and index all the pages on your site.
In the meantime, if you want to increase the chances that your pages will show up in Google searches, you can use some other strategies.
SEO tip: Check such URLs with the URL Inspection tool in Google Search Console and press the Request indexing button. You may do the same using the API.
You can combine all URLs with the status “Discovered – currently not indexed” into a separate sitemap file, and send it to Google Search Console.
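A minimal sketch of building such a sitemap with Python's standard library, assuming a hypothetical list of affected URLs exported from the report:

```python
from xml.etree import ElementTree as ET

# Hypothetical URLs with the "Discovered - currently not indexed" status
urls = [
    "https://example.com/blog/post-1/",
    "https://example.com/blog/post-2/",
]

# Build the <urlset> root with the standard sitemap namespace
urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for u in urls:
    ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = u

sitemap_xml = ET.tostring(urlset, encoding="unicode")
print(sitemap_xml)
```

Save the output as an XML file, upload it to your site, and submit it under Sitemaps in Google Search Console.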
Google doesn’t index all pages on all websites.
Google has also commented on indexing and quality on Twitter, and those posts are worth reading as well.