Author: Google Search Central


In these videos, we aim to answer your questions about SEO and Google search. These questions were submitted using the form linked below over the past month. To answer them, we have folks from the Google search quality team joining me today.

Google Indexing Issues

Wrongly Indexed www Version of the Website

Von asks why Google indexes the www version of their website instead of the https version without the www, which they consider correct. Ron, from the search quality team, explains that upon reviewing Von's pages, the server automatically redirects from the non-www to the www version and sets the rel="canonical" link element accordingly. Ron points out that Chrome may initially hide the www in the address bar; double-clicking the URL expands it to the full address, www included. Ron assures that both the www and non-www versions of a site work equally well in Google Search.
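A quick way to see which canonical a page declares is to parse its HTML for the rel="canonical" link element. The sketch below uses Python's standard-library HTMLParser; the example.com URLs are placeholders, not Von's actual site.

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects href values from <link rel="canonical"> elements."""

    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if tag == "link" and (attr.get("rel") or "").lower() == "canonical":
            self.canonicals.append(attr.get("href"))

# Placeholder page: a non-www URL whose canonical points at the www version.
page = '<html><head><link rel="canonical" href="https://www.example.com/"></head></html>'
finder = CanonicalFinder()
finder.feed(page)
print(finder.canonicals)  # ['https://www.example.com/']
```

If both the www and non-www pages declare the same canonical, Google is simply honoring the site's own signals.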

Data Filtering in Search Console

Ornella asks Gary why filtered data in Search Console can show higher numbers than the overall data. Gary explains that Google makes heavy use of Bloom filters to handle large amounts of data efficiently. A Bloom filter is a compact set containing hashes of the items in the main set, which allows for very fast lookups; however, because only hashes are stored rather than the items themselves, some information is lost. That loss means a Bloom filter can only predict whether something probably exists in the main set, so its answers are not perfectly accurate. Gary emphasizes that Bloom filters speed up lookups at the expense of absolute accuracy, which can explain discrepancies between filtered and overall numbers.
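Gary's description can be sketched with a toy Bloom filter in Python. This is an illustrative sketch of how such filters work in general, not Google's implementation: only hash-derived bit positions are stored, so a lookup can return a false positive but never a false negative.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: stores bit positions derived from item hashes,
    never the items themselves. Lookups are fast, but a 'probably present'
    answer can be wrong; a 'definitely absent' answer is always correct."""

    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = 0  # bit array packed into one Python int

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # True means "probably in the set"; False means "definitely not".
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("https://example.com/page-1")
print(bf.might_contain("https://example.com/page-1"))  # True: no false negatives
```

The trade-off Gary describes is visible here: membership checks never touch the original data, which is why they are fast but only probabilistic.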

Indexing Issues with Google Sites

A question submitted in French inquires about the improper indexing of Google Sites web pages. Gary explains that websites created on Google Sites can indeed get indexed in Google search. However, the URLs used in Google Sites can be difficult to track since the public version may differ from the URL seen when logged in. He advises that although technically indexable, it might not be ideal for SEO purposes and can be complex to track in Search Console. He suggests exploring other options and considering the pros and cons before committing to Google Sites. For better performance tracking in Search Console, using your own domain name for Google Sites content is recommended.

Crawling Links on a Website

Sir Object is concerned about whether Google can crawl links that are only fetched when a button on the website is clicked. Gary responds that Google generally does not click buttons while crawling, so such links may go undiscovered unless they also appear as regular links in the HTML.

Common Webmaster Questions Answered by Google

In the fast-paced world of digital marketing, webmasters often have several burning questions, yearning for answers and clarification. Thankfully, Google's team of experts have come forward to shed light on some of the most commonly asked questions. Below, we have compiled a list of these inquiries and provided detailed insights into each topic.

1. Disallowed Sources and JavaScript Blocking

Webmasters frequently wonder about the impact of robots.txt and JavaScript blocking on their website's visibility. If a resource is disallowed by the robots.txt file, or if content is pulled in through JavaScript that is itself blocked by robots.txt, it will not be crawled by search engine robots. However, complex measures to hide menus and elements through JavaScript should not be applied unnecessarily, as they may lead to unexpected breakages. It is advisable to use such techniques only when absolutely necessary.
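The robots.txt behavior described above can be checked locally with Python's standard-library robotparser. The rules and URLs below are made up for illustration (example.com is a placeholder).

```python
from urllib import robotparser

# Hypothetical robots.txt rules for illustration.
rules = """\
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(rules)  # parse in-memory rules instead of fetching over HTTP

print(parser.can_fetch("Googlebot", "https://example.com/private/data"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/public/page"))   # True
```

Any JavaScript-fetched resource whose URL falls under a Disallow rule will fail the same check, so the content it would have pulled in never reaches the index.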

2. Infinite Scrolling Implementation and Organic Traffic

The question arises whether implementing infinite scrolling on web pages could have any implications on organic traffic or affect Google's bots. The answer depends on the implementation method. If each "virtual" page generated through infinite scrolling is accessible and discoverable with a unique URL, it should not pose any problems for indexing and organic traffic. However, it is crucial to ensure that the mobile version of the page contains complete content for effective indexing since Google now prioritizes the mobile version for indexing purposes.

3. Hidden Links Behind JavaScript Toggle

Ryan seeks clarity on the impact of links hidden behind JavaScript toggles on the desktop version of web pages. These links are visible on the mobile version but are not included in the desktop HTML unless clicked. Google's response states that with the shift to mobile-first indexing, the mobile version of the page serves as the basis for indexing and link discovery. Therefore, if the mobile version contains the full content, these hidden links should not be devalued. That said, it is worth examining why the desktop version carries less content in the first place.

4. Indexing PDF Files Saved on Google Drive

An anonymous user poses the question of whether Google indexes PDF files saved on Google Drive that are not hosted on a website. According to Google, public PDF files hosted on Google Drive can indeed be indexed as they are treated as any other URL. The indexing process may vary in speed, ranging from a few seconds to an extended period. Google's bots treat these files as separate URLs and index them accordingly.

5. Scroll Jacking and its Impact on Rankings

Matt's question highlights the increasing popularity of scroll jacking on websites. Scroll jacking refers to overriding the user's normal scroll behavior, which is often perceived as a poor user experience. Google responds that while scroll jacking is not treated as abusive, it can cause technical problems during rendering. Because Google renders pages by simulating a very tall mobile device, content that appears only in response to scroll events may never be revealed during rendering, leading the algorithms to assume the content is not visible. Hence, scroll jacking tends to cause rendering issues rather than direct quality penalties.

6. URL Indexing Despite Robots.txt Block

Denise Khan Aral seeks an explanation for URLs being indexed despite being blocked by the robots.txt file in Google Search Console (GSC). Google clarifies that although blocked URLs are generally not indexed, there may be exceptions for highly sought-after content. However, the number of such URLs in the index is negligible. Webmasters who want to prevent indexing of blocked URLs can allow crawling of the URLs and implement a noindex rule through HTTP headers or meta tags. Further information on noindexing is available in Google's documentation.
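The "allow crawling, block indexing" approach mentioned above can be sketched as follows. This is a hypothetical WSGI-style handler, not code from the video: the page is served normally so it can be crawled, but an X-Robots-Tag response header tells search engines not to index the URL.

```python
# Hypothetical WSGI application (names are illustrative). The page is
# served normally, so crawlers can fetch it, but the X-Robots-Tag header
# instructs search engines not to index the URL.
def app(environ, start_response):
    body = b"<html><body>Crawlable but not indexable</body></html>"
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Content-Length", str(len(body))),
        ("X-Robots-Tag", "noindex"),  # crawl allowed, indexing disallowed
    ]
    start_response("200 OK", headers)
    return [body]
```

Note that this only works if robots.txt permits the URL to be crawled; for a blocked URL, Google never fetches the response, so it never sees the noindex signal.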

7. AI-generated Content and its Remediation

Sonya's query revolves around dealing with AI-generated content provided by content writers. Google suggests that blindly publishing external authors' content, including AI-generated content, without proper review is ill-advised. For low-quality content already published, webmasters have the option to either remove or improve it. However, a wiser approach would be to assess if the content adds unique value to users or merely duplicates existing content. Developing a clear content strategy and adhering to quality processes are pivotal for a successful website.

8. Spike in Indexed URLs and its Causes

Lorenzo seeks to understand a sudden spike in indexed URLs. Generating hypotheses, Google postulates that the increase could stem from various factors on Google's side, such as additional indexing capacity (more hard drives or freed-up space), or simply the discovery of new URLs on the site. Since the specific reason cannot be determined without more information, Google encourages webmasters to celebrate the growth with cautious optimism.

9. Multisize Favicon Files and Google's Recognition

Dave wonders if Google can use multiple sizes from a single favicon file, particularly when the sizes are marked up with appropriate attributes. Google affirms that while the .ico file format can contain multiple resolutions, specifying individual files and sizes is generally more reliable, given the growing number of sizes used for different purposes. Google does support multiple favicon sizes declared in HTML, so it is recommended to provide specific sizes when needed.

10. Website Evaluation Based on Different CMS

Anna inquires whether Google evaluates different parts of a website differently based on the Content Management System (CMS) used. Google clarifies that the evaluation process remains consistent regardless of the CMS employed. Google's algorithms focus on content quality, relevance, and factors that contribute to a positive user experience, all of which are CMS-agnostic.

This comprehensive list of questions and answers provides valuable insights for webmasters aiming to optimize their websites and understand Google's approach to various web development practices. However, it is important to remember that these responses are subject to updates and revisions, as Google continuously evolves its algorithms and guidelines.

Common Search Console Issues and Solutions


When it comes to managing a website and optimizing its visibility on search engines, various challenges and questions arise. In this article, we address some of the common issues that webmasters and SEO professionals encounter while using Search Console. By providing explanations and solutions, we aim to help you understand and resolve these issues effectively.

1. Home Page Not Indexed

Issue: Upon searching for their website on Google, some users find that a different page, such as a product page, is displayed instead of the desired home page.

Explanation: Google determines search results based on various factors, including user intent. If a particular page is perceived to be more relevant for a specific query, it may appear as the top result, even if it is not optimized for search engines.

Solution: To ensure that your home page is indexed and more likely to appear in search results, it is essential to understand the different ways users might arrive at your website and cater to their needs accordingly. By providing comprehensive and relevant content, you can enhance the overall user experience and increase the visibility of your desired landing page.

2. Improving Interaction to Next Paint (INP)

Issue: A website owner receives a Search Console alert about Interaction to Next Paint (INP) issues and seeks guidance on how the data is calculated and the easiest way to address it.

Explanation: INP (Interaction to Next Paint) measures how quickly a webpage responds to user interactions. Google provides comprehensive documentation on INP and on ways to improve the score. However, it is important to note that while improving INP can enhance user experience, it may not directly impact search rankings.

Solution: To obtain detailed information and practical steps to improve your website's INP, we recommend referring to the documentation available on the web.dev site. By following the best practices described there, you can enhance the performance of your website and provide a seamless user experience.

3. Removing Hacked URLs from Search Console

Issue: A website owner discovers a Japanese keyword hack on their site and seeks assistance in eliminating the hacked URLs from Search Console.

Explanation: Getting hacked is a distressing experience for any website owner. In a Japanese keyword hack, the injected content is often cloaked, so what Google sees may differ from what the site owner sees, making it vital to seek professional help to ensure complete removal. Given the quantity of hacked pages typically involved, it is advisable to focus on manually removing the most visible pages; the remaining URLs will usually drop out of search results over time.

Solution: While the hacked content may still be discoverable by determined individuals, your goal should be to make your website's search results look acceptable to the average user. For more information and guidance on dealing with this type of hack, refer to the relevant guidance on web.dev.

4. Pages Getting De-indexed after Indexing Requests

Issue: Website pages submitted for indexing through Search Console repeatedly get de-indexed.

Explanation: This issue suggests that Google's systems are not entirely convinced of the value and quality of the website's content. Not all pages from a website are indexed by search engines, and some fluctuation in indexing status is to be expected. Google's systems continuously reassess content and site quality, which can result in certain pages being de-indexed.

Solution: Instead of continuously pushing for indexing of specific pages, it is advisable to focus on establishing overall website quality and uniqueness. By demonstrating the value your website adds to the web and aligning it with the expectations of users, Google's systems will naturally recognize and index relevant pages. Patience and a holistic approach to website optimization are crucial.


Effective management of Search Console can contribute to improved website visibility and user experience. By addressing common issues and providing solutions, we hope to empower webmasters and SEO professionals to overcome these challenges successfully. If you have any further questions or require additional assistance, we encourage you to engage with the Search Central Help Community or submit your inquiries using the form linked below. We value your feedback and look forward to further discussions and improvements in website performance. May your site's traffic thrive, and crawl errors diminish!

Thank you for reading and stay tuned for more valuable insights and updates.
