Post by amirmukaddas on Mar 11, 2024 0:32:21 GMT -7
Search Console is the tool I would struggle most to do without when carrying out a good SEO analysis. In this new version, Google gives us much more information on index coverage. Let's look at exactly what this coverage is and how to interpret the new data.

What is index coverage

If the indexing status gives us the overall number of pages that Google has registered and still holds, index coverage can be understood as the way those contents are distributed within the index. For example, if your site has 10,000 pages indexed, it is possible that only 500 of them are considered useful for readers, while the others may have problems. Index coverage is the new Search Console feature that segments the pages of your website by crawling and indexing problem, and for each segment lists which pages are affected. This is a very valuable help in reading, or rather seeing, how Google serves our website, without resorting to the tools and workarounds that we SEOs normally have to use to get the same information.
Index coverage

While the "old" Search Console only showed 4XX, 5XX and soft 404 codes, the new one extends the analysis to noindex pages, to 3XX, to pages with a proper canonical, to duplicated pages without a canonical, and to pages crawled but not indexed. As if that weren't enough, it also tells us which pages fall into each group, to the point of covering the entire Google index for our site.

Data reliability

Fantastic, wonderful, but no. For the moment, all this information does not seem to be much more than a nice statement of intent. Google's intention is to offer a platform that simplifies the work of SEOs and of anyone who looks after their own web project, but in reality, as far as I have been able to verify by analysing about ten new Search Console profiles, the pages reported as noindex are often 404s, as are those shown as redirected or duplicated without a canonical. The pages crawled but currently not indexed are often actually indexed: they are visible in the SERPs and have a normal cached copy. In essence, Google reports things that do not match what you can verify by eye, drawing the ire of the most technically demanding (if you update, do it properly), but also the indulgence of those who know that "search" is a process that takes time and continuous adjustment.
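If you want to run the same kind of spot check yourself, here is a minimal sketch (not from the original post) of how you might verify what the URLs in a "noindex" report actually return. The URL list, file handling and the meta-tag regex are simplified, hypothetical placeholders; adjust them to your own export.

import re
import requests

# Crude pattern for a robots meta tag containing "noindex"; attribute order
# and quoting vary in the wild, so treat this as an approximation.
META_NOINDEX = re.compile(r'<meta[^>]+robots[^>]+noindex', re.IGNORECASE)

def check_url(url, timeout=10):
    """Fetch a URL without following redirects and report its real status."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=False)
    except requests.RequestException as exc:
        return {"url": url, "status": None, "noindex": None, "error": str(exc)}

    header_noindex = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
    body_noindex = bool(META_NOINDEX.search(resp.text))
    return {
        "url": url,
        "status": resp.status_code,                   # e.g. 200, 301, 404
        "redirect_to": resp.headers.get("Location"),  # only set for 3XX
        "noindex": header_noindex or body_noindex,
    }

if __name__ == "__main__":
    # Hypothetical sample: URLs exported from the "Excluded by noindex" report.
    reported_as_noindex = [
        "https://www.example.com/old-page/",
        "https://www.example.com/category/page-2/",
    ]
    for url in reported_as_noindex:
        print(check_url(url))

A page that Search Console calls "noindex" but that this check shows as a plain 404 (or a 301 with no noindex directive anywhere) is exactly the kind of mismatch described above.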
The future of the search console

I believe it will take the whole of 2018 to refine the data segmentation process, and it will probably never be fully precise, just as the old Search Console still isn't: it sometimes reports 404 errors for pages removed years ago that no longer receive links from anywhere. It would be nice if Google could purge from its indexes the information on these ghost pages, which instead remain stuck somewhere in the dark recesses of its data centres scattered across the planet (who knows which one). In short: more deep crawls, more resources, fewer half-baked innovations. In the meantime, we carry on using Screaming Frog and Analytics alongside the old and (more or less) reliable Search Console. Cross-referencing multiple filtered data sources remains the approach that gives the most accurate picture of a website's health with respect to crawling and indexing.
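As a rough illustration of what "cross-referencing multiple filtered data sources" can look like in practice, here is a sketch that merges a Screaming Frog crawl export with a Search Console coverage export and flags disagreements. The file names and the "URL"/"Coverage" column labels are assumptions; check them against your own exports before running.

import pandas as pd

# Screaming Frog "Internal" export: typically has "Address" and "Status Code".
crawl = pd.read_csv("screaming_frog_internal.csv")

# Search Console coverage export: assumed here to have "URL" and "Coverage".
coverage = pd.read_csv("search_console_coverage.csv")

merged = crawl.merge(
    coverage,
    left_on="Address",
    right_on="URL",
    how="outer",
    indicator=True,  # records which source each URL came from
)

# Pages Search Console labels as noindex but the crawler sees as 404:
suspicious = merged[
    merged["Coverage"].str.contains("noindex", case=False, na=False)
    & (merged["Status Code"] == 404)
]
print(suspicious[["Address", "Status Code", "Coverage"]])

# URLs present in only one of the two sources are also worth a manual look.
print(merged["_merge"].value_counts())

The point is not the script itself but the habit: no single platform tells the whole story, and the discrepancies between them are usually where the real problems hide.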