How Smart is Googlebot?

The question of just how smart Google’s website crawler, Googlebot, actually is has often been raised within the world of search. The simple fact is that only Google really knows, but some believe it is no coincidence that Google developed the Chrome browser.

After all, why would the world’s largest advertising platform and search company bother to enter the competitive world of web browsers? And why would it spend so much to develop something it then gives away for free?

Of course, if you look at the bigger picture, controlling the web browser market makes it that much easier to track people’s habits, grab their data and ultimately serve adverts. But the timing of Chrome suggests this wasn’t the only reason.

It makes sense that Chrome is in fact a search crawler with added usability and a fancy front end, and it makes even more sense that Google’s web crawler is a web rendering engine. This is especially true when Google’s patents are taken into account, such as the “Reasonable Surfer” patent, the “Document segmentation based on visual gaps” patent and the “Ranking documents based on user behavior and/or feature data” patent.

This way the crawler can take advantage of rendering a web page to work out which elements are the header, body, footer and navigation once CSS and JavaScript have been applied, rather than relying on source order alone.
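
To give a rough idea of what rendering-based segmentation might look like, here is a minimal sketch of the general technique, not Google’s actual pipeline. It assumes Playwright is installed (pip install playwright, then playwright install chromium) and simply labels landmark elements by where they land on screen after layout:

```python
from playwright.sync_api import sync_playwright

def classify_regions(url: str) -> dict:
    """Render a page and label common landmark elements by where they
    actually appear on screen, after CSS and JavaScript have run."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1280, "height": 800})
        page.goto(url, wait_until="networkidle")
        page_height = page.evaluate("document.body.scrollHeight")

        regions = {}
        for selector in ("header", "nav", "main", "footer"):
            element = page.query_selector(selector)
            if element is None:
                continue
            # Bounding box reflects rendered position, not source order.
            box = element.bounding_box()
            if box is None:
                continue
            midpoint = box["y"] + box["height"] / 2
            if midpoint < page_height * 0.2:
                regions[selector] = "top region (header / navigation)"
            elif midpoint > page_height * 0.8:
                regions[selector] = "bottom region (footer)"
            else:
                regions[selector] = "main content region"
        browser.close()
        return regions

if __name__ == "__main__":
    print(classify_regions("https://example.com"))
```

The important point is that the classification is driven by rendered geometry, not by where the markup happens to sit in the HTML file.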

How does this information actually matter in the real world? Well, for one, any developer who uses tricks such as absolute positioning to make elements appear higher in the source code than they do on the rendered page is fighting a losing battle.
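
Here is a hedged illustration of that trick, again using Playwright and purely hypothetical markup: the keyword-stuffed block comes first in the HTML source, but absolute positioning pushes it to the bottom of the page, which is exactly what a rendering crawler would see.

```python
from playwright.sync_api import sync_playwright

# First in the source, yet positioned near the bottom of the rendered page.
HTML = """
<html><body style="height: 3000px; position: relative;">
  <div id="stuffed" style="position: absolute; top: 2800px;">keyword, keyword, keyword</div>
  <div id="real-content">The content visitors actually came for.</div>
</body></html>
"""

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.set_content(HTML)
    for element_id in ("stuffed", "real-content"):
        box = page.query_selector(f"#{element_id}").bounding_box()
        # A source-only crawler sees #stuffed first; a rendering crawler
        # sees that it actually sits thousands of pixels down the page.
        print(f"#{element_id} renders at y = {box['y']:.0f}px")
    browser.close()
```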

More importantly, though, it reinforces Google’s motto of “create for the user first, the search engine second”.