In the past, Google crawled your site in much the same fashion as the Lynx browser would display it: no CSS, no JavaScript, just text and links. Over the years this led many webmasters to block Googlebot from crawling their CSS and JS files, in an attempt to make sure Google picked up all of the actual content without any chance of a stylesheet hiding it. That has all changed with Google’s latest update to the Webmaster Guidelines.
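For context, the old blocking approach usually lived in robots.txt. A minimal sketch of what such a file might have looked like, assuming a hypothetical site that keeps its assets under /css/ and /js/ (Googlebot also understands the * and $ wildcards used in the last two rules):

User-agent: Googlebot
Disallow: /css/
Disallow: /js/
Disallow: /*.css$
Disallow: /*.js$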

Google’s Pierre Far wrote, “Disallowing crawling of JavaScript or CSS files in your site’s robots.txt directly harms how well our algorithms render and index your content and can result in suboptimal rankings.”

The new guideline is as follows:

To help Google fully understand your site’s contents, allow all of your site’s assets, such as CSS and JavaScript files, to be crawled. The Google indexing system renders webpages using the HTML of a page as well as its assets such as images, CSS, and JavaScript files. To see the page assets that Googlebot cannot crawl and to debug directives in your robots.txt file, use the Fetch as Google and the robots.txt Tester tools in Webmaster Tools.

So to be super clear here: if Googlebot can’t access your CSS and/or JS, your website won’t be ranking very well. Period. Luckily this is an extremely easy fix, and probably the easiest SEO work you’ll do this year.
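The fix is simply to stop blocking those assets. You can delete the Disallow lines that cover your CSS and JS, or explicitly allow those file types; a minimal sketch, again assuming the hypothetical asset paths from the earlier example:

User-agent: Googlebot
Allow: /*.css$
Allow: /*.js$

Once you’ve made the change, confirm that Googlebot can actually reach your assets with the robots.txt Tester and Fetch as Google tools in Webmaster Tools mentioned in the guideline above.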

Published by Michael Boguslavskiy

Michael Boguslavskiy is a full-stack developer & online presence consultant based out of New York City. He's been offering freelance marketing & development services for over a decade. He currently manages Rapid Purple, an online webmaster resources center, and Media Explode, a full-service marketing agency.
