Google Guidelines: Blocking CSS Or JavaScript Directly Can Harm Your Rankings

Google announced a change to their Webmaster Guidelines that tells webmasters directly, as they've been saying for years but more strongly over the past few months: do not block Google from crawling your CSS or JavaScript!
Google's Pierre Far wrote, "Disallowing crawling of Javascript or CSS files in your site’s robots.txt directly harms how well our algorithms render and index your content and can result in suboptimal rankings."
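To make that concrete, a robots.txt like the one below (the /css/ and /js/ paths are hypothetical) is exactly the pattern Google is warning against, since it keeps GoogleBot from fetching the stylesheets and scripts it needs to render the page:

    User-agent: *
    Disallow: /css/
    Disallow: /js/

Deleting those Disallow lines, or adding explicit Allow rules for the asset directories, restores GoogleBot's access.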
The old guideline read:
Use a text browser such as Lynx to examine your site, because most search engine spiders see your site much as Lynx would. If fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site.
The new guideline reads:
To help Google fully understand your site's contents, allow all of your site's assets, such as CSS and JavaScript files, to be crawled. The Google indexing system renders webpages using the HTML of a page as well as its assets such as images, CSS, and Javascript files. To see the page assets that Googlebot cannot crawl and to debug directives in your robots.txt file, use the Fetch as Google and the robots.txt Tester tools in Webmaster Tools.
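If you want to spot-check a directive without opening Webmaster Tools, here is a minimal sketch using Python's standard urllib.robotparser module, which applies the same Disallow/Allow logic the robots.txt Tester does (example.com and the asset path are placeholders):

    import urllib.robotparser

    # Fetch and parse the live robots.txt for the (placeholder) site
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # Ask whether Googlebot may crawl a given CSS asset
    asset = "https://example.com/css/main.css"
    if rp.can_fetch("Googlebot", asset):
        print("Googlebot can crawl", asset)
    else:
        print("Blocked:", asset, "- rendering and rankings may suffer")

Note this is only an approximation; Google's own robots.txt Tester reflects exactly how GoogleBot interprets the file, so it remains the authoritative check.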
In May, Google introduced the new fetch and render feature for GoogleBot and told us they are fully rendering your HTML web pages, just as a user's browser would.
To quote Pierre Far from Google via Google+:
Let me be super clear about what this means: By blocking crawling of CSS and JS, you're actively harming the indexing of your pages. It's the easiest SEO you can do today. And don't forget your mobile site either!
The mobile reference is super important as well; read more on that over here.
Forum discussion at Google+.
