You guys have totally missed the point of the article, which is understandable because it was written by someone who doesn't really understand what they're talking about. Hell, even the quote from the official blog post is being used in the wrong context.
Google is indexing Facebook comments for blogs, news sites and other things. Many sites no longer use comment systems built into their own pages; instead they embed third-party systems such as Facebook and Disqus. These are embedded in iframes and are loaded with asynchronous requests after the page has finished loading (AKA AJAX). Previously, Google could not index content loaded via AJAX without ugly hacks that serve static snapshots of the fully-loaded page to search engines. That's how Facebook and Twitter (the actual sites) have been indexed for a long time.
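To make that concrete, here's roughly what one of those embedded comment widgets does (the endpoint, element id and data shape are made up for illustration). The static HTML the server sends contains only an empty container, so a crawler that doesn't execute JavaScript never sees a single comment:

// Hypothetical comment-widget loader; endpoint, element id and data shape
// are invented. The page's static HTML contains only an empty
// <div id="comments-container">, so without running this script a crawler
// never sees any comment text.
async function loadComments(threadId: string): Promise<void> {
  const response = await fetch("https://comments.example.com/threads/" + threadId);
  const comments: { author: string; body: string }[] = await response.json();

  const container = document.getElementById("comments-container");
  if (!container) return;

  for (const comment of comments) {
    const el = document.createElement("div");
    el.textContent = comment.author + ": " + comment.body;
    container.appendChild(el);
  }
}

// The request only fires after the host page has finished loading.
window.addEventListener("load", () => { loadComments("article-42"); });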
Googlebot is now able to selectively crawl content that is loaded as part of an AJAX request, without workarounds. It can do this for both GET and POST requests, whereas before it could only execute GET requests. My reading of the blog post is that Googlebot will (at least initially) only execute AJAX requests that fire automatically, not ones triggered by user interaction. In other words, it will not "click" buttons to see what else it can find; it will just let the page render by itself.
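Here's a sketch of the distinction as I understand it (URLs and payloads are hypothetical). Only the first request, fired automatically while the page renders, is the kind Googlebot would now execute; the second needs a click, so a crawler that doesn't simulate user interaction never fetches it:

// 1. Automatic POST on page load, e.g. fetching the first page of comments.
//    This is the new capability: a POST request, executed without user input.
window.addEventListener("load", async () => {
  const res = await fetch("https://comments.example.com/batch", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ thread: "article-42", page: 1 }),
  });
  document.getElementById("comments")!.innerHTML = await res.text();
});

// 2. User-triggered GET: only runs when a visitor clicks "load more",
//    so a crawler that doesn't click buttons never sees page 2.
document.getElementById("load-more")?.addEventListener("click", async () => {
  const res = await fetch("https://comments.example.com/batch?page=2");
  const html = await res.text();
  document.getElementById("comments")!.insertAdjacentHTML("beforeend", html);
});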
This change really has nothing to do with Facebook and everything to do with the fact that so many sites now use AJAX. Because Google provides those "instant previews" (the screenshot of the page that shows up to the right of a highlighted search result), it needs to be able to render the page exactly as a user would see it. Now that it can execute AJAX, that's possible, unless the developer has used cloaking techniques (accidentally or deliberately).
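As for what "accidental" cloaking can look like, here's a sketch (hypothetical server, written against Node's built-in http module): a well-meaning user-agent check that serves bots a page without the comment-loading script, so the version Google renders and previews isn't the version visitors actually get:

import { createServer } from "node:http";

// Sketch of accidental cloaking: the "keep it light for bots" branch
// drops the script that loads the comments, so the indexed/previewed
// page diverges from what real visitors see.
const server = createServer((req, res) => {
  const ua = req.headers["user-agent"] ?? "";
  res.setHeader("Content-Type", "text/html");

  if (/googlebot/i.test(ua)) {
    res.end("<html><body><article>Post body</article></body></html>");
  } else {
    res.end(
      "<html><body><article>Post body</article>" +
      '<div id="comments"></div><script src="/comments.js"></script>' +
      "</body></html>"
    );
  }
});

server.listen(8080);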
I recommend ignoring the article and reading this: http://googlewebmastercentral.blogsp...g-more-of.html