The Google Panda
By: Steve Ellias – May 27, 2013
Google Panda is a change to Google’s search results ranking algorithm that was first released in February 2011. Soon after the Panda rollout, many websites, including Google’s webmaster forum, became filled with complaints of scrapers/copyright infringers getting better rankings than sites with original content. Google’s Panda has received several updates since the original rollout in February 2011, and the effect went global in April 2011.
1 The Panda process
2 Significant differences between Panda and previous algorithms
3 Panda recovery
The Panda process
Google Panda was built through an algorithm update that used artificial intelligence in a more sophisticated and scalable way than previously possible. Human quality raters scored thousands of websites on measures of quality, including design, trustworthiness, speed, and whether or not they would return to the website. Google’s new Panda machine-learning algorithm was then used to find characteristics that distinguished the sites raters judged high quality from those they judged low quality.
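Google has never published Panda’s model, features, or training procedure, so the following is only a hypothetical sketch of the general idea described above: human raters label example sites, and a classifier learns to score unlabeled sites by their similarity to each group. The feature names, scores, and the nearest-centroid method are all illustrative assumptions, not Panda’s actual mechanics.

```python
# Hypothetical sketch only: the feature set, ratings, and nearest-centroid
# method are illustrative assumptions, not Google's actual Panda model.

# Each rated site is a vector of hypothetical quality signals in 0..1:
# [design, trustworthiness, speed, would-return rate], plus a human label.
rated_sites = [
    ([0.9, 0.8, 0.9, 0.85], "high"),
    ([0.8, 0.9, 0.7, 0.90], "high"),
    ([0.2, 0.3, 0.4, 0.10], "low"),
    ([0.3, 0.1, 0.5, 0.20], "low"),
]

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled):
    """Build one centroid per label from the human-rated examples."""
    by_label = {}
    for features, label in labeled:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def classify(model, features):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

model = train(rated_sites)
# An unrated site whose signals resemble the highly rated group:
print(classify(model, [0.85, 0.75, 0.8, 0.8]))  # → high
```

The point of the sketch is the workflow, not the math: expensive human judgments on a sample of sites become training data, and the learned model then scales those judgments to sites no human ever rated.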
Many new ranking factors were introduced into the Google algorithm as a result, while older ranking factors like PageRank were downgraded in importance. Google Panda is updated from time to time, and the algorithm is run by Google on a regular basis. On April 24, 2012, the separate Google Penguin update was released, which affected a further 3.1% of all English-language search queries, highlighting the ongoing volatility of search rankings.
On September 18, 2012, a Panda update was confirmed by the company on its official Twitter page, where it announced, “Panda refresh is rolling out – expect some flux over the next few days. Fewer than 0.7% of queries noticeably affected”.
Another Panda update began rolling out on January 22, 2013, affecting about 1.2% of English queries.
Significant differences between Panda and previous algorithms
Google Panda affects the ranking of an entire site or a specific section rather than just the individual pages on a site.
In March 2012, Google updated Panda and stated that it was deploying an “over-optimization penalty” in order to level the playing field.
Panda recovery
Google says it only takes a few poor-quality or duplicate-content pages to hold down traffic on an otherwise solid site. Google recommends either removing those pages, blocking them from being indexed by Google, or rewriting them. Matt Cutts, head of webspam at Google, warns that rewriting duplicate content so that it is original may not be enough to recover from Panda: the rewrites must be of sufficiently high quality.
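The “blocking them from being indexed” option above is typically done with the robots `noindex` meta tag, a mechanism Google documents for keeping a page out of its index while the rest of the site remains indexable. A minimal example, placed in the `<head>` of each low-quality page:

```html
<!-- Tells crawlers that honor the robots meta tag not to index this page. -->
<!-- The page itself stays live for visitors; only indexing is blocked. -->
<head>
  <meta name="robots" content="noindex">
</head>
```

Note that for `noindex` to take effect the crawler must still be allowed to fetch the page: a page blocked in robots.txt cannot have its `noindex` directive read.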