On May 20, 2014, Google evidently rolled out yet another major change to the algorithms supporting its search engine. The update, dubbed “Panda 4.0,” changes the Search Engine Results Pages (SERPs) served in response to web search queries.
Matt Southern commented on the Panda 4.0 update in an article titled “Who Was Hit Hardest By Panda 4.0? The Answer May Surprise You,” published on the Search Engine Journal web site on May 21, 2014, the day after the changes were implemented.
According to Mr. Southern, Searchmetrics published some findings soon after Google implemented the update, noting that “. . . eBay was among the top losers, as well as competing search engine Ask.com. Former SEO darling, RetailMeNot.com, also lost a substantial amount of traffic.” (quoted from Mr. Southern’s article on Search Engine Journal)
In contrast, Mr. Southern claims that sites composed of “. . . good quality content that’s useful to searchers saw huge gains in traffic thanks to Panda 4.0.” He provides several examples of this good quality content: “[t]he biggest winner was Glassdoor, a site that provides useful information for job seekers. Next on the list is eMedicineHealth, which provides health information for consumers, and Medterms, a medical dictionary.” (ibid)
Unfortunately, he doesn’t support his claim about content quality with any attempt to demonstrate why a job rating service like Glassdoor should rank higher than an entire community of seller sites like eBay, or a search engine (which, after all, merely applies its own algorithm to the information its crawler identifies online, and happens to compete with Google).
But leaving aside, for a moment, whether Mr. Southern’s claims make sense at all, readers may benefit more from considering the significance of the Panda 4.0 update, given Google’s position as clearly the dominant search engine for the Internet. Perhaps the need for yet another update to this results-ranking platform, together with Google’s familiar practice of changing its quality ranking methods piecemeal, on a daily basis, speaks more to flaws in Google’s search engine product than anything else.
As I’ve written earlier on this blog, Google’s search engine method, in my opinion, actually did more to limit the exposure of otherwise obscure web content than anything else. By applying quality metrics to web sites, Google provided a rationale for condensing its index of web content, so as to provide greater efficiency. After all, why index web pages no one looks at, anyway?
But a key component of Google’s quality ranking tool kit is, of course, its ability to collect enough personal information about who is submitting a query to ensure the results it returns meet or exceed user expectations. This personalization method positions Google, in my opinion, for a head-on collision, sooner or later, with regulatory bodies in the U.S. and Europe established to protect individual privacy online.
Once the inevitable happens, if readers choose to follow my reasoning, the usefulness of Google’s search engine is almost certain to be negatively impacted, which will likely give a big boost to competitors and open the door wider to an entirely different approach to curating web pages and monetizing the effort.
Disclaimer: I have no position in Google, or any other publicly traded business mentioned in this post.
© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved