Have you ever wondered how search engine manipulation evolved? Here is a brief history.

Search Engine Manipulation

Back in the day, the search engines had very few indicators to work with to determine the popularity and topicality of a website.  Due to the limited data they were working with, search engines had to take a site at its word.

Meta Keywords

Search engines said, "OK, tell me what you're all about," and looked to us to describe our sites using HTML meta tags such as description and keywords.  Some webmasters wrote fact, some wrote fiction, and the search engines believed both until they realized they had been tricked.


I am trying to keep my example "G" rated, but a typical bait-and-switch was XXX porn as the bait and a page of Viagra ads on arrival. In any case, imagine a site that was selling cat food but claiming to be about cat photos. The search engines put out an update: if the meta description did not match the content, it would be ignored. The blackhats responded:


By placing white text on a white background, you could stuff a page with keywords and trick the search engines into believing your meta data. This is just one of many techniques, but at the end of the day Google's ultimate response was to completely ignore the meta keywords tag and to put nearly no weight on the meta description tag.
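The kind of check the search engines rolled out, comparing the meta description against the visible page text, might have looked something like this crude sketch. This is purely illustrative (the real heuristics were never published), and the function names and threshold are my own invention:

```python
# Purely illustrative sketch: ignore a meta description whose words
# barely overlap the visible page text. The real search-engine
# heuristics were never published; names and threshold are assumptions.

import re

def tokenize(text):
    """Lowercase the text and return its set of alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def meta_is_trustworthy(meta_description, page_text, threshold=0.5):
    """Trust the meta description only if enough of its words
    actually appear in the page content."""
    meta_words = tokenize(meta_description)
    if not meta_words:
        return False
    overlap = meta_words & tokenize(page_text)
    return len(overlap) / len(meta_words) >= threshold

# The cat food site claiming to be about cat photos:
print(meta_is_trustworthy(
    "Beautiful cat photos and kitten pictures",
    "Buy discount cat food. Premium cat food shipped to your door.",
))  # False: most of the meta words never appear on the page
```

A site describing itself honestly ("cat food store" on a page selling cat food) passes the same check, which is the whole point: the tag is only believed when the page backs it up.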

Inbound Links

When the search engines finally realized they couldn't trust a website's self-description, they had to look to other sites for third-party validation, much in the way we look to others for recommendations. Although the PageRank algorithm brought a more evolved approach, the formula was still very easy to manipulate. All of a sudden, blogs popped up everywhere with no purpose other than to create inbound links. Blackhat techniques such as blog comment spamming ruined the experience for bloggers, and article directories once filled with quality content became wastelands of poorly written trash containing links back to websites now sitting in top search positions.
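The PageRank idea can be sketched as a simple power iteration: a page's score is roughly the chance that a "random surfer" following links ends up there. This is a toy model, not Google's production formula, and the domain names below are made up; it shows how a handful of spam blogs pointing at one site inflate that site's score, which is exactly what the link schemes above exploited:

```python
# Toy PageRank via power iteration (not Google's actual implementation).
# A page's rank is the probability a random link-following surfer lands on it.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly across all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A tiny hypothetical web: spam blogs created only to link to one site.
graph = {
    "target.com": [],
    "spamblog1.com": ["target.com"],
    "spamblog2.com": ["target.com"],
    "spamblog3.com": ["target.com"],
}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # prints "target.com": the link-farmed site wins
```

Because every inbound link is counted as a vote, manufacturing pages whose only job is to vote was enough to climb the rankings, which is why the engines had to start asking who was doing the recommending.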

Search engines then looked for ways to identify whether the source of a recommendation (an inbound link) was credible. They started by placing higher weight on .edu and .gov websites, because those cannot be purchased the way a .com or .org can; this was counteracted by overly aggressive SEOs paying students for their .edu/~name/ web space and the like. Google also placed higher authority on links coming from websites that themselves had lots of links (a high "PageRank"), but this could be gamed too. With newer algorithm updates, it's not just about a high PageRank anymore; it's also about relevance and topicality. A site may be an authority, but it needs to be an authority on your topic to pass the strongest recommendation.

The struggle to determine credible, authoritative sources for these recommendations has been the bane of the search engines.

Social Indicators

With the advent of the social web, the search engines are now able to put names to recommendations. As of the date of this post, social indicators such as Google +1s, tweets, and Facebook Likes are already playing a role in the short-term ranking of websites, but they have not replaced traditional link building.

Then we manipulated that too (hence Search Engine "Optimization"), and now Google realizes it can lean on social indicators instead.

Search Engine Optimization

Each year I say that the "off-page" portion of SEO, manipulation through link building, will be dead within a year. Google has been slower than expected, but the end of manipulating Google is coming; the future of SEO is producing quality content with the end user in mind.