
Google Penguin Update: Don’t Forget About Duplicate Content

There has been a ton of speculation regarding Google’s Penguin update. Few know exactly what the update does, or precisely how it works alongside Google’s other signals. Google always plays its cards close to its chest.

“While we can’t divulge specific signals because we don’t want to give people a way to game our search results and worsen the experience for users, our advice for webmasters is to focus on creating high quality sites that create a good user experience and employ white hat SEO methods instead of engaging in aggressive webspam tactics,” Google’s Matt Cutts said in the announcement of the update.

He also said, “The change will decrease rankings for sites that we believe are violating Google’s existing quality guidelines.”

“We see all sorts of webspam techniques every day, from keyword stuffing to link schemes that attempt to propel sites higher in rankings,” he said. To me, that indicates the update is about all webspam techniques – not just keyword stuffing and link schemes, but everything in between.

So it’s about quality guidelines. Cutts was pretty clear about that, and that’s why we’ve been discussing some of the various things Google mentions specifically in those guidelines. So far, we’ve talked about:

Cloaking
Links
Hidden text and links
Keyword stuffing

Another thing on the quality guidelines list is: “Don’t create multiple pages, subdomains, or domains with substantially duplicate content.”

Of course, like the rest of the guidelines, this is nothing new, but in light of the Penguin update, it seems worth examining the guidelines again, if for no other reason than to provide reminders or educate those who are unfamiliar. Duplicate content seems like one of those areas that could get sites into trouble even when they aren’t intentionally trying to spam Google. Even Google says in its help center article on the topic, “Mostly, this is not deceptive in origin.”

“However, in some cases, content is deliberately duplicated across domains in an attempt to manipulate search engine rankings or win more traffic,” Google says. “Deceptive practices like this can result in a poor user experience, when a visitor sees substantially the same content repeated within a set of search results.”

Google lists the following as steps you can take to address any duplicate content issues you may have:

  • Use 301s: If you’ve restructured your site, use 301 redirects (“RedirectPermanent”) in your .htaccess file to smartly redirect users, Googlebot, and other spiders. (In Apache, you can do this with an .htaccess file; in IIS, you can do this through the administrative console.)
  • Be consistent: Try to keep your internal linking consistent. For example, don’t link to http://www.example.com/page/ and http://www.example.com/page and http://www.example.com/page/index.htm.
  • Use top-level domains: To help us serve the most appropriate version of a document, use top-level domains whenever possible to handle country-specific content. We’re more likely to know that http://www.example.de contains Germany-focused content, for instance, than http://www.example.com/de or http://de.example.com.
  • Syndicate carefully: If you syndicate your content on other sites, Google will always show the version we think is most appropriate for users in each given search, which may or may not be the version you’d prefer. However, it is helpful to ensure that each site on which your content is syndicated includes a link back to your original article. You can also ask those who use your syndicated material to use the noindex meta tag to prevent search engines from indexing their version of the content.
  • Use Webmaster Tools to tell us how you prefer your site to be indexed: You can tell Google your preferred domain (for example, http://www.example.com or http://example.com).
  • Minimize boilerplate repetition: For instance, instead of including lengthy copyright text on the bottom of every page, include a very brief summary and then link to a page with more details. In addition, you can use the Parameter Handling tool to specify how you would like Google to treat URL parameters.
  • Avoid publishing stubs: Users don’t like seeing “empty” pages, so avoid placeholders where possible. For example, don’t publish pages for which you don’t yet have real content. If you do create placeholder pages, use the noindex meta tag to block these pages from being indexed.
  • Understand your content management system: Make sure you’re familiar with how content is displayed on your web site. Blogs, forums, and related systems often show the same content in multiple formats. For example, a blog entry may appear on the home page of a blog, in an archive page, and in a page of other entries with the same label.
  • Minimize similar content: If you have many pages that are similar, consider expanding each page or consolidating the pages into one. For instance, if you have a travel site with separate pages for two cities, but the same information on both pages, you could either merge the pages into one page about both cities or you could expand each page to contain unique content about each city.
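
To make the 301 and consistency tips above concrete, here is a minimal sketch of what that might look like in an Apache .htaccess file. The example.com domain and file paths are placeholders, and the second snippet assumes mod_rewrite is enabled; adapt it to your own setup (IIS and other servers have their own equivalents, as Google notes).

    # Permanently redirect a moved page to its new URL (mod_alias)
    RedirectPermanent /old-page.html http://www.example.com/new-page.html

    # Keep one consistent hostname: 301 any request for example.com over to
    # www.example.com, so the two hostnames don't become duplicate copies
    # of every page (mod_rewrite)
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
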
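Likewise, the noindex meta tag Google mentions for syndicated copies and placeholder “stub” pages is just one line in the page’s head section. A rough example (the optional “follow” value simply lets crawlers keep following the links on the page):

    <head>
      <!-- Keep this copy out of the search index, but let crawlers follow its links -->
      <meta name="robots" content="noindex, follow">
    </head>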

 

Don’t block Google from accessing duplicate content. Google advises against this because, if it can’t crawl the duplicate URLs, it won’t be able to detect that they point to the same content and will have to treat them as separate pages. Instead, use the canonical link element (rel="canonical") to indicate the preferred version.
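
For reference, the canonical link element goes in the head section of each duplicate URL and points to the version you want indexed. A minimal sketch, with a placeholder URL:

    <head>
      <!-- Tell search engines which URL is the preferred version of this content -->
      <link rel="canonical" href="http://www.example.com/page/">
    </head>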

Note: there are reasons why Google might skip your canonical link elements.

It’s important to note that Google doesn’t consider duplicate content grounds for a penalty unless it appears to have been used deceptively or to manipulate search results. However, that seems like one of those areas where an algorithm might leave room for error.

Here are some videos with Matt Cutts (including a couple of WebProNews interviews) talking about duplicate content. They’re worth watching if you are concerned that duplicate content might be affecting you.
