What are the 7 Common Duplicate Content Issues and How to Fix Them?

Duplicate content is one of the most misunderstood ranking factors, even though it can hurt or even kill a website’s rankings. Much of the confusion comes from the sheer amount of conflicting information surrounding duplicate content. So, let’s break down the 7 common duplicate content issues and how to fix them in this post, to help new bloggers and website owners get started.


The 7 Common Duplicate Content Issues and How to Fix Them

  1. WWW vs. Non-WWW (Naked Domain):

In the early days of the web, when a company purchased a domain name and set up its server, it would create separate folders for its different services. To reach each folder, the company created sub-domains, such as ‘www’ for the web server.


*** Note: For proof, users can check the address bar after reaching the Gmail inbox; Gmail runs on the ‘mail.google.com’ sub-domain ***

The Issue:

Nowadays, if a person visits a naked domain by typing ‘www’ in its address (or vice versa), browsers and hosting servers will usually redirect that person to the correct website automatically. So, from an SEO perspective, the naked domain and the ‘www’ domain offer equal benefits.

But suppose a person uses the naked domain on the website and the ‘www’ version in social media or backlink citations. In that case, he splits his website’s backlinks across two separate versions, which means the search engine will index both versions and duplicate the search results.

The Workaround for this Issue:

To prevent the www vs. non-www duplicate content issue, a person should stick with the same version of the domain across backlinks, social media, and the website itself. However, suppose a person knew nothing about content duplication from the www vs. non-www (naked domain) issue and has accidentally used both versions. In that case, he can fix the problem with either of two techniques.

  1. Fixing through Setting the Preferred Domain in Google Search Console:

The easiest way to fix this content duplication issue is by configuring the preferred domain in the Google Search Console.

*** Note: Before trying the following steps, users will need to verify both versions (www and non-www) of their domains ***

  1. Search and open the ‘Google Search Console’ on any web browser.
  2. Then, click on the ‘Start now’ button to reach the new homepage.
  3. Once the page has finished loading, hover over the left-hand menu and scroll down to find and choose the ‘Go to old version’ option.
  4. Then, click on the ‘cogwheel’ icon and select the ‘Site Settings’ option.
  5. After that, check the ‘Preferred domain’ option and click on the ‘Save’ button to save the changes.
  2. Solving by Setting Up 301 Redirects:

This technique is for those who use the Apache web server and have the ‘mod_rewrite’ module enabled.

  1. Download the ‘.htaccess’ file from the web server.
  2. Then, add the following code to inform the web server to redirect any page at abc.com to the www.abc.com version:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^abc\.com$ [NC]
RewriteRule (.*) http://www.abc.com/$1 [R=301,L]

*** Note: Here, ‘abc.com’ is the sample domain name ***
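
If a person prefers the naked domain instead, the same rule can simply be reversed. A minimal sketch, again using the sample ‘abc.com’ domain and assuming the ‘mod_rewrite’ module is enabled:

# Redirect every www.abc.com URL to the naked abc.com version (sample domain only)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.abc\.com$ [NC]
RewriteRule (.*) http://abc.com/$1 [R=301,L]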

  2. HTTP vs. HTTPS:

HTTP is the original HyperText Transfer Protocol, whereas HTTPS is its secure, encrypted version.

The Issue:

In 2018, Google started marking HTTP websites as ‘not secure’. Therefore, HTTPS websites now have a clear SEO advantage over the non-secure (HTTP) versions of websites.

But if a user converts his HTTP website to HTTPS, Google doesn’t automatically transfer the backlinks and other link benefits from the HTTP version to the HTTPS version. Instead, the user will have to verify his website again in Google Search Console, because the two versions (HTTP and HTTPS) are treated as different properties. So, if a person serves both the HTTP and HTTPS versions, he will experience content duplication.

The Workaround for this Issue:

To prevent content duplication issues from this mistake, a person should decide whether he wants to use HTTP or HTTPS before moving on. As mentioned earlier, search engines like Google nowadays prefer the encrypted version of the HyperText Transfer Protocol, so using HTTPS is the best practice.

However, if someone has used HTTP and HTTPS interchangeably, he can use 301 redirects to point all the URL variations to the preferred version. Similarly, users can also read Google’s documentation on site migration.

*** Note: Users can take help from the workaround of the first issue to implement 301 redirects ***
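
As a rough sketch, an ‘.htaccess’ rule similar to the one in the first issue can send every plain-HTTP request to the HTTPS version. This again assumes an Apache web server with ‘mod_rewrite’ enabled and uses the sample ‘abc.com’ domain:

# Redirect all plain-HTTP traffic to the HTTPS version of the site (sample domain only)
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://www.abc.com/$1 [R=301,L]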

  3. Forward Slash (/):

A trailing forward slash (/) doesn’t create any issues for the homepage of a website, but it can be a problem for internal pages. For example, when a user puts a forward slash (/) at the end of an address, the browser will still open the same page, so the slash does not affect browsers. But that’s not the case with search engines, because the forward slash (/) is the directory separator in computer terminology, and the two addresses can be treated as separate URLs.


The Issue:

The forward slash (/), or directory separator, indicates the boundary between two folders. Its absence or presence at the end of a URL doesn’t make one version better for SEO; both are equivalent from an SEO perspective. However, if a person uses both URL styles inconsistently across the website, backlink citations, or social media, he will experience content duplication.

The Workaround for this Issue:

A person should use a consistent URL style in internal linking and the sitemap to prevent content duplication. However, if someone has used the trailing forward slash inconsistently, he can use 301 redirects to consolidate everything onto his preferred version.

*** Note: Users can take help from the workaround of the first issue to implement 301 redirects ***
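
As a rough example, an Apache ‘.htaccess’ rule can strip the trailing slash from every non-directory URL (or the opposite rule can add it, if that is the preferred style). This sketch assumes ‘mod_rewrite’ is enabled:

# Remove the trailing slash from URLs that are not real directories (301 redirect)
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [R=301,L]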

  4. Thin Content:

The fourth content duplication issue in this “7 common duplicate content issues and how to fix them” guide is the most common one: thin content.

The Issue:

Thin content refers to a scenario where there is not enough unique content on a webpage. This issue is most common among new and e-Commerce websites, because writers lean heavily on existing materials while writing content or product descriptions, which leads to content duplication.

For example, when a reader searches for a topic on a search engine, he consults several different sources. If the reader finds the same content on various web pages, he will get annoyed. This situation will damage the website’s reputation, and the reader will most probably never return to it (because of the bad experience).

The Workaround for this Issue:

When it comes to content quality, uniqueness plays a key role. However, producing unique content doesn’t mean that a content writer needs to write everything from scratch. He can certainly take help from existing materials and paraphrase them using online paraphrasing tools. But a webpage should have at least 300+ unique words (excluding infographics) to avoid claims of content duplication from Google duplicate content checkers.

  5. Boilerplate Content:

The fifth content duplication problem in this guide is directly linked to the previous one, because it also involves repeating the same blocks of text: boilerplate content.

The Issue:

Boilerplate content refers to content that the website owner has duplicated across different web pages of his own website. For example, it can include the return policies and shipping details of an e-Commerce website.

The more boilerplate content a website contains, the higher the percentage of unique content it will require to avoid content duplication issues. That is why boilerplate content is directly linked to thin content.

The Workaround for this Issue:

As explained above, boilerplate content mainly appears in the tabs of an e-Commerce website. So, to avoid this type of content duplication, website owners can put the repeated content in a lightbox or pop-up window. Similarly, they can use bullet points instead of tabs and provide links to separate web pages for more information.
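
As a simple illustration of the bullet-point approach, a product page can carry only a short summary and link out to a single shipping-policy page instead of repeating the full policy text everywhere. The page names and figures here are hypothetical:

<!-- Hypothetical product page snippet: short summary bullets, full policy lives on one page -->
<ul>
  <li>Free shipping on orders over $50</li>
  <li>30-day hassle-free returns</li>
  <li><a href="/shipping-policy">Read the full shipping and returns policy</a></li>
</ul>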

*** Note: Users can run a plagiarism check on their website content to determine the percentage of unique content and avoid cross-domain duplicate content ***

  6. Junk Pages:

The second-to-last common content duplication issue comes from having junk pages on a website.

The Issue:

When naïve e-Commerce website owners add the option of writing product reviews, many of them use a different URL for each product’s review page. For example, if a website has 500 products, it will have 500 review pages with largely the same content. As the website gains the attention of search engines, its search results will be cluttered with these similar pages, which will trigger content duplication.

The Workaround for this Issue:

Most SEO experts recommend using no-follow links in this scenario to avoid this type of content duplication. A better option, however, is to include a ‘noindex’ tag in each review page’s ‘<head>’ section, informing the search engine to exclude that particular page from its index. The ‘noindex’ tag in the ‘<head>’ section of the document will look like this:

<meta name="robots" content="noindex, follow">
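
Where editing each review page’s template is impractical, the same ‘noindex’ hint can also be sent as an HTTP response header. A rough sketch for Apache 2.4+ with the ‘mod_headers’ module enabled, assuming the review pages live under a hypothetical ‘/reviews/’ path:

# Send a noindex header for every URL under /reviews/ (hypothetical path)
<If "%{REQUEST_URI} =~ m#^/reviews/#">
    Header set X-Robots-Tag "noindex, follow"
</If>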

  7. Content Syndication:

The last common content duplication issue in this “7 common duplicate content issues and how to fix them” guide is content syndication.

The Issue:

Suppose a company creates localized content for its overseas clients or expands its business globally. In that case, the likelihood of duplicate content appearing on different sites increases. This situation is known as content syndication, because the company intentionally or unintentionally reuses the same content from its root domain.

The Workaround for this Issue:

There is no point in creating a regional website if the website owner only translates the content for the target market instead of producing unique content. So, if a website owner wants to drive higher conversion rates on localized versions of his website, he should set up language and location targeting, including hreflang tags.
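
As a brief illustration, hreflang annotations are simply ‘<link>’ tags in each page’s ‘<head>’ that point to the equivalent regional versions. The URLs below are hypothetical and reuse the sample ‘abc.com’ domain:

<!-- Hypothetical hreflang tags: a US English version, a German version, and a default fallback -->
<link rel="alternate" hreflang="en-us" href="https://www.abc.com/" />
<link rel="alternate" hreflang="de-de" href="https://www.abc.com/de/" />
<link rel="alternate" hreflang="x-default" href="https://www.abc.com/" />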

Wrapping Up – the Conclusion:

Providing unique content is essential for establishing a reputation as a source of high-quality content. Therefore, it is a cornerstone of most strategies for ranking higher in SERPs. Unfortunately, it can be easy to accidentally publish duplicate or similar content due to simple mistakes. By being proactive about avoiding content duplication issues and mitigating the damage when they occur, website owners will substantially improve the likelihood of their websites achieving higher rankings for their valuable content.

FAQs:

  1. Is there one solution for dealing with all duplicate content issues?

There are different types of content duplication, so there is no universal solution that deals with all of them.

  2. Why is having duplicate content an issue for SEO?

Search engines prefer web pages and websites that have unique content. So, if a website or web page violates a search engine’s guidelines on content duplication, it will harm its SEO score.

  3. How much duplicate content is acceptable?

As mentioned earlier, a web page should have at least 300+ unique words (excluding infographics) to avoid the claims of content duplication.

  4. What is the most common fix for duplicate content?

The most common fix for content duplication is setting up 301 redirects, because they resolve the duplication issues caused by www vs. non-www, HTTP vs. HTTPS, and trailing forward slash (/) mistakes.

