Problem #1: Non-www and www Versions of Site URLs

23.Oct.2021

If you’ve got a blog or ecommerce site, chances are the homepage is the most valuable page on your site. If both a non-www and a www version of that homepage resolve separately, the link value pointing at it gets divided between the two URLs: instead of 100% of the link equity accumulating at one address, roughly 50% ends up at each.

If you’re not getting any links yet, there’s nothing to worry about, but once you start acquiring inbound links, it’s important to consolidate credit onto a single canonical URL. Otherwise, Google will crawl both URLs (which it does by default when both respond) and treat them as duplicates, with no clear signal about which one should receive the link credit.
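One common fix is a canonical tag. Here’s a minimal sketch, assuming you’ve picked the www version as the preferred one (example.com is a placeholder domain):

<code>
<head>
  <!-- Declares the www URL as the preferred (canonical) version -->
  <link rel="canonical" href="https://www.example.com/" />
</head>
</code>

Both the www and non-www copies of the page would carry this same tag, so every duplicate points search engines at one address.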

URLs made up of random numbers, letters, and characters (session IDs, tracking parameters, and the like) usually aren’t your real URLs, but each variant is reported as a separate page in Google Analytics and treated as a separate document by Google, so link credit pointing at these throwaway variants is scattered rather than consolidated. Given the scale of the link graph (Google reported passing one trillion unique URLs as far back as 2008), it’s safe to assume a meaningful volume of links goes uncounted this way. One scenario where you can expect this to come into play is sitewide footer or sidebar links; these links are often seen as unimportant, so they don’t receive the same level of scrutiny that other links do.

Whenever you create a new page (e.g., for each blog post), make sure it gets indexed under a single unique URL. For a paginated series of posts, don’t leave Google to guess how the pages relate: give each page a self-referencing canonical tag, and link the series together with rel=”next” and rel=”prev” attributes in the <head> of each page. (Note that canonicalization and pagination markup are different things: the canonical tag declares the preferred URL for one page, while rel=”next”/rel=”prev” describe the sequence. Google has also said it no longer uses rel=”next”/rel=”prev” as an indexing signal, though the markup is harmless and other search engines may still read it.)
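As a sketch, page 2 of a paginated blog series might carry markup like this (the URLs are placeholders, and the caveat above about Google’s support for rel=”next”/rel=”prev” applies):

<code>
<head>
  <!-- Each paginated page canonicalizes to itself, not to page 1 -->
  <link rel="canonical" href="https://www.example.com/blog/page/2/" />
  <!-- Sequence hints for the paginated series -->
  <link rel="prev" href="https://www.example.com/blog/page/1/" />
  <link rel="next" href="https://www.example.com/blog/page/3/" />
</head>
</code>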

If not filtered properly, proxy traffic can also dilute any data you’re looking at across websites that share the same IP address.

When Google crawls the web, it is essentially making inferences about what your site is about based on which pages link to which. If your internal links consistently favor one page over another, that’s an easy signal to Google about where your site’s authority lies, and it factors into how high you’re listed in the SERPs. For example, if you run a non-profit organization and want donations from visitors who land on your blog, make sure your homepage links to that blog post with normal, followed links (i.e., not using nofollow) before you send any traffic there; otherwise you’re withholding internal link equity from the very page you want to rank.
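To make that concrete, here’s a minimal sketch of a followed internal link from a homepage (the URL and anchor text are placeholders):

<code>
<!-- A normal, followed internal link passes internal link equity -->
<a href="https://www.example.com/blog/donate-appeal/">Read why your donation matters</a>

<!-- Adding rel="nofollow" to an internal link like this would withhold that equity -->
</code>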

If you’re providing links on your site in exchange for payment or content, it’s best practice to use rel=”nofollow” (or the more specific rel=”sponsored”) so that link equity doesn’t pass through those links. This is especially true for any paid links, such as sponsored posts or ad networks, but it can also apply to other types of content exchanges (e.g., you mention a product in one of your articles hoping readers will click through).
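In markup, that might look like this (the advertiser URL is a placeholder; rel=”sponsored” is Google’s preferred attribute for paid links, with nofollow as a widely supported fallback):

<code>
<!-- Paid/sponsored link: link equity is withheld regardless of clicks -->
<a href="https://advertiser.example.com/product" rel="sponsored nofollow">Partner product</a>
</code>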

If you really want all search engines to send traffic to only one version of your URL, 301 redirects are the most effective way of achieving this. Done properly, all traffic will be sent to the new version of your URL (e.g., https://www.example.com/new-version), and the link equity from incoming links goes with it. Note: a 301 is a permanent redirect and can be cached by browsers, and search engine bots may keep crawling your old URLs for a while before signals fully consolidate at the new location, so expect a transition period rather than an instant switch.
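If your site runs on Apache and allows .htaccess overrides, a minimal sketch for consolidating on the www version might look like this (example.com is a placeholder; nginx and other servers use different syntax):

<code>
# Redirect non-www to www with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
</code>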

You can also keep directing Google’s crawler to all of your site’s new content while letting it discover pages at its own, slower pace by pointing it at your sitemap. You can usually accomplish this by opening up your robots.txt file and adding the following line of text:

<code>Sitemap: http://example.com/sitemap_location</code>
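In context, a minimal robots.txt sketch might look like this (assuming your sitemap lives at /sitemap.xml; adjust the path to wherever yours actually is):

<code>
# Allow all crawlers and tell them where the sitemap lives
User-agent: *
Allow: /

Sitemap: https://www.example.com/sitemap.xml
</code>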

Here are two different kinds of redirects that point one URL to another:

- 301 (permanent) redirect: content is actually being moved from one location on the web to another. For example, if you have a page with incorrect information on it, you’d want visitors who land on that page to be automatically redirected to the corrected version so they aren’t seeing wrong or outdated information.

- 302 (temporary) redirect: content isn’t permanently moving; you’re telling Google the change is temporary and the original URL should stay indexed (e.g., when you want to test out a new page without removing the old one completely). Separately, if your site serves both HTTP and HTTPS versions of a page, 301 redirecting all of your HTTP pages over to HTTPS consolidates the credit for incoming links onto the HTTPS version instead of splitting it between the two.
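Both kinds can be expressed in an Apache .htaccess file; here’s a sketch combining a permanent HTTP-to-HTTPS redirect with a temporary test redirect (the paths are placeholders):

<code>
# Permanent (301): force HTTPS so link credit consolidates on one version
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

# Temporary (302): try out a replacement page without retiring the old URL
Redirect 302 /old-page /new-page-test
</code>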

The point: publish each piece of content at exactly one URL so you aren’t splitting link value between two addresses. The result is that each article gets 100% of its possible links pointing directly at it, instead of 50% of them pointing at each of two duplicates.

Summary:

Non-www and www versions of site URLs split link value between two URLs. To prevent this, make sure each piece of content lives at only one URL (no matter what you’re writing about), and 301 redirect non-www to www (or vice versa); if your server doesn’t allow .htaccess redirect rules, at least submit a sitemap that specifies the canonical locations of all your URLs. This is especially important when linking to other pages on your own website, since duplicate URLs dilute the ranking power your internal links would otherwise concentrate on one address. Keep in mind that Google may not index all new content immediately after you create it, so don’t depend on every page being crawled right away; the pages are being crawled slowly, not skipped entirely.
