I had a client recently ask what SEO risks they would face by creating subdomains as part of a new site/domain launch. Before firing off my typical response, I decided to check whether sentiment around the web had changed on this issue. Sadly, it has not. There’s still an efficient assembly line of fear-mongering from the search marketing community about the alleged dangers of subdomains.
As it has been for many years, the lemmings typically warn against subdomains because they are supposedly treated as a separate entity from your main domain and will not receive authority benefits the way a subdirectory would.
That almost sounds logical if you’ve been properly brainwashed, but it’s quite the opposite. First of all, that myth is based on the assumption that a domain by itself has value to search engines. Essentially, it’s tied to the “domain authority” metric, or any similar valuation invented by SEO software companies trying to sell people on paying for link-building intel.
The major problem with that is that any sort of domain-level rank is completely arbitrary. Rank exists at the URL level, not the domain level. So claiming an entire domain has a given rank/value is generally the result of some lazy confirmation bias. For example, say you get linked to in an article on CNN.com. Is that link great because CNN.com as a domain has a high rank? No. CNN.com the URL (their home page) has an incredible rank, as there are countless links to that page. If the CNN page linking to you is also linked from CNN’s home page, that rank will pass to you to a degree. It’s amplified by all the “you may also like” links from other articles, sections, etc.
But what if CNN were able to create a page with no internal links, and that page linked to you? Would that link be worthwhile? Moz and others would have you believe yes. But somewhere in the back of your mind, you know it’s not true. If it were so, then every obscure followed link from Facebook, bit.ly, about.com and others would send your site soaring up the search results.
It doesn’t work that way in part because links aren’t the only thing that matters, but also because it’s about links, not domains.
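To make the per-URL point concrete, here’s a minimal PageRank-style sketch. The graph and page names are entirely invented for illustration, and the calculation is a simplified stand-in for whatever search engines actually compute, but it shows the mechanic: rank accrues to individual URLs through links, and a page with no internal links pointing at it gets essentially nothing, no matter how strong other URLs on the same domain are.

```python
# Toy PageRank sketch (hypothetical graph; all names invented).
# Demonstrates that rank accrues to individual URLs, not domains.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                # Each page splits its rank evenly across its outlinks.
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

# A news-site-style graph: every article links to the home page,
# but one page on the same domain receives no internal links at all.
site = {
    "home":     ["article1", "article2", "article3"],
    "article1": ["home"],
    "article2": ["home"],
    "article3": ["home"],
    "orphan":   [],  # same domain, zero links pointing to it
}

ranks = pagerank(site)
# "home" accumulates far more rank than "orphan" -- the domain they
# share contributes nothing by itself.
```

The home page ends up with a large rank because three pages link to it; the orphan page ends up with the bare minimum, despite living on the same “authoritative” domain.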
So thinking internally, what are some reasons subdomains end up not performing well?
Let’s look at one of the most typical examples: a blog. So often, you see a site create its blog on a subdomain because it runs blog/CMS software that is different from the main website’s. That makes setup easier. So the blog lives at blog.somewebsite.com. When the blog gets little traffic, instead of asking “are you writing anything worthy of getting traffic?” some bright SEO will say “Oh, it’s the subdomain!” If the blog later gets more integrated into the main website and moves to a subdirectory, traffic may increase. Yay, confirmation bias.
But what’s really happening? Assume the content stays the same, all that changes is where the blog sits in the site/domain hierarchy, and search traffic increases. Is it really because subdomains are inferior? Certainly not.
Here’s the most typical example of a website IA, albeit extremely simplified.
In this example, Pages 1-4 are in a persistent top navigation. Thus they get a TON of links, as every page of the site links to them. Page 5 is slightly diminished depending on the site, but at worst, every page links to Page 1, and Page 1 links to Page 5… so a good amount of rank is still passed. If there is a dropdown menu that reveals Page 5, then every page links to Page 5. Yay.
Now let’s look at a typical subdomain blog example.
For some people, the problem may be instantly recognizable. Hopefully you’re not distracted by my poor-looking graphic. In our example, where your main site runs on an ecommerce platform and your blog on blogging software, you have two different navigations. That limits the number of links to pages within your blog. Assuming your blog actually appears in your main site’s top navigation (leaving it out is a common problem that further complicates this), your blog categories are now, at best, like Page 5 in the first example. But that rank gets split across all of your blog posts, and your blog posts are often only found by crawling archives and are not linked to in any large fashion. This all gets worse when people block crawling/indexing of tag pages, date/author archives, and other “best practices” for blog SEO, but that’s another discussion.
The problem outlined exists regardless of subdomain/subdirectory. If there are limited links on your site pointing to your blog posts, and your blog navigation is disconnected from your primary navigation, your blog will suffer. If your blog categories are, say, in a dropdown menu in your primary navigation, that makes things far better: all of your blog categories are then linked to from every page of your site, including all of your blog posts. You still have the problem of limited links to individual blog posts, but you’re upping the total this way.
So what often happens when a blog is just part of the main site, usually presented as a subdirectory? We end up with navigation like this:
Now each of our blog categories is linked to by every page of the site (assuming we have a dropdown menu that shows the blog categories). And Blog Post 1 is linked to by one of those categories.
Just count how many links point directly to our blog categories in the 2nd graphic vs. the 3rd. And look at the individual blog post: how many links point to the pages linking to that post? In the 2nd graphic, there’s a whole extra level of distance in those links. That is the major reason subdomain blogs fail (aside from shitty content that isn’t promoted), and it has nothing to do with the subdomain and everything to do with navigation.
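The link-counting argument can be sketched numerically. The toy comparison below is a set of assumptions, not a measurement: the page names and graph shapes are invented, and the rank function is a simplified PageRank stand-in, not Google’s actual algorithm. The two graphs are identical except for one thing: whether the blog category is linked from every page (the dropdown case) or only from the blog home (the disconnected-navigation case).

```python
# Hedged toy comparison of the two navigation structures described
# above. Graphs are invented; the rank calculation is a simplified
# PageRank stand-in.

def pagerank(links, damping=0.85, iterations=50):
    pages = set(links) | {t for targets in links.values() for t in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

main_pages = ["page1", "page2", "page3", "page4"]

separate_nav, integrated_nav = {}, {}
for p in main_pages:
    others = [q for q in main_pages if q != p]
    # Disconnected blog nav: main pages link only to the blog home.
    separate_nav[p] = others + ["blog_home"]
    # Integrated nav: a dropdown also links every page to the category.
    integrated_nav[p] = others + ["blog_home", "category"]

for nav in (separate_nav, integrated_nav):
    nav["blog_home"] = ["category"]
    nav["category"] = ["post"]
    nav["post"] = []  # in this toy, the post links nowhere onward

a = pagerank(separate_nav)["post"]
b = pagerank(integrated_nav)["post"]
# b > a: the same post inherits more rank once the category is linked
# from every page -- the extra "level of distance" is what got removed.
```

Nothing about the subdomain appears anywhere in this model; the only variable is how many pages link toward the category level, and that alone moves the post’s rank.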
I’ll leave you with a piece straight from the horse’s mouth: Matt Cutts talking about this very issue. Do you want to judge how Google treats sites based on what Google says? Or on what some company trying to sell you SEO software says?