Posts tagged they

IPv6, C-Blocks, and How They Affect SEO

Posted by Tom-Anthony

You have probably heard about IPv6, but you might remain a bit confused about the details of what it is, how it works, and what it means for the future of the Internet.
This post gives a quick introduction to IPv6, and discusses the SEO implications that could follow from the IPv6 roll-out (touching specifically on the concept of C-Blocks). A quick caveat: This stuff is hard, so let me know if you spot any missteps!

A very brief intro to IP addresses (v4) & c-blocks

You’re likely familiar with IP addresses; they are usually written in the following format:


Example IP address (IPv4).

This is the common format in use everywhere today, and it is called IPv4. An address like this consists of four bytes, with each byte separated by a period (meaning 32 bits in total, for the geeks). Every (sub)domain resolves to at least one such IP address (it might be several, but let's ignore that for now). Nice and simple.
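For readers who like to poke at this themselves, Python's standard-library ipaddress module makes the four-byte structure easy to see (the address below is just an illustrative example in the range this post discusses):

```python
import ipaddress

ip = ipaddress.IPv4Address("199.181.132.250")

print(list(ip.packed))     # the four bytes of the address
print(len(ip.packed) * 8)  # 32 bits in total
print(int(ip))             # the same address as a single 32-bit integer
```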

Now, a key SEO concept that comes out of this is the idea of C-Blocks (not to be confused with Class C IP ranges, a different thing that people often mix up with C-Blocks), a concept that has been around in the SEO space for a decade or more. Very simply, the idea is that if the first 3 bytes of two IP addresses are identical, then we consider those addresses to be in the same C-Block:

Two example IP addresses in the same C-Block (blue).

So why is this interesting to us? Why is this important to SEO? The old-school logic is that if you have two IPs that are in the same C-Block, then the sites are quite likely related and thus the links between these sites (on average) should not count as strongly in terms of PageRank. My personal opinion is that nowadays there are many many other signals available to Google to make these same sorts of connections and so the C-Block issue is far less important than it once was.
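The "first three bytes match" test above can be sketched in a few lines; the addresses here are illustrative, not the actual ones in the screenshots:

```python
import ipaddress

def same_c_block(a: str, b: str) -> bool:
    """Two IPv4 addresses share a C-Block if their first 3 bytes (24 bits) match."""
    return ipaddress.IPv4Address(a).packed[:3] == ipaddress.IPv4Address(b).packed[:3]

print(same_c_block("199.181.132.250", "199.181.132.181"))  # same first three bytes
print(same_c_block("199.181.132.250", "199.181.133.250"))  # third byte differs
```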

So, as it turns out (surprise!) the two IP addresses above are indeed related:

Disney and ABC have a near identical IP address, both in the same C-Block.

Sure enough they are both companies in the Disney family. It makes some sense that links between these two domains probably shouldn’t indicate as much trust as links from similarly large, but unrelated, sites.

Introducing IPv6

So, there is a problem with IP addresses in the format above (IPv4): there are “only” about 4 billion of them, and we have essentially exhausted the supply. We have so many connected devices nowadays, and the creators of IPv4 never envisioned how vast the Internet would become in the 30 years after its release. Luckily, they saw the problem early on and started working on a successor, IPv6 (IPv5 was used for another protocol that was never widely released).

IPv6 address format:

IPv6 addresses are much longer than IPv4 addresses; the format looks like this:

An example IPv6 address.

Things just got serious! There are now 8 blocks rather than 4, and rather than each block being 1 byte (represented as a number from 0-255), each block is 2 bytes, represented by 4 hexadecimal characters. There are 128 bits in an IPv6 address, meaning that instead of a measly 4,000,000,000 addresses like IPv4, IPv6 has
around 340,000,000,000,000,000,000,000,000,000,000,000,000 addresses.
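If you want to sanity-check those numbers, Python can do the arithmetic (the address here is from the reserved IPv6 documentation prefix, used purely as an example):

```python
import ipaddress

ip6 = ipaddress.IPv6Address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

print(len(ip6.packed) * 8)  # 128 bits per IPv6 address
print(2 ** 32)              # the entire IPv4 address space
print(2 ** 128)             # the entire IPv6 address space
```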

In the next few years we'll be entering a world where hundreds of networked devices in every home will need an IP address, and IPv6 will help make that a reality. However, we are also going to see websites using IPv6 addresses more and more commonly, and a few years from now we'll start to see websites that have only an IPv6 address.

CIDR Notation

Before we go any further, we need to introduce a key concept for understanding IP addresses: CIDR notation.

IPv6 uses CIDR notation (e.g. /24) exclusively, so the SEO community will need to understand this concept. It is really simple, but usually badly explained.

As we mentioned, IPv4 IP addresses are 32 bits long, so if we were sick and twisted we could look at the IP address as binary:

Example IPv4 IP address shown in dot decimal format and as binary.

Colloquially, CIDR notation is a format for describing a group of closely related IP addresses, in a similar fashion to a C-Block. It is written as a number after a slash, appended to a partial IP address (e.g. 199.181.132/24), stating how many of the leading bits (binary digits) are identical. CIDR is flexible, and in CIDR terms a C-Block is a /24, because the first 24 bits (3 groups of 8 bits) of the addresses are the same:

Two IP addresses in the same C-Block. The first 24 bits (3 blocks of 8 bits) are identical.

This can be represented in this case as 199.181.132/24.

Now, CIDR notation is more refined and more accurate than the concept of a C-Block; in the example above the two IP addresses are not just in the same C-Block, they are even more closely related, as the first 6 bits of the last block are also identical. In CIDR notation we could say both these IP addresses are in the 199.181.132/30 block, indicating that the 30 leading bits are identical.

Notice that with CIDR the smaller the number after the slash, the more IP addresses in that block (because we’re saying fewer leading bits must be identical).
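The same standard-library module handles CIDR blocks directly, so the "C-Block is a /24" idea can be checked like this (addresses illustrative):

```python
import ipaddress

# A C-Block is simply a /24 network in CIDR terms.
c_block = ipaddress.ip_network("199.181.132.0/24")
print(ipaddress.IPv4Address("199.181.132.250") in c_block)  # True
print(c_block.num_addresses)                                # 2**(32 - 24) = 256

# A tighter /30 block fixes 30 leading bits, leaving only 2 bits free.
tight = ipaddress.ip_network("199.181.132.248/30")
print(tight.num_addresses)                                  # just 4 addresses
```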

IPv6 & C-Blocks?

Now, "CIDR /24" is not exactly catchy, so someone coined the name “C-Block” to make it easier to talk about, but that name doesn't extend easily to IPv6. So, the question is: can we generalise something similar?

The point of a C-Block, from Google's perspective and ours as SEOs, is to identify whether links originate from the same ISP network, so that should remain the focus. My best guess would be to look at how IPv6 addresses are allocated to ISPs (ISPs normally get large contiguous blocks of IP addresses, which they can then use for their customers' websites).

In IPv4, ISPs would own bunches of C-Blocks, so if you saw multiple links originating from the same C-Block, it implied the sites were hosted together and there was a far greater chance they were somehow related.

Illustration of an “ISP Block” (/32); the blue part of the address is stable and indicates the ISP. The red part can change and represents addresses at that ISP.

With IPv6, I believe that ISPs will be given /32 blocks (the leading 32 bits will be the same, leaving 96 bits to create addresses for their customers), which they will then assign to their users in /64 blocks (I asked a few people, and this tends to be what is happening, though I have read that it might sometimes be /48 blocks instead). Notice that each ISP now has many orders of magnitude more IP addresses than the whole Internet had before!

This also means each end user will get more IP addresses for their own network than there are in total IPv4 IP addresses. Welcome to the Internet of things!

These ISPs may be serving home users, in which case each house gets a block of IPv6 addresses for its devices (for the techies: IPv6 does away with NAT for the most part, I believe – all the devices in your house will get a ‘real’ IP). In the other scenario, the ISP serves servers, and each server gets assigned a /64 block; this is the case we are interested in.

Illustration of a “Customer Block” (/64); the blue part indicates a particular customer. The red part can change and represents addresses belonging to that customer.

So, I think the equivalent of a C-Block in IPv6 land would be a /32 block because that is what an ISP will usually be assigned (and allows them to then carve that up into 4 billion /64 blocks for their users!).

Furthermore, in IPv6 the minimum allocation is /32, so as I understand it a single /32 block cannot span multiple ISPs; there is no way two IPs in the same /32 could belong to two different ISPs. Our goal is still to examine whether two sites are more likely to be related than two random sites, and knowing they are on the same ISP (which is what C-Blocks told us) achieves that.

Also, if you chose /64 as the grouping, each ISP would have 4 billion such blocks to give away, which is far too fine-grained to identify associations between sites that land in different blocks.

However, there is a counter argument here. Note that a single server having a /64 block of IPs means that every website should have a different IPv6 address (even if it shares an IPv4 address).

Geek side note: indeed, the Host HTTP header can carry an IPv6 literal (written in square brackets) to indicate which site on the server you want.
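Python's standard urllib shows how a URL with an IPv6 literal is parsed; 2001:db8::1 is a documentation address used purely as an example:

```python
from urllib.parse import urlsplit

# IPv6 literals are bracketed in URLs so the address can be told apart
# from the optional port number that follows the colon.
parts = urlsplit("http://[2001:db8::1]:8080/index.html")
print(parts.hostname)  # the address, with brackets stripped
print(parts.port)
```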

So now a single server with multiple sites will have a separate IP for each of those sites (it is also possible that the server has multiple IPv6 blocks assigned, one for each different customer – I think this is actually the intention and hopefully becomes the reality).

So, if I am running a network of websites that I'm interlinking with one another, and I have just a single hosting account, then it is quite likely that all of these sites are in the same /64 block of IPv6 addresses. That should be a very strong signal that the sites are closely related. However, I'm fairly sure that those trying to be manipulative will try to avoid this scenario and end up getting a different block of addresses for each site. But if they are with the same ISP, they'll still be in the same /32 block.

My recommendation on an IPv6 C-Block

So, if you followed all that then I’d suggest:
  • Sites in the same /32 block are the equivalent of sites in the same C-Block in IPv4.
  • Sites in the same /64 block are either on the exact same server or belong to the same customer, so they are even more closely related than at the C-Block level.
These need easier, more accessible names; how about:
  • “ISP Block” for /32 blocks.
  • “Customer Block” for /64 blocks.
Then we would be able to say things like:
  • In IPv6, IP addresses in the same ISP Block most closely resemble the relationship of IPs in the same C-Block in IPv4.
  • In IPv6, IP addresses in the same Customer Block are likely very closely related, and probably belong to the same person/organisation.
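The classification above boils down to counting shared leading bits. A minimal sketch, using the /32 "ISP Block" and /64 "Customer Block" names proposed in this post (they are this post's suggestion, not a standard) and made-up documentation-prefix addresses:

```python
import ipaddress

def ipv6_relationship(a: str, b: str) -> str:
    """Classify two IPv6 addresses by how many leading bits they share."""
    x, y = ipaddress.IPv6Address(a), ipaddress.IPv6Address(b)
    shared_bits = 128 - (int(x) ^ int(y)).bit_length()  # identical leading bits
    if shared_bits >= 64:
        return "same Customer Block (/64)"
    if shared_bits >= 32:
        return "same ISP Block (/32)"
    return "different ISPs"

print(ipv6_relationship("2001:db8:1:1::1", "2001:db8:1:1::2"))
print(ipv6_relationship("2001:db8:1:1::1", "2001:db8:ffff::1"))
print(ipv6_relationship("2001:db8::1", "2600:1::1"))
```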

What should I take away from all this?

As I mentioned further up, I'm not convinced that IPv4 C-Blocks are as important from Google's perspective as they once were, as Google can likely access many other signals to tie sites together. While C-Blocks are still useful to SEOs as a substitute for those signals (we don't have all of Google's resources), they aren't something that should guide your decision making. If you are running legitimate sites, you shouldn't be concerned about hosting them in the same C-Block. In fact, I'd advise against deliberately avoiding it, as that could itself look manipulative to Google (who will likely work out the relationship anyway).

With IPv6, I think the “Customer Blocks” could be a very important SEO feature, as it is an even closer relationship than C-Blocks were, and this is something that Google will likely make use of. It is still going to take a while until IPv6 becomes prevalent enough that all of this is important, so for the moment this is just something to have on your radar as it will begin to increase in importance over the next couple of years.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

View full post on Moz Blog

Facebook To Give Users More Control Over The Ads They See by @mattsouthern

In an announcement made today, Facebook is taking a cue from its users and taking two major steps to make ads better. In the first step, Facebook will be introducing interest-based advertising to users in the US: When we ask people about our ads, one of the top things they tell us is that they want to see ads that are more relevant to their interests. Today, we learn about your interests primarily from the things you do on Facebook, such as Pages you like. So, for example, if you’re in the market for a new TV and start shopping […]

The post Facebook To Give Users More Control Over The Ads They See by @mattsouthern appeared first on Search Engine Journal.

View full post on Search Engine Journal

SEO and Content Marketing, and How They Work Hand in Hand – iMedia Connection (blog)

When it comes to online marketing, it is important to grab the attention of people and search engines. In order for businesses to do that right, it takes a concerted focus on both search engine optimization (SEO) and content marketing, and it is


View full post on SEO – Google News

PPC & SEO: How They Work Together to Maximize Results – Dealer Marketing Magazine

With the ongoing battle to beat the competition and climb to the top of Google's first page, a common question we hear is: “Which is more important for my dealership, PPC or SEO?” To answer this question, it's important to understand PPC and SEO have
Partial PPC Management is an Epidemic and it's Killing Your Businesses Revenue – Business 2 Community


View full post on SEO – Google News

5 Googley SEO Terms – Do They Mean What You Think They Mean? – Search Engine Watch

Many terms are used within the search industry. These words all have very specific meanings when it comes to search engine optimization (SEO) and how you implement your strategy. However, many of these terms are often used incorrectly. Sometimes this …

View full post on SEO – Google News

5 Googley SEO Terms – Do They Mean What You Think They Mean?

Robots.txt. Google DNS. Penguin, Panda, and penalties. Duplicate content filter. PageRank. You’ve heard all these terms, but these words have very specific meanings and are among the most commonly misunderstood terms when it comes to Google and SEO.

View full post on Search Engine Watch – Latest

Link Building ≠ Content Marketing. But Here’s How They Fit Together

Link building has gone through a lot in the past 18 months — and when content marketing became the buzzword of the day, there were troves of articles saying that content marketing is the new link building or that content marketing has replaced link building. That couldn’t be more false….

Please visit Search Engine Land for the full article.

View full post on Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Penguin Penalties: Do Webmasters Respond the Way They Should?

Posted by russvirante

Penalization has become a regular part of the search engine optimization experience. Hell, it has changed the entire business model of Virante to building tools and services around penalty recovery and not just optimization. While penalties used to be a crude badge of honor worn by those leaning towards the black-hat side of the SEO arts, they are now a regular occurrence that seems to impact those with the best intentions. At Virante, we have learned a lot about penalties over the last few years—discerning between manual and algorithmic, Panda and Penguin, recovery methodologies and risk mitigation—but not much study has been done on how websites in general respond to penalization. We have focused more on what webmasters ought to do without studying what webmasters actually do in response to various penalties.

How webmasters respond matters

As much as we often feel a communion among other SEOs in our resistance to Google, the reality is that we are engaged in a competitive industry where we fight for customers in a very direct manner. This duality of competition—with Google and with each other—plays out in a very unique way when Google penalizes a competitor. We learn a great deal in the following months about the competition, such as the sophistication of their team (how quickly they respond, how many links they remove, how quickly they recover), their financial strength (do they increase ad spend, how much and on what terms), and whether they eventually recover.

It is also important from a wider perspective of understanding Google’s justifications for particular types of penalties that seem sweeping and inconsistent. Conspiracy theories abound regarding Penguin updates; I can’t count how many times I have heard someone say that penalties are placed to encourage webmasters to switch to AdWords.

So, I decided to investigate the behavior of webmasters post-Penguin from a macro perspective to determine what kinds of responses we are likely to see, and perhaps even answer some questions about Google’s motivations in the process.

The methodology

  1. Collect examples: I collected a list of 100 domains that were penalized by Penguin 2.0 last year and confirmed their penalization through SEMRush.
  2. Establish controls: For each penalized site, I identified one website that ranked in the top 10 for their primary keyword that was not penalized.
  3. Get rankings and AdWords data: For each site (both penalized and control), we grabbed their historical rankings and AdWords spend from SEMRush for the months leading up to and following Penguin 2.0.
  4. Get historical link data: For each site (both penalized and control), we grabbed their historical link data from Majestic SEO for the months leading up to and following Penguin 2.0.
  5. Analyze results: Using simple regression models, we identified patterns among penalized sites that differed significantly from the control sites.

Do webmasters remove bad links?

After a Penguin 2.0 update, it is imperative to identify and remove bad links or, at minimum, disavow them. While we can’t measure disavow data, we can measure link acquisition data quite easily. So, do webmasters in general follow the expectations of link removal following a penalty?

Aggressive link removal: It appears that aggressive link removal is a common response to Penguin, as expected. However, we have to be careful with the statistics to make sure we correctly examine the degree and frequency with which link removal is employed. The control group on average increased their root linking domains by 41 following Penguin 2.0, but that could best be explained by a few larger sites increasing their links. When looking at an average of link proportions, only about 22% of the control sites actually saw an increase in links in the three months post-Penguin. The sites that were penalized saw a drop of 578 root linking domains. However, once again, this statistic is impacted by the link graph size of the individual penalized sites. 15% of those penalized still saw an increase in links in the three months following Penguin.

So, approximately 22% of domains not impacted by Penguin 2.0 had more root linking domains three months after the penalty, while only 15% of those penalized did. Notice how small the discrepancy is: webmasters' behavior differed by only 7 percentage points depending on whether or not they were penalized. While those penalized certainly removed more links, the practice of link building in general was affected very similarly. In the three months following Penguin, 78% of the control websites either dropped links or at least stopped link building and lost them through attrition. This is remarkable. There appears to be a deadening effect related to Penguin that impacts all sites, not just those that are penalized. While many of us expected Penguin to have a profound impact on link growth as webmasters respond to fears of future penalties, it is still amazing to see it borne out in the numbers.

Deadening Link Growth

What I find more interesting is the variation in webmaster responses to Penguin 2.0. Some penalized webmasters actually doubled down on link building, likely attributing their rankings loss to having too few links, rather than being penalized. We can tease this type of behavior out of the numbers by looking at the variances in percentage link change over time.

The variance among link fluctuations for sites that were not penalized was .08, but the variance among sites that were penalized was .38. This means that the behavior of websites after being penalized was far more erratic than that of sites that were not. Some penalized sites made the poor decision to greatly increase their links, although more made the decision to greatly decrease them. If all webmasters responded uniformly to penalties, one would not expect to see such an increase in variance.
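The variance comparison is straightforward to reproduce. The numbers below are made up purely for illustration (the post's real figures came from SEMRush and Majestic data); the point is only that erratic responses show up as a much larger variance:

```python
from statistics import pvariance

# Hypothetical month-over-month percentage link changes, as fractions.
control   = [0.05, -0.02, 0.01, -0.04, 0.03]   # not penalized: small, steady moves
penalized = [-0.90, 0.45, -0.60, 0.80, -0.75]  # penalized: some remove, some double down

print(pvariance(control))
print(pvariance(penalized))  # far larger: behaviour is much more erratic
```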

As SEOs, we clearly have our work cut out for us in teaching webmasters that the appropriate response to a penalty is very much NOT adding more and more links to your profile, because this behavior is actually more common than link removal post-penalty. It is worth pointing out that webmasters may have disavowed links rather than removing them. We do not have access to that data, so we cannot be certain. It is possible that some webmasters chose to disavow while others removed links, and that the net impact on link value was identical, which would make the variance comparison misleading.

Do webmasters increase their ad spend?

I’ll admit, I had my fingers crossed on this one. Honestly, who doesn’t want to show that Google is just penalizing webmasters because it helps their bottom line? Wouldn’t it be great to catch the search quality team not being honest with us about their fiduciary independence?

Well, unfortunately it just doesn’t bear out. The evidence is fairly clear that there is no reason to believe that webmasters increase ad-spend following a Penguin 2.0 penalty. Let’s look at the numbers.

Ad Traffic Increase

First, across our data set, no one who was an advertiser prior to Penguin 2.0 stopped advertising in AdWords in the three months after. Of the sites that were not advertisers prior to Penguin 2.0, 10% of those not penalized ended up becoming advertisers in AdWords, while only 4% of those penalized became advertisers. Sites that weren’t penalized were far more likely to join the AdWords program than those that were.

It wasn't only true that those unaffected by Penguin 2.0 were more likely to sign up for AdWords; they increased their average ad spend, too. There was a 78% greater increase in ad spend by those unaffected by Penguin 2.0 than by those who were. Moreover, bidding shifts for those not impacted by Penguin remained similar across multiple randomly selected three-month intervals, meaning that there appeared to be no Penguin-related impact whatsoever.

We can safely conclude from this that there does not appear to be a direct, causal relationship between Penguin penalties and increased AdWords spending. Now, one could of course make the argument that better search results might increase ad revenue in the future as Google attracts more users to a better search engine, but accusations of a fiduciary motivation for releasing updates like Penguin 2.0 cannot be substantiated with this data.

Do they recover?

By the 5th month, approximately 24% of sites that were penalized were at or above their pre-Penguin 2.0 traffic. This is an exciting outcome because it does show recovery from Penguin is possible. Perhaps most important, sites that were penalized and removed links on average recovered 28% more traffic in the five months after Penguin than those that did not remove links. We have good evidence to suggest at least a correlation between post-penalty link removal and traffic recovery. Of course, we do have to take this with a grain of salt for a number of reasons:

  • Sites that removed links may have been more likely to use the disavow tool as well.
  • Sites that removed links may have been more SEO-savvy in general and fixed on-site issues.
  • Sites that did not remove links may have had more intractable penalties, thus their lack of removal was a conscious decision related to the futility of a removal campaign.

These types of alternate explanations should always be entertained when using correlative statistics. What we do have good evidence of is that traffic recovery is possible for sites hit by Penguin, although it is by no means guaranteed or universal. Penguin 2.0 needn’t be a death sentence.


So, in a few weeks, we are likely to see another Penguin update, assuming Google follows its late-spring release date. When Penguin hits, be ready—even if you aren’t going to be penalized. Here are some things you should be doing…

  1. Know your bad links already. There is no reason to wait to be prepared for removal or disavowal. While I personally think that preemptive disavowal is likely the best practice, there is no excuse to just wait.
  2. Don’t worry about AdWords. There is no statistical evidence that your competition will surge post-Penguin in any meaningful fashion. The competitors who might come to depend more on AdWords also have less organic revenue to invest in the first place. At best, these even out.
  3. Don’t double down. While we can’t be certain that link removal gets you out of penalties (it is merely correlated), we can be certain that even a correlation doesn’t exist for increasing links and earning recovery post-Penguin penalties.
  4. Never assume. The behavior of your competitors and of Google itself is far more complex than off-the-cuff assumptions like “Google just penalizes sites to force people into AdWords” or that your business will know intuitively to remove or disavow links post-Penguin.

Hopefully, this time around we will all be more prepared for the appropriate response to Google’s next big update—whether we are hit or not.


View full post on Moz Blog

LinkedIn Announces They Have Reached 300 Million Members Worldwide

LinkedIn announced today that they have reached the milestone of over 300 million members worldwide, more than half of […]

Author information

Matt Southern

Matt Southern is a marketing, communications and public relations professional. He provides strategic digital marketing services at an agency called Bureau in Ontario, Canada. He has a bachelor's degree in communication and an unparalleled passion for helping businesses get their message out.

The post LinkedIn Announces They Have Reached 300 Million Members Worldwide appeared first on Search Engine Journal.

View full post on Search Engine Journal

6 Changes We Always Thought Google Would Make to SEO that They Haven’t Yet – Whiteboard Friday

Posted by randfish

From Google’s interpretation of rel=”canonical” to the specificity of anchor text within a link, there are several areas where we thought Google would make a move and are still waiting for it to happen. In today’s Whiteboard Friday, Rand details six of those areas. Let us know where you think things are going in the comments!

For reference, here’s a still of this week’s whiteboard!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. Today, I’m going to tackle a subject around some of these changes that a lot of us in the marketing and SEO fields thought Google would be making, but weirdly they haven’t.

This comes up because I talk to a lot of people in the industry. You know, I’ve been on the road the last few weeks at a number of conferences – Boston for SearchLove and SMX Munich, both of which were great events – and I’m going to be heading to a bunch more soon. People have this idea that Google must be doing these things, must have made these advancements over the years. It turns out, in actuality, they haven’t made them. Some of them, there are probably really good reasons behind it, and some of them it might just be because they’re really hard to do.

But let’s talk through a few of these, and in the comments we can get into some discussion about whether, when, or if they might be doing some of these.

So number one, a lot of people in the SEO field, and even outside the field, think that it must be the case that if links really matter for SEO, then on-topic links matter more than off-topic links. So, for example, if I’m linking to two websites here about gardening resources, A and B, both about gardening resources, and one of those comes from a botany site and the other one comes from a site about mobile gaming, well, all other things being true, it must be that the one about botany is going to provide a stronger link. That’s just got to be the case.

And yet, we cannot seem to prove this. There doesn't seem to be data to support it. Anyone who's analyzed this problem in depth, as a number of very advanced SEOs have over the years by going through the process of classifying links, seems to come to the same conclusion: Google appears to think about links from a more subject- and context-agnostic perspective.

I think this might be one of those times where they have the technology to do it. They just don’t want to. My guess is what they’ve found is if they bias to these sorts of things, they get a very insular view on what’s kind of popular and important on the Web, and if they have this more broad view, they can actually get better results. It turns out that maybe it is the case that the gardening resources site that botanists love is not the one with mass appeal, is not the one that everyone is going to find useful and valuable, and isn’t representing the entirety of what the Web thinks about who should be ranking for gardening resources. So they’ve kind of biased against this.

That is my guess. But from every observable input we’ve been able to run, every test I’ve ever seen from anybody else, it seems to be the case that if there’s any bias, it’s extremely slight, almost unnoticeable. Fascinating.

Number two, I'm actually in this camp. I still think that someday it's coming, that anchor text influence will eventually decline. Yet while other signals have certainly risen in importance, it seems that specific anchor text inside a link is still far more powerful than generic anchor text.

Getting specific, targeting something like “gardening supplies” when I link to A, as opposed to on the same page saying something like, “Oh, this is also a good resource for gardening supplies,” but all I linked with was the text “a good resource” over to B, that A is going to get a lot more ranking power. Again, all other things being equal, A will rank much higher than B, because this anchor text is still pretty influential. It has a fairly substantive effect.

I think this is one of those cases where a lot of SEOs said, “Hey, anchor text is where a lot of manipulation and abuse is happening. It’s where a lot of Web spam happens. Clearly Google’s going to take some action against this.”

My guess, again, is that they’ve seen that the results just aren’t as good without it. This speaks to the power of being able to generate good anchor text. A lot of that, especially when you’re doing content marketing kinds of things for SEO, depends on nomenclature, naming, and branding practices. It’s really about what you call things and what you can get the community and your world to call things. Hummingbird has made advancements in how Google does a lot of this text recognition, but for these tough phrases, anchor text is still strong.

Number three, 302s. So 302s have been one of these sort of long-standing kind of messes of the Web, where a 302 was originally intended as a temporary redirect, but many, many websites and types of servers default to 302s for all kinds of pages that are moving.

So A 301-redirects to B, versus C, which 302-redirects to D. Is it really the case that the people who run C plan to change where the redirect points in the future, and is it really the case that they do so more than A does with B?

Well, a lot of the time, probably not. But it still is the case, and you can see plenty of examples of this happening out in the search results and out on the Web, that Google interprets this 301 as being a permanent redirect. All the link juice from A is going to pass right over to B.

With C and D, it appears that for big brands, when the redirect’s been in place for a long time and Google has some trust in it, and maybe they see some other signals, some other links pointing over there, some of that value does pass over, but it is not nearly what happens with a 301. A 301 is a directive, while a 302 is a nudge or a hint. It’s still important to get those 301s, the right kind of redirect, in place.

By the way, there are also a lot of other 30X status codes that servers might fire. So be careful. If you see a 305, a 307, something weird like that, you probably want a 301 if you’re trying to do a permanent redirect. So be cautious of that.
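For reference, here’s a small lookup (my own sketch, not from the video) of how the common 3xx codes actually break down between permanent and temporary. Note that 305 is a deprecated proxy code and 306 is unused, so neither is a redirect you want; 308, the method-preserving cousin of the 301, is a relatively recent addition (RFC 7238):

```python
# Semantics of the common 3xx redirect status codes.
REDIRECTS = {
    301: "permanent",   # Moved Permanently: the classic permanent redirect
    302: "temporary",   # Found: a hint, not a directive
    303: "temporary",   # See Other
    307: "temporary",   # Temporary Redirect (method-preserving 302)
    308: "permanent",   # Permanent Redirect (method-preserving 301)
}

def is_permanent_redirect(status: int) -> bool:
    """True only for status codes that signal a permanent move."""
    return REDIRECTS.get(status) == "permanent"

print(is_permanent_redirect(301))  # True
print(is_permanent_redirect(302))  # False
print(is_permanent_redirect(305))  # False: not a redirect at all
```

If your server is issuing anything other than a 301 (or 308) for a page that has moved for good, that’s the place to look first.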

Number four, speaking of nudges and hints versus directives, rel=”canonical” has been an interesting one. When rel=”canonical” first launched, what Google said was, “rel=”canonical” is a hint to us, but we won’t necessarily take it as gospel.”

Yet, every test we saw, even from those early launch days, was, man, they are taking it as gospel. You throw a rel=”canonical” on a trusted site accidentally on every page and point it back to the homepage, Google suddenly doesn’t index anything but the homepage. It’s crazy.

You know what? The tests that we’ve seen run and mistakes — oftentimes, sadly, it’s mistakes that are our examples here — that have been made around rel=”canonical” have shown us that Google still has this pretty harsh interpretation that a rel=”canonical” means that the page at A is now at B, and they’re not looking tremendously at whether the content here is super similar. Sometimes they are, especially for manipulative kinds of things. But you’ve got to be careful, when you’re implementing rel=”canonical”, that you’re doing it properly, because you can de-index a lot of pages accidentally.

So this is an area of caution. It seems like Google still has not progressed on this front, and they’re taking that as a pretty basic directive.
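Given how harshly Google treats the tag, it’s worth auditing what your pages actually declare. A quick sketch like this (a hypothetical helper of my own, Python standard library only) pulls the rel=”canonical” target out of a page’s HTML so you can spot an accidental sitewide canonical before it de-indexes everything:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Find the first <link rel="canonical" href="..."> in a page."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # Keep only the first canonical seen, mirroring common crawler behavior.
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

def find_canonical(html: str):
    parser = CanonicalFinder()
    parser.feed(html)
    return parser.canonical

page = '<html><head><link rel="canonical" href="https://example.com/page"></head></html>'
print(find_canonical(page))  # https://example.com/page
```

Run something like this over a sample of URLs from each template on your site; if every result points at the homepage, you’ve found the kind of mistake described above.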

Number five, I think, for a long time, a lot of us have thought, hey, the social web is rising. Social is where a lot of the great content is being shared, where people are pointing to important things, and where endorsements are happening, more so, potentially, than on the link graph. The social web and the social graph have become sort of the common man’s link graph.

And yet, with the exception of the two years when Google had a very direct partnership with Twitter, when tweets, their indexation, and all that kind of stuff were heavily influential in Google search results, we haven’t seen that again since the partnership broke up. Google has actually backtracked on social, and they’ve kind of said, “Hey, you know, tweets, Facebook shares, likes, that kind of stuff, it doesn’t directly impact rankings for everyone.”

Google+ being sort of an exception, especially in the personalized results. But even the tests we’ve done with Google+ for non-personalized results have appeared to do nothing, as yet.

So these shares that are happening all over social, I think what’s really happening here is that Google is taking a look and saying, “Hey, yes, lots of social sharing is going on.” But the good social sharing, the stuff that sticks around, the stuff that people really feel is important is still, later on at some point, earning a citation, earning a link, a mention, something that they can truly interpret and use in their ranking algorithm.

So they’re relying on the fact that social can be a tip-off, or a tipping point, for a piece of content or a website or a brand or a product to achieve some popularity, but that popularity will eventually be reflected in the link graph, and they can wait until that happens rather than using social signals directly. To be fair, social signals carry some manipulation potential that I think Google is worried about exposing itself to. There’s also, of course, the fact that they no longer have API-level access and partnerships with Facebook and Twitter, and that could be causing some of this too.

Number six, last one. For a long time Google talked about cleaning up web spam, and from ’06, ’07 to about 2011, 2012, it was pretty sketchy out there. It was tough.

When they did start cleaning up web spam, I think a lot of us thought, “Well, eventually they’re going to get to PPC too.” I don’t mean pay-per-click. I mean porn, pills, and casino.

But it turns out, as Matt Brown from Moz wisely and recently pointed out in his SearchLove presentation in Boston, that if you look at the search results in these categories, whatever it is (buy Cialis online, Texas hold-’em no-limit poker, and a third removed for content, because Whiteboard Friday is family-friendly, folks), whatever search you’re performing in these spheres, these are actually kind of the early-warning SERPs of the SEO world.

You can see a lot of the changes Google is making around spam, authority, and signal interpretation. One of the most interesting ones, if you study this space, is that a lot of those hacked .edu pages, and the barnacle SEO that was happening on sub-domains of more trusted sites that had accumulated a bunch of links, are fading a bit. We’re seeing a little more of the rise, again, of exact-match domains and some affiliate sites getting links from more creative places, because Google does seem to have gotten quite a bit better at deciding which links to count, and at judging the authoritativeness of pages that are hanging onto a trusted domain but aren’t well linked to internally on those sites.

So, that said, I’m looking forward to some fascinating comments. I’m sure we’re going to have some great discussions around these. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by


