Okay... now we're getting somewhere!
Backstory & Advice
I've complained at length about something I've seen in the SERPs that drives me nuts. The basic explanation is that some sites seem to be "blessed" and absolutely rock the space they exist in. They'll rank for anything and everything as long as they produce content for it. The worst part is, these sites blow. They're low-grade, low-usefulness, crappy, low-effort trash.
The same thing happens with good sites, and we don't complain about that because it's justified; those sites are nice.
Because I've never been able to figure out why or how they're blessed, my advice (which I don't take myself, given the workload and infrastructure required) would be to create 5 versions of any authority site you want to build and see which one Google randomly "favors" as being better, even though they're all pretty much the same. Then roll with that one.
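If anyone actually ran that experiment, the comparison at the end is the easy part. Here's a minimal Python sketch, assuming you already have a rank-tracking export per variant; the file names, the `keyword`/`position` columns, and using average position as the tiebreaker are all my own placeholders, not anything a specific tool or Google gives you.

```python
import csv
from statistics import mean

# Hypothetical rank-tracker exports, one CSV per site variant (placeholder names).
VARIANT_FILES = {
    "variant-1": "variant1_ranks.csv",
    "variant-2": "variant2_ranks.csv",
    "variant-3": "variant3_ranks.csv",
    "variant-4": "variant4_ranks.csv",
    "variant-5": "variant5_ranks.csv",
}

def average_position(path):
    """Average SERP position across tracked keywords (lower is better)."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return mean(float(row["position"]) for row in rows)

def pick_favored_variant(files):
    """Print each variant's average position and return the best one."""
    scores = {name: average_position(path) for name, path in files.items()}
    for name, avg in sorted(scores.items(), key=lambda kv: kv[1]):
        print(f"{name}: average position {avg:.1f}")
    # The "blessed" variant is whichever one Google consistently places highest.
    return min(scores, key=scores.get)

if __name__ == "__main__":
    print("Favored variant:", pick_favored_variant(VARIANT_FILES))
```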
Webmaster Hangouts
In the most recent Google Webmaster Hangout, someone asked John Mueller:
“Theoretically, could a website that’s only one to two weeks old rank for top positions on Google for ultra-competitive head keywords say for example for shoes with significantly better content only, considering it’s the most important part of the core algorithm? If not then clearly time is a factor when it comes to ranking pages for highly competitive areas no matter how good they are unless new pages are created on already established websites.”
It's a nice question because it's phrased in such a way as to put Mueller (the new Matt Cutts) in a corner live on air. To paraphrase, it's basically: "You guys insist content is the most important ranking factor, but we only see new content ranking for big terms when it's published on old and powerful domains. That means time is a ranking factor. Change my mind." They don't want to admit too much about time being how they fight spammers, because that would mean admitting they can't actually handle it, and it would also incentivize spammers to just scale harder and wait.
John buys some time at the start and throws in some plausible deniability by saying "this is all theoretical and a lot of things could be at play." Then he breaks character and says, "But in practice things aren't so theoretical." Then he's right back to the game: "So I don't really know if there's a good answer that would be useful to give for something like this."
I think Mueller and Cutts and any of the other psyop spokespeople for Google are good people who want to be helpful. Mueller basically proves this by dropping some new hot fire information on us.
How Google Ranks New Websites
Mueller needs to explain away time as a ranking factor, and he ends up doing so by being generous to those of us paying attention. Explaining away time means explaining that brand new sites can rank for competitive terms nearly immediately, which circles us back around to my huge complaint about some sites being blessed. He says:
“We use lots of different factors when it comes to crawling, indexing and ranking. And sometimes that means that completely new websites show up very visibly in search. Sometimes it also means that it can take a bit of time for things to settle down.”
If you read between the lines and do some deductive reasoning: for a brand new site, time is definitely not a factor in the initial considerations of 1) crawling, 2) indexing, and 3) ranking. That means it has to boil down to two things: 1) on-page SEO, and 2) technical SEO (speed, code efficiency, bandwidth needs, information architecture, etc.).
So basically, with brand new sites, something in the on-page SEO and tech SEO matches up in a way that makes Google think the site will be a good performer in the future, and they go ahead and rank it accordingly. If it doesn't perform, it loses rankings. If it does perform, it sticks pretty high.
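For anyone who wants to eyeball those two buckets on a brand new site, here's a rough, standard-library-only sketch of the kind of day-one checks I mean. The specific checks (title, meta description, H1 count, response time, page weight) are my guesses at what might "match up", not a list Google has ever confirmed.

```python
import time
import urllib.request
from html.parser import HTMLParser

class OnPageParser(HTMLParser):
    """Collects a few basic on-page elements: <title>, meta description, <h1>s."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta_description = ""
        self.h1_count = 0
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "h1":
            self.h1_count += 1
        elif tag == "meta" and (attrs.get("name") or "").lower() == "description":
            self.meta_description = attrs.get("content") or ""

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def audit(url):
    """Fetch a page and report crude on-page and technical signals."""
    start = time.time()
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    elapsed = time.time() - start

    parser = OnPageParser()
    parser.feed(html)

    return {
        "url": url,
        "response_seconds": round(elapsed, 2),   # crude speed proxy
        "page_bytes": len(html),                 # crude weight/bandwidth proxy
        "title": parser.title.strip(),
        "meta_description_length": len(parser.meta_description),
        "h1_count": parser.h1_count,
    }

if __name__ == "__main__":
    print(audit("https://example.com/"))
```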
Back in the day we called this the honeymoon period. We knew what it was: you index high, they get some data, and then you slip back (hopefully not very far). But now we're getting at exactly which pages get chosen for a honeymoon period. It doesn't seem to happen to as many pages as it used to, because if it did, with the web growing exponentially, the SERPs would be trash. You can't test every page out.
Mueller says exactly that. He says they make estimates and test pages and sites out based on those estimates, giving them more visibility. They can then win or lose. Losing can mean being doomed, winning can mean being blessed. This lines up perfectly with my observation. Some sites get doomed or blessed, but not all sites, just some. This seems to be why.
What it ultimately means is that you can become blessed if you figure out which on-page and technical SEO factors matter, compare them against data from similar sites in your niche and how those sites went on to perform, and then choose the right search terms to optimize for so you perform well once you're given the extra visibility.
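That comparison doesn't need to be fancy to be useful. As a crude first pass, assuming you've already pulled some numbers for sites in your niche and labeled which ones took off after their test period (the feature names and sample values below are invented purely for illustration):

```python
from statistics import mean

# Invented sample data: measurable signals for niche sites plus whether each
# one went on to get "blessed" after its initial test period.
NICHE_SITES = [
    {"word_count": 2400, "load_seconds": 1.1, "internal_links": 35, "blessed": True},
    {"word_count": 800,  "load_seconds": 3.4, "internal_links": 6,  "blessed": False},
    {"word_count": 1900, "load_seconds": 1.6, "internal_links": 28, "blessed": True},
    {"word_count": 1100, "load_seconds": 2.9, "internal_links": 9,  "blessed": False},
]

def compare_features(sites):
    """Average each numeric feature separately for blessed vs. doomed sites."""
    features = [k for k in sites[0] if k != "blessed"]
    for feature in features:
        blessed = mean(s[feature] for s in sites if s["blessed"])
        doomed = mean(s[feature] for s in sites if not s["blessed"])
        print(f"{feature:>15}: blessed avg {blessed:.1f} vs doomed avg {doomed:.1f}")

if __name__ == "__main__":
    compare_features(NICHE_SITES)
```

With real data you'd want far more sites and features before trusting any pattern, but even a gap table like this tells you where to look first.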
Mueller also goes on to say:
"And it can also be that maybe you’re shown less visibly in the beginning and as we understand your website and how it fits in with the rest of the web then we can kind of adjust that."
That sounds like an example of how the majority of new sites are treated. It's not that they understand how the website fits in with the rest of the web; it's simply that they don't test them out and don't have any signals on them yet, so you have to do the slow crawl up the rankings (because time is, in fact, a ranking factor). We've always called that the sandbox.
The question then becomes: for the majority of sites, how can you give Google some signals to work from if you can't rank? I'd guess that having Google Analytics is one way, using Google Fonts might be another, and having visitors that use Chrome is another. Asking people to search specifically for your site through Google is one way. Doing marketing, traffic leaks, and PPC definitely always kick-starts things.
They can't only be using data they get through their own search engine, or nobody new would ever get exposure, and without exposure they'd rarely ever earn a link, while those with existing exposure would keep getting all the natural links.
Anyways, back to the honeymoon period. We know why they do it and what the results look like, but how do they actually, mechanically, do it? There's a patent called "Ranking Search Results" that explains they have a score modification engine (which I believe is how Panda and Penguin work, as side-algorithms or layers on top of the core algo). They generate an initial score and then tweak the scores with respect to the query and based on a "plurality of resources."
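As a mental model only (the patent doesn't publish any formulas, so every weight and number below is invented), the mechanism reads something like this: the core algorithm produces an initial score for a query, then a modification layer adjusts it based on how the page's observed data stacks up against a baseline drawn from a plurality of similar resources.

```python
def initial_score(relevance, on_page, technical):
    """Baseline score from the core algorithm (weights are invented)."""
    return 0.5 * relevance + 0.3 * on_page + 0.2 * technical

def score_modification(base, engagement, niche_baseline):
    """Toy 'score modification engine': boost or demote the base score
    depending on how the page's observed engagement compares with a
    baseline built from a group of similar resources."""
    ratio = engagement / niche_baseline if niche_baseline else 1.0
    # Clamp the adjustment so one signal can't swing the score too far.
    modifier = max(0.5, min(1.5, ratio))
    return base * modifier

# A new page gets an optimistic estimate, is tested with extra visibility,
# and then the modifier rewards or punishes it based on real data.
base = initial_score(relevance=0.8, on_page=0.7, technical=0.9)
print("blessed:", score_modification(base, engagement=120, niche_baseline=80))
print("doomed: ", score_modification(base, engagement=30, niche_baseline=80))
```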
This is food for thought, made more difficult by my rambling. But if anyone was so inclined, they could, given the time and resources, probably determine which sites in their niche are the baselines by rolling out sites based on all of the successful ones and seeing which get blessed or doomed, then repeat to confirm. And then go to town setting up winners.