Back from SMX London 2010, and to say the last two days have been great would be a complete understatement. I had a brilliant time and met a lot of great people – so thanks to all who accepted my handwritten cards! What can I say, I prefer the personal touch ;P
I had started planning a write-up for each and every seminar I attended, but two things stopped me from doing this. Firstly, I lacked the time to actually finish it by a reasonable date. Secondly, I gave up trying to remember EVERYTHING that went on. So I decided to outline my notes and videos here, almost exactly as I’ve written them (with some annotation of my thoughts).
SEO Ranking Factors 2010
The expo kicked off on day 1 with the SEO Ranking Factors 2010 talk.
“Links that are more likely to be clicked pass on more value”
This theory from Rand suggests that, in an attempt to devalue poor-quality link building techniques, Google are placing value on links that actually attract clicks. This means richer rewards for quality links with a high click value, and much less weight for links obtained from mediocre sources, such as new profiles that never see any clicks. My advice is to keep it real and don’t be afraid to actually speak to someone. An email will do OK, but a phone call will really set you apart, because you can guarantee not many others are doing it.
In response to his own question (do Twitter’s results have an influence on the SEs?): “Twitter literally cannibalizes all this data…”
Rand mentioned that all the real-time goodness from Twitter is getting recognised by the two major SEs. This is not conclusive, but with Google and Bing both signing deals with the social media giant, it would suggest that they are using the recency of data to display current events. This has of course been evident in the real-time search results/feeds as well as the recent UI update to promote filters such as ‘latest’. Why people didn’t put their hand up I don’t know.
Rand also brought up the debate about keyword weighting within classes. I was quite intrigued by the study on H1s vs IMG ALTs, which suggested the latter has a slightly better correlation in terms of impact. To be truthful, I hope this is the case… anything that keeps the web aimed at the user suits me. Having said this, should we be concerned that keyword-stuffing abuse shifting to ALT text will really hurt those who rely on accessibility most?
Another discussion was on the placement of keywords within the title tag. It’s long been argued that the closer your keyword phrase is to the front of the title, the better for SEO. A study found this to be correct, suggesting a trend like the one below (please excuse my poor curve!):
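To make the idea concrete, here’s a hypothetical pair of title tags for the same page – the site name, page and keyword phrase are all invented for illustration:

```html
<!-- Hypothetical example: same page, keyword phrase "london seo agency". -->
<!-- Front-loaded keyword (the trend the study suggests performs better): -->
<title>London SEO Agency | Acme Marketing</title>

<!-- Trailing keyword (the study suggests a weaker correlation): -->
<title>Acme Marketing – home of our London SEO agency</title>
```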
I think this reinforces the need to think creatively about your META stuff. After all, it is your first touch point to ‘sell’ your website to the browsing visitor. I do reserve judgement on whether creativity [in this instance] is quite that important; appealing to your audience, first and foremost, is the most valuable factor.
You can download Rand’s presentation slides here:
301s Are Dying!
Perhaps the most controversial delivery came from Rob Kerry (Ayima), whose research suggests that 301s are no longer being treated by Google as they once were. We all know about those abusing 301s to maximize domain/page authority and pass on ‘false’ credibility, but so far not much had been done about it.
Rob delivered his research suggesting that 301s are not passing on all the juice they used to, and that the way round this problem is to painstakingly re-point all links within an old site to the new locations – essentially parking the old domain on top of the new. There were differing opinions on rel=canonical tags, with Rand suggesting that he’d seen them be “over-respected in many cases”, and Mikkel saying that he had seen them be “completely ignored.” What would SEO be without the lack of clarity, eh?
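As a rough sketch of the page-by-page approach Rob described – mapping each old URL to its new home rather than relying on one catch-all redirect – here is a hypothetical Apache .htaccess fragment (domains, paths and the canonical URL are all invented for illustration):

```apache
# Hypothetical .htaccess on the old domain: one explicit 301 per page,
# instead of a single blanket redirect to the new homepage.
RewriteEngine On
RewriteRule ^services\.html$ http://www.example-new.com/services/ [R=301,L]
RewriteRule ^about-us\.html$ http://www.example-new.com/about/ [R=301,L]

# The rel=canonical hint debated above would sit in the <head> of the
# destination page on the new domain, e.g.:
# <link rel="canonical" href="http://www.example-new.com/services/" />
```

The panel’s disagreement over how much weight rel=canonical actually carries is a good reason to treat it as a hint to supplement the redirects, not a replacement for them.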
The conclusions were that niche sites would see more benefit, and that larger sites relying on deep-level landing pages would suffer. Rob’s main advice was,
“from now on, don’t rely solely on your domain authority. Target your key phrases on your homepage and look to increase the overall keyword ‘noise’ throughout your site.”
Get Rid Of All That Cr*p!
In his flash suit, Mikkel deMib Svendsen entertained with his frank delivery and his insistence that we all clean ourselves up: ridding our sites of excess code clutter (yes, that means moving away from .NET) and checking them periodically for malware that might have been installed by hackers. He suggested you look out for the following:
- Iframes pulling in data from a remote site.
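For anyone unsure what to grep for, injected markup of this kind often looks something like the following – a hypothetical example, with an invented attacker domain:

```html
<!-- Hypothetical injected markup: a tiny, invisible iframe pulling
     content from a domain you don't own. If you find something like
     this in your templates and didn't put it there, you've likely
     been compromised. -->
<iframe src="http://attacker-example.com/x.php" width="1" height="1"
        style="visibility:hidden"></iframe>
```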
Mikkel also gave away a handy website to check for vulnerabilities within many CMS’ and software versions. Here are some improvements you can make to your website today:
- Delete excess code
- Compress images and objects further
- Make use of GZip compression on larger sites
- Place all CSS in one external file
- Get rid of ‘view_state’ if using .NET
- Remove line-breaks from your source code. Remember that 1 space = 1 byte of data to read
- Remove unnecessary META data
- Remove comment code
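The savings from the last few items stack up quickly once GZip is in the mix. A minimal Python sketch (the markup snippet is invented purely to illustrate the byte counts):

```python
import gzip

# Hypothetical bloated markup: comments, line breaks and excess
# whitespace all cost bytes (1 space = 1 byte, as Mikkel noted).
# Repeated 50 times to simulate a larger page.
html = (
    "<!-- navigation starts here -->\n"
    "<div class=\"nav\">\n"
    "    <a href=\"/\">Home</a>\n"
    "    <a href=\"/about\">About</a>\n"
    "</div>\n" * 50
)

raw_bytes = len(html.encode("utf-8"))
gzipped_bytes = len(gzip.compress(html.encode("utf-8")))

print(f"raw: {raw_bytes} bytes, gzipped: {gzipped_bytes} bytes")
```

On repetitive markup like this, compression does most of the heavy lifting – but stripping comments and whitespace first still shrinks both the raw and the compressed payload, which is exactly the housekeeping being recommended.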
The purpose of all this was not only to promote good housekeeping, but to align your website with Google’s use of site speed as a ranking factor.
This got the expo off to a great start and the debates really proved what a healthy industry we’re all in. If you have an opinion, or like this post, please comment and share with friends/colleagues/strangers.
Part 2 coming soon!