How Google Plus Could Change SEO

Google's new social network attracted attention from SEOs and the media before it went live. Now that Google+ has been in use for a couple of weeks, some of these early adopters have been wondering how it might change the face of SEO and Google's search results. Let's take a closer look.

The key point about Google+ is that its various functions and services collect information from users. Google can use this information to fine tune things so that it delivers results more in line with what you'd like to see when you use its search engine. Not coincidentally, Google can also use this information to deliver more relevant, targeted ads, which will lead to more profits both for the search engine and its advertisers.

To better understand this picture, let's take a look at the kind of data Google might garner from Google+ users. Brian Chappell covers this topic well. He mentions seven data points from Google+ that could help Google with its search algorithm.

Chappell starts with Google Circles. These are a way to sort your contacts and put them in particular groups. It's a great addition to social networking, as it allows you to designate certain people as co-workers, family, friends, etc. You can create new circles and name them yourself. So if you belong to a hiking club and create a circle you've labeled “hiking club,” you've indirectly indicated to Google that these people are interested in hiking. Too specific? Chappell actually takes a more general view of Google Circles, seeing them as a vote for a person, just like links are a vote for a website. He thinks it could give Google a better understanding of “the influencers within its network.”

The second item Chappell points to is Google's +1 button. As with Facebook's Like button, you can apparently +1 a lot of things. When a status update, image, web page, or what have you has received a lot of pluses from visitors, it would be natural to assume it's trusted and authoritative in some way. As Chappell rightly points out, however, the feature could easily fall prey to manipulation, as so many other potential metrics have in the past.

It's the third item Chappell mentions, though, that might affect Google's algorithm the most. It's called Google Sparks. Sparks basically lets you add interests and delivers links related to those interests. You can then share those links with one (or presumably more) of your circles and even chat about them. In reporting on Sparks, Barry Schwartz thought it was fairly limited, as it didn't contain much information in which he was interested, and seemed to mirror Google News. Hopefully, that will change as time goes on. Chappell sees Google Sparks as giving the search engine another level of targeting. “If Google can understand your interests then they can interpret the weight of your voting abilities on given subject matters.” All of a sudden, Google knows how much a +1 from you means when you give it to a hiking site – and that it probably means more than if you give it to, say, a musical instrument store.

Blog SEO Begins on the Home Page

It's easy to get so caught up in posting fresh content for your blog that you forget to take care of one very important page: your blog's home page. You might be surprised by how much authority and how many links – and visitors – this page can attract if you optimize it properly.

Scott Cowley brought this to my attention in a post for Search Engine Journal. Many blog home pages don't have much content of their own beyond each day's new posts, which makes optimization a challenge. So what can a blogger do?

Well, let's start with the title of your blog. Cowley noted that most blogs just use some variation of “Company X Blog” as their title. That might be okay if you're focused on branding, but it's not very descriptive; in fact, it's kind of boring, which may be the last thing you want if you're trying to attract visitors from the search engines. Picking a highly-competitive title like “SEO Blog” isn't necessarily the right answer, either.

So what should you put in your title? Michael Martinez uses a very simple description of what his blog is all about in his title: SEO Theory and Analysis. You can take the same approach, but you need to do some keyword research. What topic do you want to make the focus of your blog? Do you even want to call it a “blog”? If you're writing a blog that gives step-by-step descriptions of how to code smartphone applications, for instance, you could use words like “tips,” “hacks,” “tools,” “how-tos,” “tutorials,” and more. Cowley encourages you to “Get creative with a thesaurus” to find “less-competitive, more attainable words.”

Next, plan to optimize everything on your blog's home page that you'd ordinarily optimize on every other page. This means paying attention to your blog's title tag, H1 tag, body content, and internal links pointing to the page. You might have to add a few code tweaks to pull this off, because, as Cowley observes, most blogs aren't set up to optimize the home page as you would a normal page.

The title tag and meta description should be a breeze; you can easily use keywords there. But what about the H1 tag? You'll probably need to add one above your regular posts. You can give it something very short and descriptive that won't detract from the rest of the page. Likewise, internal links shouldn't be too much of an issue. Any writer worth their salt can come up with a sensible and creative way to link an article to a blog's or site's home page, especially when they're covering the field of SEO.

The sticking point for blog home page optimization, however, is content. Most blog home pages don't feature much static content. Post pages, on the other hand, give you hundreds of words with which to work SEO magic. Regular bloggers often put their static content in an About Me page. There's nothing wrong with that, of course, but that's not where your posts will get read, and hardly anybody will link to it. You need to get some static content on your blog home page, where it will do some good.

So how do you accomplish this? Cowley notes two different techniques. One way is to build a sidebar into which you incorporate static content. If you do that, however, that content will show up on every page, not just your blog's home page. That amounts to an unacceptable dilution of your SEO effort.

So forget about the sidebar. Instead, consider adding a block of text that comes just before or just after your most recent posts on your home page. Make sure you code it so that the text shows up ONLY on your blog home page, and not on any other page of your site. Cowley observes that not many bloggers are doing this, “but it enhances the SEO in a way that an optimized title tag alone can't.”
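How you restrict that block to the home page depends on your blogging platform; WordPress themes, for instance, typically wrap such blocks in a conditional template check. As a platform-neutral sketch of the logic, here's a minimal Python example with hypothetical template pieces:

```python
def render_blog_page(posts_html, is_home, home_intro_html):
    """Assemble a blog page, including the static intro only on the home page."""
    blocks = []
    if is_home:
        blocks.append(home_intro_html)  # SEO copy appears nowhere else on the site
    blocks.append(posts_html)
    return "\n".join(blocks)

INTRO = "<div id='blog-intro'>A blog of weekly craft tutorials and how-tos.</div>"
home_page = render_blog_page("<article>Newest post</article>", True, INTRO)
archive_page = render_blog_page("<article>Newest post</article>", False, INTRO)
```

The point of the sketch is simply that the conditional lives in the template, so the intro text is emitted on exactly one URL rather than being diluted across every page.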

You can use this static content to talk about the topics you plan to cover. For example, “This blog will document my journey as I attempt to learn one new craft every week for a year and use every one of them to embellish one dress.” (Okay, I get crazy ideas sometimes). You can talk about your background, dreams, hopes, approach to your blog...anything that's relevant.

Don't go on for too long, however. You really hope that your visitors will want to read and keep up with your new posts, so the point of this static content on your blog (beyond the obvious SEO purpose) is to whet their appetite for your posts. Cowley linked to one example he described as “awkward.” I checked it; at almost 300 words, it seemed overly long and set off my “keyword stuffing” meter. But you can look at it as a starting point of sorts, upon which you can improve.

You might want to try different lengths and phrasings to see what works best. Having read Cowley's example, if I were doing this for my own blog, I'd shoot for around 200 words in two to three short paragraphs, and try to use my chosen keywords no more than twice per paragraph. (Cowley's example used its chosen keyword a minimum of 10 times, and I'm not counting all of the phrases that were clearly derivations of the keyword.) I'd keep it for at least a month or two, do some analytics, and then decide whether I want to tweak things. That's one of the truths about SEO: nothing is static forever, not even static content. Still, this is one piece of static content that should help the rest of your (dynamic) blog.
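My rule of thumb above (roughly 200 words, no more than two uses of the keyword per paragraph) is easy to check automatically. A quick sketch, with the thresholds as adjustable assumptions rather than hard rules:

```python
def check_static_content(text, keyword, max_words=250, max_per_paragraph=2):
    """Flag static intro copy that runs long or leans too hard on one keyword."""
    problems = []
    if len(text.split()) > max_words:
        problems.append("too long")
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    for i, para in enumerate(paragraphs):
        if para.lower().count(keyword.lower()) > max_per_paragraph:
            problems.append(f"keyword overused in paragraph {i + 1}")
    return problems

clean = check_static_content("A blog about SEO basics.\n\nWeekly SEO tips.", "seo")
stuffed = check_static_content("SEO SEO SEO tips for SEO.", "seo")
```

This is only a crude counter, of course; it won't catch the derivative phrases I mentioned, but it's enough to keep an intro block honest.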

List of HTTP status codes

This is a list of HyperText Transfer Protocol (HTTP) response status codes. It includes codes from IETF internet standards as well as some unstandardized RFCs, other specifications, and additional commonly used codes. The first digit of the status code specifies one of five classes of response; the bare minimum for an HTTP client is that it recognizes these five classes. Microsoft IIS may use additional decimal sub-codes to provide more specific information, but these are not listed here. The phrases used are the standard examples, but any human-readable alternative can be provided. Unless otherwise stated, the status code is part of the HTTP/1.1 standard.

The following is a list of HTTP status codes:
100— Continue
101— Switching Protocols
200— OK
201— Created
202— Accepted
203— Non-Authoritative Information
204— No Content
205— Reset Content
206— Partial Content
300— Multiple Choices
301— Moved Permanently
302— Found
304— Not Modified
305— Use Proxy
307— Temporary Redirect
400— Bad Request
401— Unauthorized
402— Payment Required
403— Forbidden
404— Not Found
405— Method Not Allowed
406— Not Acceptable
407— Proxy Authentication Required
408— Request Timeout
409— Conflict
410— Gone
414— Request-URI Too Long
500— Internal Server Error
502— Bad Gateway
503— Service Unavailable
504— Gateway Timeout
505— HTTP Version Not Supported
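Because the class is carried entirely by the first digit, a client that recognizes only the five classes can still handle codes it has never seen. A minimal Python sketch (the class names follow common usage rather than any official registry):

```python
# Map the first digit of an HTTP status code to its response class.
STATUS_CLASSES = {
    1: "Informational",
    2: "Success",
    3: "Redirection",
    4: "Client Error",
    5: "Server Error",
}

def status_class(code):
    """Return the response class for a status code in the range 100-599."""
    if not 100 <= code <= 599:
        raise ValueError(f"not a valid HTTP status code: {code}")
    return STATUS_CLASSES[code // 100]
```

For example, a client that doesn't know what 410 (Gone) means specifically can still treat it correctly as a generic client error.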

Compare Paid and Organic Search Clicks

Everyone knows that search engine optimization and search engine marketing are two different animals. Some companies even have separate teams in charge of SEO and SEM. But if you do, and your two teams don't communicate, you could be leaving money on the table.

Matt Lawson explains this point well in an article for Search Engine Land. It's not too unusual that some keywords which perform well for organic search aren't tapped into for search ads – and likewise, some keywords that get great click-through rates with AdWords ads are nowhere to be seen in stats for organic search clicks. What's going on here?

It could be that one team is thinking in a slightly different direction. It's possible that some outside event unknown to either team affected searches. Any one of a number of variables could explain the difference in keyword performance. But the point is, this kind of disparity can indicate a missed opportunity. So how can you tell if this is happening with your search engine campaigns?

You'll need to do some heavy data crunching. If your company is really big enough for two separate teams, you're probably targeting millions of keywords. It would take forever to go through every single one and compare statistics. So Lawson recommends focusing on “the high-volume and top converting search queries in each channel.” Once you've limited your universe of data to the top performing search queries for the SEO team and the SEM team, you need to look at their performance against each other.

To evaluate the performance of these keywords for SEO and SEM, Lawson recommends a metric he calls “Paid Click Percentage.” To get this number for each of your keywords, “match raw query search terms across paid and organic search results, sum the total clicks, and calculate the paid clicks as a percentage of that total,” Lawson explains.

For example, let's take the search term “suede jacket.” Say you're running an AdWords campaign that uses that keyword, and you get 1,000 click-throughs in a month on your ads that utilize it. You also get about 200 click-throughs in the same time frame from searchers who use that term and reach your site from the organic results rather than an ad. Add the two together and you get 1,200 total clicks, of which the 1,000 paid clicks make up a little more than 83 percent.

Using the paid click percentage, you'll be able to tell at a glance which keywords are performing well in AdWords, but not attracting organic clicks – and vice versa. Ideally, if you have the data in a form that can be manipulated, you should start by filtering for a paid click percentage higher than a certain number. Lawson's example uses 75 percent as the cutoff.
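Lawson's metric is simple to compute once you've matched queries across channels. Here's a sketch in Python using the hypothetical “suede jacket” numbers from above and Lawson's 75 percent cutoff (the second query's figures are invented for illustration):

```python
def paid_click_percentage(paid_clicks, organic_clicks):
    """Paid clicks as a percentage of total (paid + organic) clicks."""
    total = paid_clicks + organic_clicks
    if total == 0:
        return None  # no click data for this query
    return 100.0 * paid_clicks / total

# query -> (paid clicks, organic clicks) over the same month
queries = {
    "suede jacket": (1000, 200),
    "acme marathon jacket": (0, 450),
}

# Flag queries where paid search dominates: candidates for the SEO team.
CUTOFF = 75.0
flagged = [q for q, (p, o) in queries.items()
           if (pct := paid_click_percentage(p, o)) is not None and pct > CUTOFF]
```

Here “suede jacket” lands at roughly 83 percent and gets flagged, while “acme marathon jacket” at 0 percent would instead be a question for the paid search team.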

Now here's an interesting point worth considering: most searchers still lean a little more toward organic results than ads. What does this mean? If you find that a particular keyword shows up with a high paid click percentage, that means it's probably nowhere in the organic search results. At the very least, it probably isn't on the first page; that's a near-certainty, in fact, if you're getting no organic clicks on the term. This tells your organic search team that there are terms for which they might consider adding content or otherwise optimizing, so that your website gets a stronger presence for those keywords in the organic results.

This can also work the other way, however. Lawson gave an example in which the term “acme marathon jacket” received hundreds of clicks from organic search, but none from paid search ads. What was going on here? Your first thought might be that the SEM team hadn't considered targeting that keyword with ads, but it's potentially more complicated than that. As Lawson notes, “the paid search campaign might be missing the keyword 'acme marathon jacket,' the keyword bid might be below the minimum first page bid, or the keyword may have a low quality score.” Whatever the case, the SEM team will need to figure out what's going on and correct the situation.

Now that you see how crunching the data and getting a meeting of minds between your SEO and SEM teams can help you spot holes in either campaign, it's time for the next steps. These involve action plans on the parts of both teams to plug those holes, and setting up the next meeting between the two teams. You can't hold this meeting as a one-time thing, any more than you can do SEO just once and forget about it. You need to get these two teams communicating and working together to get the most out of both your SEO and SEM campaigns. Good luck!

Google Takes Social Search Worldwide

Google announced that it has expanded the availability of its Social Search. Launched officially in October 2009 in just the United States, Social Search is now available to searchers worldwide. The search giant noted that it will start making the feature available in 19 languages, with more to come.

So what exactly is Social Search? It's one of Google's answers to Facebook – or at least a way to introduce social factors into search. It only works when you're actually signed in to your Google account. Basically, it's a way to make online content from your friends more visible to you in the search results.

The example Google likes to use to show how it works involves a searcher planning his next vacation with Google searches. He decides he wants to take a camping trip, and when he searches on camping trips, he finds a tweet partway down the page from a friend of his who just came back from Yosemite. He knows it's from his friend, because he can see a thumbnail image next to a sentence under the search listing that identifies who sent it.

So our vacation planner's next search is Yosemite. This time he's looking for campsites, and he finds a link that takes him to a friend's Flickr account, where she's posted images from the place she camped when she went to Yosemite last year. He decides that it looks like a great place to camp...and coincidentally remembers that he needs to get a new camera for the trip.

So he searches for a good camera for outdoor photography, and finds a Blogger blog result. He doesn't recognize the name at first, but hovering over the person's name reveals that he's been following her Twitter feed; she's a professional photographer. So he visits her blog to find out which camera she recommends.

Google notes that if you're not seeing very many Social Search results, you can expand and improve them in a number of ways. You can create a Google profile and connect your other public profiles from social websites, such as Twitter and Flickr, to your Google account and profile. You can also add links to your own public content, such as your Blogger blog. Subscribing to interesting content and following interesting people in Google Reader will also enrich your Social Search experience.

Social Search results are unique to the searcher, because every searcher has a slightly different constellation of contacts. These results may rank anywhere on the page; Google places them according to their relevance to your search. As Google explains in a blog entry, “Social search results are only visible to you and only appear when you choose to log in to your Google Account. If you’re signed in, Google makes a best guess about whose public content you may want to see in your results, including people from your Google chat buddy list, your Google Contacts, the people you're following in Google Reader and Buzz, and the networks you’ve linked from your Google profile or Google Account. For public networks like Twitter, Google finds your friends and sees who they’re publicly connected to as well.”

Google News Lets Users Drop Blogs

If your blog depends on traffic from Google News, you need to know about some changes the search engine made to its popular service. With the new options it gives to Google News readers, you might experience a serious decrease in visitors.

Danny Sullivan noted the change over at Search Engine Land, complete with pictures. Google News users can go to their settings page and control the quantity of results they get from certain sources. These sources are Blogs and Press Releases.

By default, Google set everyone using the service to see a “Normal” quantity of results from blogs and press releases. But users can choose to see “none,” “fewer,” or “more” results from each of those sources. What does this mean for publishers?

That's a good question, but it's difficult to answer without asking a few other questions first. The first one that comes to mind is, “what is a blog?” Sullivan pointed out that Google started classifying some news sources as blogs more than a year and a half ago. But what rules does it follow to determine that a particular source is a blog?

If the rules for Google News and Google Blog Search are consistent with each other, then anything with an RSS feed would count as a blog. That can't be right, though, because lots of newspapers have taken to using RSS feeds to get the word out about new articles – and not just opinion pieces, either. The New York Times boasts an RSS feed, and permits comments on many of its items. That doesn't make the site a blog, though some of its pieces do fall under that classification (and Google designates them as such).

There's a larger concern that goes with the potential for misclassification – one that Sullivan implies but never explicitly states. It's the assumption that Google News readers, given a choice, will opt to see fewer news sources that are blogs and press releases. They may even opt to see no blogs or press releases. While that may not turn out to be true, it's a valid concern. When I want the basic facts, I read news; when I want analysis and entertainment, I read blogs. If I'm trying to catch up with what's happening in the world, seeing lots of blogs can get in the way.

On the other hand, this doesn't mean that I would actively opt to see fewer blogs when I'm browsing Google News. Google actually labels blogs as such in Google News results, with the word “blog” in parentheses next to the name of the story's source. Both of these appear discreetly under the headline. That lets me decide on the spot which way I want to experience a story: as if it came from Dan Rather, or Jon Stewart.

Despite the stories we've all seen about some blogs getting to the heart of a news item or scandal that the regular press declined to report on (or got wrong), and the praise sometimes heaped on “citizen journalists,” many bloggers still fight for respect. Really, that's as it should be; not all bloggers hold themselves to the high standards of journalists (not all journalists do, either, but I digress). If Google calls you a blog on its Google News service, then, does that make you a second-class news source in the eyes of your potential audience? Worse, with this new option, will they not even see your content at all when they read Google News?

That's certainly possible. Fortunately, there may be something you can do about it if you think your site has been labeled a blog in error. Sullivan pointed to a form that publishers can use to report an issue with how the search engine has classified their content. If you're not happy with how Google views your content, it's certainly worth a try. Sullivan noted that Google has long classified his site as a blog, but this latest move makes him not want to be painted with that brush any longer, “especially when we are arguably also a news source.”

Whatever you decide to do, if you're a publisher, you may want to pay closer attention to your traffic over the next few months. Watch both the level of traffic and from where it's coming. If Google labels you a blog, and you see a decline in traffic from Google News, the new option could be playing a role.

What PPC and SEO Have in Common

If you've been doing website promotion for a while, you know that organic search and pay-per-click search ads are two different things. The techniques you use to get to the top of organic search (SEO) are not the same ones you use to get your PPC ad displayed at the top of the results. Or are they?

Mike Moran, writing for Search Engine Guide, noted that the differences used to be much greater than they are today. I don't have his depth of experience; he's been working on search engine technology since the 1980s, while I've only been covering Internet-related technology since 1997. Still, that's long enough to have seen – and reported on – a number of major changes, to say nothing of the gradual evolution of pay-per-click ads.

Like Moran, I was around when the first pay-per-click search ads came out. Not the first ones from Google, but the first ones, period. They were created by a now-defunct search engine whose marketing model was based entirely around paid search; in other words, it offered no organic results. At the time, most observers thought it was crazy. Clearly, results that had been bought and paid for by advertisers would be inferior to those that had to be earned, and no searcher would want that when better alternatives sat a click or two away!

But then a funny thing happened. Yahoo bought the paid search engine, and built a marketing department around it. Google saw what was happening, and built their own version of pay-per-click marketing, called AdWords (and AdSense, too, but we're focusing on advertisers here, not publishers). They visually separated the paid listings from the organic ones, so all searchers would know which ones were which. And at first, the paid search results fulfilled expectations – that is, they appeared to be less relevant to searches than the organic listings. Over time, though, that changed.

How did that happen? Well, companies seeking to promote their websites learned what search engines were looking for to put them at the top of the results. Early SEO practices gamed the system, almost to the point that paid results might be more relevant. Think about it: if an advertiser is paying for every visitor who clicks on their ad, they won't want that ad to show unless the traffic clicking on it is likely to convert. That means whatever the site is offering had better be relevant to the search term and the ad.

That's only one element at work, however. Google changes the rules regularly by tweaking its algorithm, so the same old SEO tricks don't keep working. But it also changed the rules for pay-per-click ads. Oh, you still need to bid on what you're willing to pay for each click. But now Google looks at how often your ad is clicked, and if it isn't clicked often enough, it might not place your ad in the number one slot – even if you bid high enough. Google also looks at the landing page for your ad, to judge its relevance. As Moran observes, what was on this page once made no difference; now it matters.
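Moran's point that a high bid alone no longer guarantees the top slot can be illustrated with a toy model. Google's actual auction is more complex, but a widely cited simplification is that ad rank is roughly the bid weighted by a quality score, which folds in expected click-through rate and landing page relevance. All numbers below are invented:

```python
def ad_rank(max_bid, quality_score):
    """Simplified ad rank: bid weighted by ad and landing-page quality."""
    return max_bid * quality_score

# (advertiser, max bid in dollars, quality score on a 1-10 scale)
ads = [
    ("deep pockets, irrelevant landing page", 5.00, 3),
    ("modest bid, highly relevant landing page", 2.00, 9),
]

ranked = sorted(ads, key=lambda ad: ad_rank(ad[1], ad[2]), reverse=True)
winner = ranked[0][0]
```

In this sketch the relevant advertiser outranks the bigger spender (18 versus 15), which is exactly the dynamic that pushes paid search toward the same relevance work as SEO.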

What does this mean? It means that promoting your site with PPC has gotten very similar to doing it with SEO. Consider this: if you want your site to appear well in the organic search results, and you're using white hat practices, then you're trying to create the right content to appear for the right keywords. You want searchers to believe, when they click through to your site, that they've landed in the right place to solve whatever problem inspired them to do the search.

Thanks to the series of changes Google has made to pay-per-click search ads in recent years, you need to do the same thing with your PPC campaign. You need to make sure that your ad and your landing page match the keywords you're aiming for, regardless of the bid you place. In fact, matching very closely could actually save you some money. As Moran notes, “If you've figured out how to put the searcher first in organic search, you can apply that same lesson for paid search. That's far more likely to pay off than increasing your bids.”

How Navigation Labels Improve SEO

If you think of your website as a map, navigation labels name the streets, features, shopping districts, and even major buildings like libraries and community centers. Does that description sound too expansive for something so humble? Maybe you need to rethink your definition of navigation labels.

As Shari Thurow explains on Search Engine Land, most people think of the text placed on a navigation button as a navigation label. Keep in mind, however, that most site visitors and searchers use multiple cues to orient themselves, and to make sure they ended up where they intended to go. Thurow's list of navigation labels includes the common definition I described, plus titles; headings and subheadings; breadcrumbs; embedded text links (in context); and URLs.

What is the point of expanding this definition? It gets you thinking about all of these different page elements at the same time. If you can see what they have in common, and think of them as belonging to the same group, you'll give them a more consistent structure.

Consistency helps anyone trying to navigate anywhere; it creates and fulfills expectations, and enables visitors to predict what they'll find when they click through or read something just by looking at the navigation label. Or as Thurow puts it, “When navigation labels contain keywords and are used consistently throughout a website, they effectively communicate aboutness of both page and site content, as well as provide a clear information scent to content that is not available on the web page.”

So now you understand how treating these very important page elements as aids to navigation can make your human visitors happy. They can also make the search engines happy. When spiders crawl your web pages, if they see a consistent structure to your navigation labels, with a predictable usage of keywords, you've made it easier for them to figure out your site's relevant topics. To put it bluntly, using navigation labels correctly can help your site's SEO.

The key point, however, is to use navigation labels correctly. This goes beyond simply putting keywords in your URLs. Fortunately, there are a number of prevailing conventions on the Internet for structuring your navigation labels.

You've probably noticed that nearly every business website online includes certain kinds of pages, such as an About Us page. The usual URL for such a page follows a predictable convention, ending in something like /about-us. Likewise, a page that shows visitors how to contact the business typically uses a URL ending in something like /contact-us.

Nesting your pages can also be pretty straightforward. Say you include press releases on your site. You could set up a category page for press releases, and under that category, nest URLs by year – for instance, a URL linking to a press release published in 2009 that details your company's release of an application that always tells the user exactly where they are.

As Thurow explains, “URL naming conventions should at least be partially based on how people locate, discover, and label desired content.” This can be difficult if you're using a CMS that doesn't let you rewrite or customize your URLs easily. In fact, I'd suggest you avoid those; if you don't, you'll be stuck making awful, expensive workarounds.

Sometimes building a workaround, or a better URL, is necessary. As you've probably noticed from my description, good websites feature a hierarchical structure. Take the fictitious press release URL from the earlier example. It indicates that the site has a category page named “Press Releases” with a subcategory page for the year 2009, on which there is a press release about the company releasing a new application. That's not as bad as some URLs, but it's already getting a little long.

Now take a look at an example Thurow gives of a URL one might find on a real estate website. It's easy enough to break down if you're a human searcher. The content concerns vacation rentals in the United States, in the state and city of New York, in the Chelsea neighborhood – and it specifically covers apartments in that area, with each of those levels reflected in the URL path. But the URL is 132 characters long! What human is going to remember that? And what spider is going to crawl through all those levels to find that page?

Thurow suggests a different, flatter URL structure that keeps only the most important keywords. It's much shorter, easier to remember, and better for both your human visitors and the search engines – and keeping those keywords also helps your site's SEO. The only problem is that it does not reflect the site's primary hierarchical structure – but in some cases, where doing so would lead to particularly long URLs, this may not be avoidable. “URL names do not have to be long and unwieldy in order for both searchers and search engines to comprehend them,” Thurow notes.
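The tradeoff Thurow describes is easy to quantify. This sketch builds both styles of URL from the same real-estate example; the domain and slugs here are invented for illustration:

```python
def nested_url(domain, segments):
    """Mirror the full site hierarchy in the URL path."""
    return f"https://{domain}/" + "/".join(segments)

def flat_url(domain, keywords):
    """Keep only the most important keywords in a single slug."""
    return f"https://{domain}/" + "-".join(keywords)

deep = nested_url("example-rentals.com",
                  ["vacation-rentals", "united-states", "new-york",
                   "new-york-city", "chelsea", "apartments"])
short = flat_url("example-rentals.com", ["chelsea", "apartments"])
```

The flattened form drops the intermediate levels but preserves the keywords a searcher would actually type, which is the balance Thurow is arguing for.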

Sometimes, with a big site, you simply need to balance issues of length, navigation, and keywords. Take a recent URL from Microsoft's website as an example. It tells you that it's from their section aimed at the press; that it's a press release; the year and date; and, to some extent, what it concerns (in this case, a webcast relating to Microsoft's acquisition of Skype).

Does the lack of certain keywords in the URL mean Microsoft's SEO will suffer? Not likely. The item's actual headline reads “Microsoft to Host Financial Community Webcast to Recap Skype Acquisition.” This is another reason you should consider all of your navigation labels at the same time; doing so lets you strike a balance, so if you need to pull back on keywords in one, you can supply them in another.

Now that you've seen how navigation labels can work, you may look at your website with new eyes. Move slowly; if your website is performing well, you don't want to make changes that might have a negative effect on your standing in the SERPs. But if it's already difficult to manage, you may need to build a structure that's easier to navigate, easier to maintain, and easier to rank.

Choosing Keywords with Google Wonder Wheel

Google has so many great tools you can use for choosing keywords that it seems almost impossible to know about all of them. Recently, I learned about one called Wonder Wheel. It's intended to give a searcher more options to consider, but it can be used for other purposes as well.

To use Google Wonder Wheel, start by putting a fairly general search term into the Google search engine, and hit “search.” In addition to the search results, you'll see a column on the left hand side. Click on “Search Tools” or “More search tools,” depending on how often you use search tools. You'll see one that says “Wonder wheel.” That's the one we want. Click on that, and watch a bit of magic.

For purposes of example, we'll start with a search on “square foot gardening.” Clicking on Wonder Wheel after doing that search divides the main search result area in half. On the right side, you'll see a column of search results not too dissimilar to the list that took up most of the screen before. But you'll notice a big change on the left.

What happened? Wonder Wheel took “square foot gardening” and put it in the middle of a shaded circle. Spokes radiate out from this circle like rays out from a sun. Each spoke leads to a related key phrase. This particular search yields eight spokes:

Container gardening
Lasagna gardening
Square foot gardening forum
Square foot gardening spacing
Intensive gardening
Square foot gardening vermiculite
Square foot gardening tomatoes
Square foot gardening layout

Each of these terms is related to the search phrase “square foot gardening.” I've just gotten interested in square foot gardening myself, so I recognize some of these terms. But if I were starting a blog on my square foot gardening adventures (or misadventures, depending on how next season turns out), I might not have thought to describe it as “container gardening” or “intensive gardening.” And I know I never would have thought of “lasagna gardening”!

In fact, the term “lasagna gardening” is a little too interesting to leave alone, so let's investigate it a little further. Every term on the Wonder Wheel can be clicked. Clicking “lasagna gardening” creates another wheel. The original wheel with “square foot gardening” at the center glides below it, while remaining connected to the new wheel. The old wheel takes on a lighter color, but you can still click on every keyword in it.

And what do we have in the new keyword wheel – excuse me, Wonder wheel? Well, “lasagna gardening” sits at the center, and eight new keywords encircle it in the same way eight key phrases encircled “square foot gardening” earlier. I can click on every single one of them. There are only two major differences (aside from the new keywords) from when the original wheel dominated the left hand side of the screen. First, there's a line connecting the new wheel to the old wheel. And second, as you would expect, the right hand column, which lists the search results, has changed to list results for “lasagna gardening.”

And here's a nice touch to satisfy my curiosity: the very first result includes a one-sentence definition for the phrase “lasagna gardening,” so I'm no longer in the dark about what it is. Now I know how it relates to “square foot gardening,” and can fit that bit of knowledge into the larger picture of what I know about the more general subject.

I can go even deeper if I want. Clicking on “lasagna gardening plants” gives me a third Wonder wheel. The first one, with “square foot gardening” in the center, remains visible only as a circle – though interestingly, it shows up as part of a keyword in the new circle: “square foot gardening plants.” And once again, the search results on the right hand side change. If I want to go back to my original Wonder wheel, I need only click that circle on the bottom, and I'm right back where I started.

For just a few minutes of effort, I discovered about two dozen key phrases that are related to “square foot gardening.” A number of these are terms I might not have come up with on my own. Many of them don't even contain the phrase “square foot gardening,” but clearly deal with related, relevant topics. I'm sure you can see how this tool can help you come up with new keywords to aim for on your own or your clients' websites. Good luck!

Canonical Issues & Duplicate Content

How specifying a canonical can help in solving your duplicate content issues

We are going to talk about www vs. non-www URLs.

An example of this duplicate content, from a domain that should know better, is the same page being served at both the www and the non-www address. This is just to give you an example of what this is about.

If you don't see anything wrong with that, you need to read this article.

Server-side fixes


What's happening here is that the search engine bots see duplicate content, because the same page can be reached at more than one URL: the www version, the non-www version, and so on.

In almost all websites these URLs serve the same content, but technically each is a different address and could be a different page. Adding to the confusion, the bare domain and the /index.html version of the home page should also return the same page. So the search engine has to decide which is the best page to return, picking what it believes is the best URL when there are several choices, and deciding which pages are duplicate content.

To make matters even worse, when other sites link to your domain with more than one of these URLs, it splits your PageRank.

Search engines cannot simply disregard any of the URLs, because some domains do serve different content on them. Some people believe this can be sorted out in Google Webmaster Tools, but that is not really true: at the time of writing, Google Webmaster Tools only asks how you would like your URLs displayed in the SERPs (with the www or without), not how it should index your URLs in its database.

It’s hard to believe the number of sites this affects

To fix this on an Apache web server, you need a .htaccess file you can edit on your server. Just copy the text below to the top of your .htaccess file, replace yourdomain with your own domain, and change the .com if necessary.

RewriteEngine on
Options +FollowSymlinks

# Redirect the non-www domain to the www version
RewriteCond %{HTTP_HOST} ^yourdomain\.com [NC]
RewriteRule ^(.*)$ http://www.yourdomain.com/$1 [R=301,L]

# Redirect requests for /index.html to the root URL
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html\ HTTP/
RewriteRule ^index\.html$ http://www.yourdomain.com/ [R=301,L]

And that's it: all the search engine bots and incoming links will be redirected to the one domain address, removing any duplicate content issues and boosting your PageRank by consolidating all incoming links on a single URL.

Google has also come out with a tag-based fix for this: the canonical link element.
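It is placed in the head of each duplicate page and points search engines at the preferred URL (the domain below is a placeholder for your own):

```html
<!-- In the <head> of every variant of the page -->
<link rel="canonical" href="http://www.yourdomain.com/" />
```

This doesn't redirect visitors the way the .htaccess fix does; it just tells the search engines which URL should get the credit.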

What is Google Dance

The name "Google Dance" is often used to describe the index update of the Google search engine. Google's index update occurs on average once per month. It can be identified by significant movement in search results, and especially by Google's cache of all indexed pages reflecting the status of Google's last spidering. But the update does not proceed as a switch from one index to another at a single point in time. In fact, it takes several days to complete the index update. During this period, the old and the new index alternate on www.google.com. At an early stage, the results from the new index occur sporadically; later on, they appear more frequently. Google dances.

Technical Background on Google

The Google search engine pulls its results from more than 10,000 servers, which are simple Linux PCs used by Google for reasons of cost. Naturally, an index update cannot be performed on all those servers at the same time; one server after the other has to be updated with the new index.

Many webmasters think that, during the Google Dance, Google can in some way control whether a server with the new index or a server with the old index responds to a search query. But since Google's index is an inverted index, this would be very complicated. As we will show below, there is no such control within the system. In fact, the reason for the Google Dance is Google's way of using the Domain Name System (DNS).

Google Dance and DNS

Not only is Google's index spread over more than 10,000 servers; these servers are also, as of now, placed in eight different data centers. These data centers are mainly located in the US (e.g. Santa Clara, California, and Herndon, Virginia), though in June 2002 Google's first European data center went online in Zurich, Switzerland. Very likely there are more data centers to come, perhaps spread over the whole world. However, in January and April 2003 Google put two more data centers on stream which are again located in the US.

In order to direct traffic to all these data centers, Google could theoretically record all queries centrally and then distribute them to the data centers. But this would obviously be inefficient. In fact, each data center has its own IP address (numerical address on the internet), and the way these IP addresses are accessed is managed by the Domain Name System.

Basically, the DNS works like this: on the Internet, data transfers always take place between IP addresses. The information about which domain resolves to which IP address is provided by the name servers of the DNS. When a user enters a domain into his browser, a locally configured name server gets him the IP address for that domain by contacting the name server which is responsible for that domain. (The DNS is structured hierarchically; illustrating the whole process would go beyond the scope of this paper.) The IP address is then cached by the name server, so that it is not necessary to contact the responsible name server each time a connection to the domain is made.

The records for a domain at the responsible name server determine how long the record may be cached by a caching name server. This is the Time To Live (TTL) of the domain. As soon as the TTL expires, the caching name server has to fetch the record for the domain again from the responsible name server. Quite often, the TTL is set to one or more days. In contrast, the Time To Live of Google's domain is only five minutes. So a name server may cache Google's IP address for only five minutes, and then has to look up the IP address again.
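As an illustration (the names, addresses, and values below are made up), a zone-file record with a typical day-long TTL next to one with a Google-style five-minute TTL would look like this:

```
example.com.      86400  IN  A  192.0.2.10    ; TTL = 86400 s (one day)
www.example.org.    300  IN  A  198.51.100.7  ; TTL = 300 s (five minutes)
```

The second number is the TTL in seconds: the record with the short TTL forces caching name servers to ask again every five minutes.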

Each time Google's name server is contacted, it sends back the IP address of only one data center. In this way, Google queries are directed to different data centers over time by changing DNS records. On the one hand, the DNS records may be based on the load of the individual data centers; Google would thus conduct a simple form of load balancing through its use of the DNS. On the other hand, the geographical location of a caching name server may influence how often it receives each data center's IP address, so that the distance for data transmissions can be reduced.

How data centers, DNS, and the Google Dance are related is easily answered. During the Google Dance, the data centers do not receive the new index at the same time; the new index is transferred to one data center after the other. When a user queries Google during the Google Dance, he may get results from a data center which still has the old index at one point in time, and from a data center which has the new index a few minutes later. From the user's perspective, the index update took place within minutes. But of course this procedure may also reverse, so that Google seemingly switches between the old and the new index.

The Google Dance Test Domains www2 & www3

The beginning of a Google Dance can always be watched at the test domains www2.google.com and www3.google.com. Those domains normally have stable DNS records which make them resolve to only one (often the same) IP address. Before the Google Dance begins, at least one of the test domains is assigned the IP address of the data center that receives the new index first.

Building up a completely new index once per month can cause quite some trouble. After all, Google has to spider some billion documents and then process many terabytes of data. Therefore, testing the new index is inevitable. Of course, the folks at Google don't need the test domains themselves; most certainly they have many options to check a new index internally, but they do not have a lot of time to conduct the tests.

So, the reason for having www2 and www3 is rather to show the new index to webmasters who are interested in their upcoming rankings. Many of these webmasters discuss the new index at Google forums out on the web, and these discussions can be observed by Google employees. At that time, the general public cannot see the new index yet, because the DNS records for www.google.com normally do not point to the IP address of the data center that is updated first when the update begins.

As soon as Google's test community of forum members does not find any severe malfunctions caused by the new index, Google's DNS records are ready to make www.google.com resolve to the data center that is updated first. This is when the Google Dance begins. But if severe malfunctions become obvious during this test phase, there is still the possibility of cancelling the update at the other data centers. The www.google.com domain would not resolve to the data center with the flawed index, and the general public would not take any notice of it. In this case, the index could be rebuilt, or the web could be spidered again.

So, the search results which are to be seen on www2.google.com and www3.google.com will always appear on www.google.com later on, as long as there is a regular index update. However, there may be minor fluctuations. On the one hand, the index at one data center never absolutely equals the index at another data center; we can easily check this by comparing the number of results for the same query at different data centers, which often differ from each other. On the other hand, it is often assumed that the iterative PageRank calculation is not yet finished when the Google Dance begins, so that preliminary values influence rankings at that point in time.


Important HTML Tags

It is necessary for you to highlight certain parts of your website that you want your readers to look at. There are several tags in HTML which allow you to do so – for instance, the header tags [h1] [h2] [h3], bold [strong], and italic [em]. The text inside your header tags (e.g. [h1]) is given very high importance by the search engines. Usually you can use them to define the page/post titles or the important sections of your website.

Header Elements:

Header 1: Header 1 should be used to define the most important section of your page. Usually Header 1 is used for the site's title or header text.
Header 2 & 3: Headers 2 and 3 can be used for page/post titles or important sections of your pages. Separating your content with headers is a good practice, as it makes your site more readable and easier to navigate.

Text Styles:

Bold: You can bold (e.g. [strong]) certain words which are of high importance. Sometimes it's good to bold your keywords where appropriate; however, overdoing this may get you penalized.
Italic: You can use the [em] tag to emphasize certain words, which will appear in italic.
Quote: This is very useful when you are quoting someone.
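A minimal sketch of how these elements might fit together on a page (the text itself is illustrative):

```html
<h1>Square Foot Gardening Guide</h1>  <!-- most important heading: the site or page title -->
<h2>Choosing a Location</h2>          <!-- section heading -->
<p>Pick a spot with <strong>six hours of sun</strong> and <em>good drainage</em>.</p>
<blockquote>Grow more food in less space.</blockquote>  <!-- quoting someone -->
```

Note how the heading levels mirror the page's structure: one [h1] for the title, [h2] for each major section.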

Meta Tags

Way, way back, when that wall was still upright, search engine algorithms were so dumb they couldn't work out what a page was about just from its content. So some bright spark had the ingenious idea of creating a set of tags (meta tags) that conveyed information about a page's content to the search engines.

Great idea, except there was nothing to stop a webmaster stuffing or spamming their meta tags with irrelevant but very high-traffic keywords and keyword phrases. Which of course they did, with enthusiasm: you would find adult sites using words like Disney and Pokemon in their Keywords meta tag just for the traffic!

Today the vast majority of meta tags are worthless, and those that are still considered by search engines aren't worth that much. For example, Google confers no ranking benefit for any meta tags, so if you expect a high Google ranking from perfectly optimised keywords in your meta tags, don't hold your breath.
Which Meta Tags Should You Use?

For Google, adding the Description meta tag won't result in a boost in the Search Engine Results Pages (SERPs), but the description might be used as the description for your SERP listings in Google. So though you won't get a ranking boost, if you write an interesting Description meta tag and Google uses it (not guaranteed), you might get a higher click-through rate compared to a random snippet of text from your pages. All other meta tags (including the Keywords meta tag) are either completely ignored or won't result in a SERPs boost.

Yahoo says it uses the Keywords meta tag when it ranks a page. So it makes sense to add one for Yahoo and any other minor search engines that still use it. There are also directories and other websites that automatically take this information to create a listing for your site. Don't fret over it, though: add the main phrase for that page to the Keywords meta tag and a user-friendly description of the page to the Description meta tag, and forget about it.
Example Meta Tags

Below you will find an example set of meta tags. This is for a page you want fully indexed in all search engines.
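A representative set for such a page might look like this (the DOCTYPE shown is one common choice, and the description and keywords values are illustrative):

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<title>SEO Tutorial – Meta Tags Optimization</title>
<meta name="description" content="Meta tags optimization tutorial: which meta tags still matter for SEO and how to write them." />
<meta name="keywords" content="meta tags optimization, meta tags, seo tutorial" />
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
```

Each piece is discussed in turn below.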



The DOCTYPE is not a meta tag, and it's not essential to add it to a page for good search engine placement, but if you want a page to validate in an HTML validator you'll need to add the right one.

<title>SEO Tutorial – Meta Tags Optimization</title>
Again, this isn't a meta tag, but it's sometimes referred to as one by those who don't fully understand meta tags. The title element is very, very important to a page's optimisation, which is why we have an entire page dedicated to Title Optimization. The title should include the most important phrase for that page, and possibly one or two highly relevant keywords to create related phrases. The one above helps several important phrases, including Meta Tags Optimization, Meta Tags, Meta Tags Tutorial, SEO Tutorial, SEO Meta Tags, etc. Don't go over the top with adding lots of keywords; keep it short and sweet so each word gets a reasonable boost, and don't forget potential visitors (they have to read it).

As covered earlier, in Google the contents of the description meta tag will not have an impact on a page's search engine rankings, but may be used as the description in the search results. So be descriptive: think about what a potential visitor might click on, not keyword stuffing.

The Keywords meta tag is of no value to Google, and probably of little value to other major search engines. The easiest way to fill this meta tag is by pasting the same contents as the TITLE, minus anything you added for visitors to read. In the example above we removed “SEO Tutorial – ” because that is there for visitors who read the TITLE element as part of a search engine listing.
The Character Set and Links to External Files

These are not meta tags and have no impact on a page's search engine placement. The character set is used by browsers so that the right set of characters is used to display your page; don't add one and the browser will use its default (it will guess), which might mean your page doesn't look right to a visitor. External style sheets (CSS files) and external JavaScript (JS files) are also referenced here. Again, no impact on SEO, but if you can push some JavaScript or markup off your page, you should: it saves bandwidth and means your pages load faster.
Robots Meta Tag

There are dozens of other meta tags, but hardly any of them are of any use for improving search engine rankings. The most important one you may need at some point is the Robots meta tag, which looks like this:
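For instance, a tag that blocks both indexing and link-following (matching the shopping-basket example discussed below) would be:

```html
<meta name="robots" content="noindex,nofollow" />
```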

It can be used to prevent (but not encourage) search engine spiders from indexing individual web pages. We pasted the above piece of code from a page on a lingerie shop. The page is part of the shopping basket, and we don't want visitors to enter the site via these pages (the site wouldn't work correctly). By including it, search engine spiders won't index this page or follow links from it. Be aware that bad spiders (ones that harvest email addresses, etc.) won't adhere to this tag, so don't expect too much from it. The robots meta tag can be mimicked via a robots.txt file, which we'll deal with on another SEO Tutorial page.

Here’s what you can put in the robots meta tag-

index,follow: Index the page and follow all links from it. If this is what you want, don't use the robots meta tag at all, since search engine spiders do this by default anyway.

noindex,follow: Don't index (cache) this page, but do follow the links. Some webmasters using black hat SEO techniques use this to try to hide their shady techniques (i.e. cloaking)!

index,nofollow: Index this page, but don't follow the links.

noindex,nofollow: Don't index the page and don't follow the links. Use this on pages you don't want the search engines to have anything to do with.
