Friday, November 8, 2013

Google Webmaster Tools Adds Manual Action Penalty: Image Mismatch

Google rolled out the manual action viewer in Google Webmaster Tools this past summer to let site owners know when a manual penalty had been taken against their site. This week, reports of a new type of manual action alert have surfaced in the viewer – “image mismatch.”
From Google:
If you see this message on the Manual Actions page, it means that some of your site’s images may be displaying differently on Google’s search results pages than they are when viewed on your site.
As a result, Google has applied a manual action to the affected portions of your site, which will affect how your site’s images are displayed in Google. Actions that affect your whole site are listed under Site-wide matches. Actions that affect only part of your site are listed under Partial matches.
If you see this message, Google advises you to ensure that your site displays exactly the same images to users whether viewed directly on your site or within Google image search results. Google said this behavior may be caused by “anti-hotlinking” tools, and that may require looking through your site’s code on the server.
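If you suspect anti-hotlinking is the cause, one quick sanity check is to request an image twice, once with no Referer header and once as if it were embedded on an external page, then compare what comes back. Here's a minimal sketch in Python (the image URL is a placeholder, and requests is a third-party library you'd need to install):

    import requests

    IMAGE_URL = "http://www.example.com/images/product.jpg"  # placeholder

    # Fetch the image as a direct visitor would (no Referer header).
    direct = requests.get(IMAGE_URL, timeout=10)

    # Fetch it again as if it were embedded on an external page.
    hotlinked = requests.get(
        IMAGE_URL,
        headers={"Referer": "https://www.google.com/"},
        timeout=10,
    )

    # Anti-hotlinking tools typically block the request or swap in a
    # different image when the Referer is external, which is exactly the
    # kind of mismatch Google flags.
    if (direct.status_code != hotlinked.status_code
            or direct.content != hotlinked.content):
        print("Possible hotlink protection: responses differ by referrer")
    else:
        print("Image served identically regardless of referrer")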
After you’ve remedied the problem, Google says to submit your site for reconsideration, and to be patient.
“Watch for a message in your Webmaster Tools account — we’ll let you know when we’ve reviewed your site. If we determine your site is no longer in violation of our guidelines, we’ll revoke the manual action,” Google said in its help files. 
Manual actions are penalties resulting from a human reviewer at Google who has determined a website violates Google's guidelines, and includes common spammy practices like user-generated spam, hidden text, unnatural links, and so on. 
If your site is on the up and up, you’ll see a notice in GWT's manual action viewer that no manual webspam actions were found … and that’s a good thing.

Thursday, October 31, 2013

What Are Wiki links?

What Are Wiki links? - Wiki links are links from wiki-style websites such as Wikipedia. A wiki is a kind of website maintained by a community, where everyone has the ability to add or edit content. Wiki sites are often run by universities, which is why they are useful for us:
  • They are powerful backlinks in their own right, and
  • .EDU wiki links in particular carry serious weight in the eyes of search engines.
What surprises me is that I rarely see anyone mention this method on forums, even though almost all top SEO experts, myself included, have been using it for a long time. It’s still effective even after the Panda and Penguin updates. They don’t talk openly about this method because they want it to stay as effective as possible. So please don’t abuse the method you’re about to learn.
Wikipedia is the best-known wiki out there, so whenever wiki links come up, many people’s plans start and stop with that one site. I personally had plenty of links on Wikipedia, but it has become near impossible to place a link there and keep it for long, so nowadays I don’t even bother.
What you need to realize is that there are hundreds, if not thousands, of high PageRank, highly trusted wikis out there (including .EDU and .GOV wikis) that are a lot easier to get links from and more powerful than a single Wikipedia link.
WIKI FOOTPRINTS

Now you just need to copy and paste any of the footprints below into a Google search. Make sure you copy them exactly as written:
  • allinurl:”.edu/mediawiki/index.php”
  • allinurl:”.gov/mediawiki/index.php”
  • allinurl:”.com/mediawiki/index.php”
  • allinurl:”.net/mediawiki/index.php”
  • allinurl:”.org/mediawiki/index.php”
  • allinurl:”.info/mediawiki/index.php”
  • allinurl:”.com/wiki/index.php”
  • allinurl:”.net/wiki/index.php”
  • allinurl:”.org/wiki/index.php”
  • allinurl:”.info/wiki/index.php”
  • allinurl:”.edu/wiki/index.php”
  • allinurl:”.gov/wiki/index.php”
  • site:.edu inurl:wiki
  • site:.gov inurl:wiki
  • allinurl:”http://wiki.”
  • allinurl:”http://mediawiki.”
  • allinurl:”http://wikka.”
Some additional Wiki footprints:
  • inurl:wiki
  • keyword inurl:wiki
  • inurl:MediaWiki_talk
  • keyword “wiki” (site:.edu)
  • inurl:”/groups/*/wiki/” “keyword”
  • “The wiki, blog, calendar, and mailing list”
  • “Log in to my page” “wikis”
  • inurl:groups “log in to my page”
  • “updates” “wikis” “blogs” “calendar” “mail”
  • “Mac OS X Server – Wikis”
  • “first” “prev” “1-20 of” “next” inurl:groups
  • “What’s Hot” “Recent Changes”
  • “What’s Hot” “Recent Changes” “Upcoming Events”
  • “What’s Hot” “Recent Changes” “Upcoming Events” “Tags”
  • “What’s Hot” “Recent Changes” “Upcoming Events” “Tags” “Edited”
  • “powered by WikkaWiki” +inurl:”wakka=” “keyword”
  • “WikkaWiki” “1..50000 comments on this page” “keyword”
  • “Powered by WikkaWiki” “[Add comment]” “keyword”
  • “Log in / create account” inurl:index.php?title=Main_Page
  • “powered by mediawiki”
  • inurl:wiki site:.edu
  • inurl:”special:userlogin”
  • “This page was last modified on” inurl:wiki
  • “main page” “random page” inurl:wiki
  • This page was last modified on “wiki”
  • “This page was last modified on”
  • “Log in / create account”
  • “is a registered trademark of the Wikimedia Foundation, Inc.,”
  • “wiki inurl:.edu”
  • wiki inurl:.edu
  • “Toolbox” “This page was last modified”
  • inurl:”wiki/index.php?title=”
  • “Login required to edit”
  • “wiki/index.php?title=Special:Userlogin&returnto”
  • inurl:wiki/index.php?title=Special:Userlogin&returnto
  • “Main Page” “discussion” “edit”
  • “This page has been accessed” “Privacy policy”
  • “This page has been accessed” “Privacy policy” “wiki”
  • “Wiki:About”
  • “There is currently no text in this page, you can search for this page title in other pages or edit this page”
Usually you’ll be able to find wikis for most topics. You might be wondering whether you must restrict yourself to related sites. I say NO! Sure, getting links from related sites gives you a little more power.
But you should not restrict yourself to them. If you want to find niche-specific wikis, simply search for: “footprint (above)” + keyword. Those are a few useful wiki footprints that we’ll put to work later. In the next article you’ll learn how to get high PR backlinks using the footprints we just learned.
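If you’d rather script this step than paste each footprint by hand, a few lines of Python can build the niche-specific queries for you. This is only a sketch: the footprint list is abbreviated from the one above, and the output is plain Google search URLs to open in your browser (scraping results programmatically is against Google’s terms):

    from urllib.parse import quote_plus

    # A handful of the footprints listed above (abbreviated).
    FOOTPRINTS = [
        'allinurl:".edu/mediawiki/index.php"',
        'allinurl:".edu/wiki/index.php"',
        'site:.edu inurl:wiki',
        '"powered by mediawiki"',
    ]

    def footprint_queries(keyword):
        """Combine each footprint with a niche keyword, as described above."""
        for footprint in FOOTPRINTS:
            query = footprint + " " + keyword
            # Build a Google search URL you can open in a browser.
            yield "https://www.google.com/search?q=" + quote_plus(query)

    for url in footprint_queries("gardening"):
        print(url)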

Tuesday, October 29, 2013

The Value of Referrer Data in Link Building

Before we get into this article, let me first state: link building is not dead.  There are a lot of opinions floating around the web on both sides; this is just mine.  Google has shut down link networks and Matt Cutts continues to make videos on what types of guest blogging are OK.  If links were dead, would Google really put in this effort?  Would anyone get an “unnatural links” warning?
The fact is, links matter.  The death is in links that are easy to manipulate.  Some may say link building is dead but what they mean is, “The easy links that I know how to build are dead.” 
What does this mean for those of us who still want high rankings and know we need links to get them?  Simply, buckle up, because you have to take off your gaming hat and put on your marketing cap.  You have to understand people and you have to know how to work with them, either directly or indirectly.
I could write a book on what this means for link building as a whole, but this isn't a book, so I'll try to keep focused.  In this article, we're going to focus on one kind of link building and one source of high quality link information that typically goes unnoticed: referrer data.
I should make one note before we launch in: I'm going to use the term “referrer data” loosely, to provide additional value.  We'll get into that shortly but first, let's see how referrer data helps and how to use it.

The Value Of Referrer Data

Those of you who have ignored your analytics can stop reading now and start over with “A Guide To Getting Started With Analytics.”  Bookmark this article and maybe come back to it in a few weeks.  Those of you who do use your analytics on at least a semi-regular basis and are interested in links can come along while we dig in.
The question is, why is referrer data useful?  Let's think about what Google's been telling us about valuable links: they are those that you would build if there were no engines.  So where are we going to find the links we'd be happy about if there were no engines?  Why, in our traffic, of course.
Apart from the fact that traffic is probably one of the best indicators, if not the best indicator, of the quality and relevancy of a link to your site, your traffic data can also help you find the links you didn't know you had and show you what you did to get them. Let's start there.

Referrers To Your Site

Every situation is a bit different (OK – sometimes more than a bit) so I'm going to have to focus on general principles here and keep it simple. 
When you look at your referrer data, you're looking for a few simple signals.  Here's what you're looking for and why (a short log-mining sketch follows the list):
  1. Which sites are directing traffic to you?  Discovering which sites are directing traffic to you can give you a better idea of the types of sites you should be looking for links from (i.e. others that are likely to link to you, as well). You may also find types of sites you didn't expect driving traffic. This happens a lot in the SEO realm, but obviously can also happen in other niches.  Here, you can often find not only opportunities, but relevancies you might not have predicted.
  2. What are they linking to?  The best link building generates links you don't have to actively build. The next best are those that drive traffic.  We want to know both. In looking through your referrer data, you can find the pages and information that appeal to other website owners and their visitors.  This will tell you who is linking to you and give you ideas on the types of content to focus on creating.  There's also nothing stopping you from contacting the owner of the site that sent the initial link and informing them of an updated copy (if applicable) or other content you've created since that they might also be interested in.
  3. Who are they influential with?  If you know a site is sending you traffic, you can logically assume the people who visit that site (or the specific sub-section, in the case of news-type sites) are also interested in your content (or at least more likely to be interested than an audience found through standard mining techniques).  Mining that publisher's followers for social connections to get your content in front of them can increase your success rate in link strategies ranging from guest blogging to pushing your content out via Facebook paid advertising.  Admittedly, this third area of referrer data is more akin to refining a standard link list, but it's likely a different audience than you would otherwise have encountered (and one with a higher-than-standard success rate for link acquisition or other actions).
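Here's what that mining can look like in practice: a minimal Python sketch that tallies external referring domains (signal 1) and the pages they send visitors to (signal 2). The domain and log path are placeholders, and the parsing assumes a standard Apache/Nginx combined-format access log; an export from your analytics package would work just as well:

    from collections import Counter
    from urllib.parse import urlsplit

    MY_DOMAIN = "example.com"  # placeholder: your own domain

    referring_domains = Counter()
    landing_pages = Counter()

    # Assumes a combined-format access log; the filename is hypothetical.
    with open("access.log") as log:
        for line in log:
            parts = line.split('"')
            if len(parts) < 6:
                continue  # skip malformed lines
            request, referrer = parts[1], parts[3]
            ref_domain = urlsplit(referrer).netloc
            # Keep only genuine external referrers (signal 1).
            if not ref_domain or MY_DOMAIN in ref_domain:
                continue
            referring_domains[ref_domain] += 1
            # The requested path is the page they linked to (signal 2).
            fields = request.split()
            if len(fields) >= 2:
                landing_pages[fields[1]] += 1

    print("Top referring domains:", referring_domains.most_common(10))
    print("Most-linked landing pages:", landing_pages.most_common(10))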
As I noted above, I plan to use the term referrer data loosely.  As if point three wasn't loose enough, we're going to quickly cover a strategy that ties nicely with this: your competitor's referrer data.

Competitor Data

You probably can't call up a competitor and ask them for their traffic referrer data (if you can, I wish I were in your sector).  For the rest of us, I highly recommend pulling backlink referrer data for your competitors using one of the many great tools out there.  I tend to use Moz Open Site Explorer and Majestic SEO personally, but there are others.
What I'm interested in here are the URLs that competitors' backlinks point to.  While the homepage can yield interesting information, it can often be onerous to weed through, and I generally relegate it to separate reviews over different link time frames.
Generally, I will put together a list of the URLs being linked to, then review these as well as the pages linking to them.  This helps give us an idea of potential domains to target for links, but more importantly, it lets us know the types of relevant content that others are linking to.
If we combine this information with the data collected above when mining our own referrer data, we're left with more domains to seek links on and broader ideas for content creation.  You'll probably also find other ways the content is being linked to. Do they make top lists?  Are they producing videos or whitepapers that are garnering links from authority sites?  All of this information meshes together to make the energy you put into your own referrer mining more effective, allowing you to produce a higher number of links per hour than you could with your own data alone.
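Once you've exported a competitor's backlinks (both Open Site Explorer and Majestic can export CSVs, though their column names differ, so the one below is a placeholder), a quick set comparison surfaces the domains linking to them but not yet to you:

    import csv

    def referring_domains(csv_path, column="Source Domain"):
        """Collect the unique linking domains from a backlink export.
        The column name varies by tool; "Source Domain" is a placeholder."""
        domains = set()
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                domains.add(row[column].strip().lower())
        return domains

    ours = referring_domains("our_backlinks.csv")            # hypothetical file
    theirs = referring_domains("competitor_backlinks.csv")   # hypothetical file

    # Domains that link to the competitor but not to us: the outreach gap.
    for domain in sorted(theirs - ours):
        print(domain)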

Is This It?

No.  While mining your referrer data can be a great source of information regarding the types of links you have that you should be seeking more of, it's limited to the links and traffic sources you already have.  It's a lot like looking to your analytics for keyword ideas (prior to (not provided), at least).  It can only tell you what's working among what you already have.
A diversified link profile is the key to a healthy long term strategy.  This is just one method you can use to help find what works now and keep those link acquisition rates up while exploring new techniques.

Thursday, October 24, 2013

Facebook Launches New Retargeting Capabilities

Facebook Launches New Retargeting Capabilities - New retargeting options are coming to Facebook that won't force you to go through FBX (Facebook Ad Exchange). 

Advertisers will no longer need to manage their Facebook retargeting campaigns through demand-side platforms (DSPs). Advertisers will now be able to manage retargeting programs directly through Facebook's interface.
 
Currently, through FBX, advertisers can only serve their ads on desktops within the news feed or on the right rail. 

Advertisers also have to buy their ad space through DSPs. With tens of millions of users accessing Facebook on their mobile devices each month, this new feature will allow advertisers to target individuals on their mobile devices.

The New Features

Custom Audiences will allow advertisers to set up retargeting campaigns directly in Facebook's interface. Advertisers will also be able to overlay standard Facebook targeting options such as demographic information. This is something that is not currently available within FBX through DSPs and provides a distinct advantage.

This advertising option will also be available for targeting mobile devices, which make up a large share of impressions delivered on Facebook. Essentially, you'll be able to retarget across multiple devices because users have to log in to their Facebook account on each of them.

What FBX Still Does Better

While Facebook added features such as allowing retargeting ads on mobile devices and overlaying traditional targeting tactics, there are a few ways FBX still trumps this new option.

One of those is predictive buying – if an individual continuously browses for a certain product or service, FBX's predictive buying capabilities will allow advertisers to show an ad that matches what the consumer has been looking at.

How Does This Help Advertisers?

The control and optimization efforts can now be done directly in the Facebook interface. While DSPs can be great for automated bidding, if you happen to run a direct response type of campaign, having those bidding controls available to your PPC manager can be beneficial.

Being able to overlay the standard targeting options within Facebook will also present advertisers with a great opportunity. If you know a certain demographic makes the buying decisions or purchases, you'll be able to add that level of targeting on as well, increasing your chance of conversion.

Why Did Facebook Make This Move?

By working through DSPs, a significant amount of the revenue generated by ads on Facebook was heading to the DSPs. Facebook Chief Operating Officer Sheryl Sandberg said FBX generated "a very small part" of Facebook's revenue.

This move will help keep more ad revenue in house with Facebook and not heading off to the DSPs. This also opens up retargeting to the mobile space on Facebook which should present significant revenue opportunities as well.

source: http://searchenginewatch.com

Monday, October 21, 2013

Technical SEO for Nontechnical People


Technical SEO for Nontechnical People - Asking a nontechnical person to understand technical SEO is like asking an infant to understand Pig Latin.

Many of us didn't understand the difference between client side and server side, HTML and CSS, hell, even coding and programming when we started our careers. But in this tightly integrated industry, you must take the time to learn it.

Fixing even the smallest technical issue could have more benefits than even the best links you build. While you may not be able to personally fix these issues, knowing what to look for and how to fix it – and having a developer handy to implement it – is critical.
So for all my fellow nontechnical people, here are the basics behind what you need to look out for with technical SEO.

Redirects and Status Codes

Looks Like

Whenever someone requests one of your pages, your web server sends back a response with a status code saying what happened. There are a lot of status codes, and Moz's "Response Codes Explained With Pictures" does a great job explaining each.

Meaning

The basics:
  • 200: Hey Google, all good. Page loads just fine.
  • 301: Hey Google, I actually permanently relocated that information over here.
  • 302: Hey Google, this page is over here for the time being, but it won't stay like that forever.
  • 404: Hey Google, this page doesn't exist. Nice try.

What to Use

Screaming Frog will pull a whole list of your URLs and let you know the status code associated with each. I also have our support team pull a list of all URLs from our database and cross-reference, so I'm not missing anything.
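If you'd rather script a quick pass than run a full crawl, here's a minimal sketch along the same lines. It assumes a urls.txt file with one URL per line, and requests is a third-party library:

    import requests

    with open("urls.txt") as f:  # hypothetical file: one URL per line
        urls = [line.strip() for line in f if line.strip()]

    for url in urls:
        # allow_redirects=False exposes the raw 301/302 instead of the
        # final destination's 200.
        response = requests.head(url, allow_redirects=False, timeout=10)
        print(response.status_code, url,
              response.headers.get("Location", ""))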

Keep in Mind

Make sure the front-end and the back-end match. If you're permanently redirecting, the status code needs to be a 301 and not a 302. If not, then you aren't passing any value from Page A to Page B, and Page B will probably never rank.
Additionally, don't redirect everything back to the home page. It should be done on a 1-1 basis.
If the page really doesn't exist, the status code needs to be a 404, not a 200. A missing page that returns a 200 is called a soft 404, and it creates confusing signals between Google and your web server.
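One way to check all of this at once is to walk a redirect chain hop by hop: you'll see 302s where a 301 was intended, chains that all dump to the home page, and missing pages that answer 200 instead of 404. A rough sketch, again using requests (the starting URL is a placeholder):

    from urllib.parse import urljoin

    import requests

    def trace_redirects(url, max_hops=10):
        """Print each hop in a redirect chain."""
        for _ in range(max_hops):
            response = requests.get(url, allow_redirects=False, timeout=10)
            print(response.status_code, url)
            if response.status_code not in (301, 302, 303, 307, 308):
                break
            # Location may be relative, so resolve it against the current URL.
            url = urljoin(url, response.headers["Location"])

    trace_redirects("http://www.example.com/old-page")  # placeholder URL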

Canonicals Are Cool

Looks Like

In the source code of Page A, you have <link rel="canonical" href="http://www.example.com/page-b">.

Meaning

Hey Google, I know you're on Page A, but that content is actually best read on Page B.

When to Use

Duplicate or near duplicate content, like:
  • Duplicate home page URLs
    • www.example.com and www.example.com/default.aspx
  • Duplicate paths to the same page
    • www.example.com/news/page-a/ and www.example.com/press/page-a

Keep in Mind

Your canonicalized page is the end-all, be-all. It's the page that you want indexed by search engines, so that URL also needs to be in your sitemap and internal linking structure. In the example above, users and crawlers will still get to Page A, but Page B is getting all the action.
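Since the canonical target has to line up with your sitemap and internal links, it's worth spot-checking pages programmatically. A small sketch using requests and Python's standard-library HTML parser (the page URL is a placeholder):

    from html.parser import HTMLParser

    import requests

    class CanonicalFinder(HTMLParser):
        """Pull the rel="canonical" href out of a page's source."""

        def __init__(self):
            super().__init__()
            self.canonical = None

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "link" and attrs.get("rel") == "canonical":
                self.canonical = attrs.get("href")

    def canonical_of(url):
        parser = CanonicalFinder()
        parser.feed(requests.get(url, timeout=10).text)
        return parser.canonical

    page = "http://www.example.com/news/page-a/"  # placeholder URL
    print(page, "canonicals to", canonical_of(page))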

URLs Are Case Sensitive

Looks Like

http://www.example.com/angry-erin-attacks and http://www.example.com/Angry-Erin-Attacks

Meaning

Hey Google, I know you think you're on two different URLs, but it's really just the same content.

What to do

Your developers should be able to implement a site-wide canonical pointing all uppercase URLs to their lowercase counterparts. That way, duplicate content is fixed and your users don't have an interrupted experience.
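The exact fix depends on your server stack, but the core logic is simple: compute the lowercase form of the requested URL and, if it differs, point the canonical (or a 301) at it. A framework-agnostic sketch of that decision in Python:

    from urllib.parse import urlsplit, urlunsplit

    def lowercase_canonical(url):
        """Return the URL with host and path lowercased. The query string
        is left alone, since parameter values can legitimately be case
        sensitive."""
        parts = urlsplit(url)
        return urlunsplit((parts.scheme, parts.netloc.lower(),
                           parts.path.lower(), parts.query, parts.fragment))

    requested = "http://www.example.com/Angry-Erin-Attacks"
    canonical = lowercase_canonical(requested)
    if requested != canonical:
        # Emit <link rel="canonical" href="..."> pointing here, or issue
        # a 301 if you prefer redirects.
        print("Canonical:", canonical)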

Keep in mind

I like having all my URLs lowercase, since few people capitalize URLs when they're typing them into the address bar anyway. Plus, it keeps things cleaner. You could 301 these, too, but I prefer to limit the number of redirects to keep things as simple as possible.

URL Parameters

Looks Like

http://www.example.com/angry-erin?color=green&size=15&id=12345&print=1

Meaning

Whenever you see a "?" in a URL, that's the start of your URL parameters.
Some parameters give more information about what's on the page and change the content, like signifying a size or a color on an ecommerce site. Other parameters don't change the content at all and are just used for tracking purposes, like recording where a referral came from or flagging the print version of a page.
For example:

http://searchenginewatch.com/article/2299970/Securing-the-Future-of-SEO-Global-Brands-5-Not-Provided-Solutions?utm_content=buffera3518&utm_source=buffer&utm_medium=twitter&utm_campaign=Buffer

In the above URL, SEW has additional parameters so it knows how many people are going to that article from Twitter. If you remove everything after the "?", the page content will stay the same.

http://www.example.com/catalog/sc-gear/women-39-s-kira-2-0-triclimate.html?variationId=A5L&variationName=BOREALIS%20BLUE

In the above URL, the parameters after the "?" are additional filters on the same product, in this case the color. Removing everything after the "?" will still bring you to the jacket, but not in the color that you want.

What to do

URL parameters can cause duplicate content, so you'll need to keep an eye on what's necessary for the page content and what Google can ignore. In Google Webmaster Tools, you can tell Google how it should read your parameters, but you should implement some canonicals too.
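A practical way to handle the split between content-changing and tracking-only parameters is to strip the tracking ones when generating canonical URLs. Here's a sketch (the tracking list is illustrative; extend it to match your own tagging scheme):

    from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

    # Parameters that only track the visit and never change page content.
    TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                       "utm_content", "utm_term", "print"}

    def strip_tracking(url):
        """Drop tracking-only parameters, keeping content-changing ones."""
        parts = urlsplit(url)
        kept = [(k, v) for k, v in parse_qsl(parts.query)
                if k not in TRACKING_PARAMS]
        return urlunsplit((parts.scheme, parts.netloc, parts.path,
                           urlencode(kept), parts.fragment))

    url = ("http://www.example.com/catalog/jacket.html"
           "?variationId=A5L&utm_source=buffer&utm_medium=twitter")
    # Keeps variationId (it changes the content) and drops the utm_* noise.
    print(strip_tracking(url))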

Summary

Even a basic understanding of what to look for in technical SEO can get you far. So many people today focus too heavily on off-page SEO, but if a site is technically flawed, it won't matter how many links you have or how good your content is.

source: http://searchenginewatch.com