Is Google Analytics Hurting your Business?

Given that everybody in the conference hall was involved in digital marketing in some way, and given how much website visitor tracking is done through Google Analytics, you might even speculate that was a foolhardy statement and that the only thing that saved the speaker was the cordon of riot police brought in specially for this talk. But then the man on the platform was Ammon Johns – a man with almost 20 years of SEO experience who is recognised by the industry as someone with a huge amount of SEO knowledge and who speaks at some of the largest digital marketing conferences around – so the riot police were little troubled, although many eyebrows were raised.

It turns out that the main aim of the talk wasn’t actually to get everybody in the room to boycott Google, but to make us think. And that’s what I’d like you to do throughout this post – question the common wisdom that Google Analytics is the best thing since hypertext protocols and ask yourself whether it might actually be harming your business.

Why is Google Analytics so great?

It is a truth universally acknowledged that Google Analytics is brilliant for four reasons:

  1. It’s very easy to use
  2. Everyone else uses it, so it must be the best
  3. It integrates brilliantly with AdWords
  4. It’s free. Who can argue with free?

The big question is, are these really the right reasons for choosing an analytics tool? Does “easy to use” mean “easy to get actionable insights from” or something else? With Google being a hugely successful corporation, are they really giving me a huge chunk of data for free or am I paying in some other way?

Is Google Analytics actually easy to use?

Google Analytics is definitely easy to set up. It’s also easy to get data out of, and it’s easy to get rid of data you don’t want. But spitting out data isn’t the point of a web analytics tool. The point is to provide insights that let you build testable hypotheses and so improve the performance of your platform.

We’ve all seen the Google Analytics home screen – now the Audience Overview screen – with its visitor graphs and its language breakdowns. But have you really studied it? Head over to Analytics, take a look at that Audience Overview screen and ask yourself “how can I improve my business with these data and these data alone?” I’ll give you a few minutes of thinking time.

Did you manage to find anything? I would be very surprised if you did. Now that’s quite a shocking statement: you went to the first – and so by definition most important – screen of a tool that millions of people use every day and I don’t expect you to have found anything useful. Ouch.

That’s because while Google Analytics is very easy to set up and it’s very easy to see the data it spits out, it’s actually very difficult to get real insight. Almost every valuable analysis requires creating a custom report. You want to use cohort analysis to determine whether you have taken the right approach on a channel? Custom report. You want to see which blog posts drive the most and best engagement? Set up JavaScript events then build a custom report. You want to integrate offline sales data from your CRM? No can do; you will be able to when you get Universal Analytics, but only using (all together now) a custom report.
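
If you go down that JavaScript-events route for engagement, the tracking side is only a few lines. Here’s a minimal sketch using the classic ga.js event syntax, assuming the standard tracking snippet (and its _gaq queue) is already on the page; the “Blog” / “read-75-percent” names are mine, not a Google Analytics convention:

    // Fire one event per pageview once the reader has scrolled through 75% of the post
    document.addEventListener('scroll', function onScroll() {
      var doc = document.documentElement;
      var seen = (window.pageYOffset + window.innerHeight) / doc.scrollHeight;
      if (seen > 0.75) {
        _gaq.push(['_trackEvent', 'Blog', 'read-75-percent', location.pathname]);
        document.removeEventListener('scroll', onScroll); // report each reader only once
      }
    });

Events recorded this way then become the raw material for – you guessed it – a custom report.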

So there are plenty of things in Analytics that could be easier. But how can we make them easier? The problem here comes not from the data being collected but from the way it’s displayed. One option is to suck the data straight in from the API to your own set of reports that can not only be branded nicely but will only show the graphs you want to see, set up in the way you want. It’s not actually all that difficult for a good developer to do, and if it saves you time each week or month then you can make a good business case for investing in such a solution.
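
To give a feel for how small the plumbing is, here’s a sketch of one such query against the Core Reporting API (v3). The view ID and OAuth token are placeholders, and drawDashboard stands in for whatever charting code your own reports would use:

    // Sessions and goal completions by traffic source/medium, last 30 days
    var accessToken = 'YOUR_OAUTH_TOKEN'; // obtained via the usual OAuth 2.0 flow
    var params = [
      'ids=ga:YOUR_VIEW_ID',
      'start-date=30daysAgo',
      'end-date=today',
      'metrics=ga:sessions,ga:goalCompletionsAll',
      'dimensions=ga:source,ga:medium',
      'access_token=' + accessToken
    ].join('&');

    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'https://www.googleapis.com/analytics/v3/data/ga?' + params);
    xhr.onload = function () {
      var report = JSON.parse(xhr.responseText);
      // report.rows is an array like [["google", "organic", "1234", "56"], ...]
      drawDashboard(report.rows); // your own rendering function takes it from here
    };
    xhr.send();

The hard part isn’t fetching the data; it’s deciding which handful of graphs actually answer your business questions.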

If you can make the business case for building a custom interface for Google Analytics, though, it might be worth asking yourself the question posed at the start of this post: “is Google Analytics really the best solution out there for me or can I justify investing in something else?” Take a couple of hours to explore the web analytics ecosystem and see if you can find a solution that would make it easier to deliver real, actionable insight.

Just because everyone else uses it, is Google Analytics really the best?

I started the last section off with a challenge, so I’ll do the same here. Don’t worry, this will be a simple one with no trips off to Analytics. Ready? Define “the best”. Go!

OK, so that’s actually what a mathematician would define as “complex”: a question that’s easy to ask but difficult to answer. The reason it’s difficult to answer is twofold:

  1. This is probably the first time we’ve ever asked ourselves this question
  2. The answer depends hugely on context: who is asking questions of our data, why they want answers, who is going to do the analysis, and a whole range of other factors

The reason I asked the question is that if we can’t define what “the best” means, how can we say Google Analytics is the best solution?

There are some things it does brilliantly. Tracking visitor flow, aggregating data over multiple pages and channels, letting us look at engagement. But there are some questions it simply cannot answer. For example, what would your reply be if your boss asked:

  • “The average time spent on this landing page is two minutes. Is that because they were reading the copy or because they were comparing our product to our competitors?”
  • “How well are the videos on our site engaging visitors?”
  • “People jump from their mobile, to their work PC, back to their mobile on the train home, then onto their home computer. How can we track this happening to get a real picture of cross-device behaviour?”
  • “What happens if people have cookies turned off?”

Hands up all those who said “ermmm”.

There are tools out there that can do these things:

  • Crazy Egg gives you heatmaps showing what proportion of people have scrolled down a page and how many have clicked links on a given page (I personally love Crazy Egg. No affiliation, they just make a great product).
  • Digital Analytix from comScore lets you track individuals across devices. Universal Analytics will bring in this behaviour to some extent, but only for people who sign in to their Google accounts while browsing.
  • While you could cobble together a video analysis using time on page, JavaScript events, and a pinch of salt, Digital Analytix gives you data on browser behaviour during video streaming.
  • Piwik is an open source (read “free and fully customisable”) analytics tool that can be configured not to set cookies at all, which sidesteps the problem of being unable to track people who have turned off cookies (there’s a minimal setup sketch after this list).
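
For the curious, a cookie-less Piwik setup is the standard loader plus one extra line; the tracker URL and site ID below are placeholders for your own install:

    var _paq = _paq || [];
    _paq.push(['disableCookies']);      // track without setting any first-party cookies
    _paq.push(['trackPageView']);
    _paq.push(['enableLinkTracking']);
    (function () {
      var u = '//example.com/piwik/';   // wherever your Piwik instance lives
      _paq.push(['setTrackerUrl', u + 'piwik.php']);
      _paq.push(['setSiteId', 1]);
      var d = document, g = d.createElement('script'),
          s = d.getElementsByTagName('script')[0];
      g.async = true;
      g.src = u + 'piwik.js';
      s.parentNode.insertBefore(g, s);
    })();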

A screenshot from Crazy Egg used on the Optimizely blog. When a CRO tools company starts using a web analytics tool it could be interesting to take a look (Image credit: Crazy Egg)

For a lot of people those are some pretty fundamental questions that can’t be answered. But some people know enough about JavaScript – or employ people who do – that they can set up event listeners to get a portion of this data. And some people are not asking these questions. But think about whether Google Analytics has ever failed to give you the answer to a question, or whether you’ve ever left a question unasked because you knew it couldn’t be answered. If that has happened a few times, it might be a good time to head off and do that research into other providers.

Anything free is amazing. But is Analytics really free?

Now I imagine that a lot of people reading that heading have straight away thought “of course it’s really free, we don’t give them a penny”. But think about this: in using Analytics you give Google all of the data. That gives them knowledge about you and your customers, and knowledge, as we all know, is power. So you might not be paying Google cash, but you are definitely helping them keep their position as one of the most powerful companies on the planet.

But more than that, if knowledge is power and power is money, then surely learning to gather and manipulate data yourself is a great opportunity, and one that will make you a fair return one day. As Ammon said in his talk, “Using Google Analytics doesn’t make you good with data, just with Google Analytics”. Because if you just accept what Analytics pukes out at you, are you really asking the difficult questions that will help your business to improve?

One last thought: the data that Google Analytics gets is yours for free anyway. It’s your information about people coming to your website and interacting with your services, not Google’s. Lots of companies are moving towards data warehouses now, keeping all of their information within their own domain instead of giving it to third parties. And if you have any concerns about privacy following the recent revelations about the NSA and GCHQ then you might consider them pretty sensible people.

When is “Good Enough” good enough?

This was actually going to be the title of this post, but I don’t quite have Ammon’s nerve (and it’s a great topic for a project management post so has been filed away for later use).

As we’ve seen, Google Analytics is not the best solution out there. It’s not even the best free solution out there for some people. But what it is is “good enough”. It’s good enough to get some profound insights out of if you work with it and, like Excel, even better if you can build a custom dashboard. It’s good enough if you value those insights over privacy. It’s good enough if you can’t invest the time to learn a new tool that would give you similar insights. It’s good enough if you ask it the right sort of questions.

It might be for him, but is it for you? (Image credit: The Meme Wiki)

But – and it’s a big but – that might not be enough for you and your company. Do you work for a “data-driven organisation”? Do you want to ask hard questions, make big changes, and get big improvements as a result of the data in your hands? Do you want to stand out from all of the other companies and agencies out there who do analytics in the same way?

If “good enough” suits your needs, dismiss this post with a wave of the hand. But if you think that you might need more than “good enough” in the future, or if you really want to be a properly data-driven decision maker, or if you think that big changes will give you big results, I urge you to think about your choices. Explore the options out there; even if you go back to Google Analytics, you’ll come back with more knowledge than you had before. But if you don’t go back, you can look forward to a long, exciting, and rewarding journey.

About BenjaminMorel — Benjamin Morel is an agency-based digital marketeer and project manager working for Obergine in Oxford, UK. He is passionate about inbound marketing, especially building strategies centered around data and communication. Follow him on Twitter @BenjaminMorel or on Google+.

Is Buying Domain Names Profitable?

This is in response to a question a fellow Moz community member once asked in Q&A, and we thought it deserved its own article. Buying up expired domains or purchasing keyword-driven domains is becoming more popular amongst the internet “get rich quick” crowd. The big question is: can you make a profit by buying and selling domain names? If you get the right one, sure. If you plan on repeating the process over and over, probably not.

Something wise my father once told me: “Something is only worth how much someone is willing to pay for it.” This small, seemingly unimportant statement has guided me in many selling and purchasing decisions in my life. Sometimes, it makes the reality all too apparent. So, is buying a domain with the intention of selling it a good idea? Let’s break down the details, and talk to some people that actively pursue this method. Yeah, we know a guy.

1. Labor-wise, it doesn’t add up

In the grey hat SEO world, the thought is that you can take a keyword-driven domain, do a quick optimization to get the site ranking, and sell it off at a profit. It could, and does, happen daily. But how much time is invested in optimizing a site to get to page one, versus how much the site will sell for? (Remember that quote at the beginning of this article?) Let’s put it into simple math:

According to Sedo.com, the average cost of a purchased .com domain is around $2,100–$2,300. Depending on how much your time is worth, you may have to take a hit on labor cost to get the domain where it needs to be in order to entice potential buyers. Here is some theoretical math for you number crunchers:

  • Optimize the keyword-driven site to rank in Google: approx. 30 hours @ $50/hour labor cost (using a low $50 rate for the sake of example) = $1,500
  • Time spent cold-calling and email-blasting potential buyers in the industry niche: approx. 8 hours @ $50/hour = $400
  • According to the data provided by Sedo.com, likely selling price for said site: approx. $2,300 max ($1,000–$1,500 more likely)
  • Gained vs. invested = $2,300 − $1,900 = $400 profit

I’m not against making money in any way, but $400 doesn’t really seem worth the effort and coffee expense invested (I like the good stuff). This is a very basic example used to put the costs in perspective. Most SEO providers charge more than $50 per hour, and you get what you pay for, so the real labor bill is probably much higher than the estimate above; or, if the domain is already ranking high and the owner knows it is worth selling, the initial purchase price is higher instead. Since this is often repeated many times over for multiple domains, it can get time-consuming and expensive.

2. You might get someone’s dirty laundry

It’s ranking high today! What could be the problem? NO. JUST STOP. Unless you know the entire history of a domain, you may be setting yourself up for failure before you begin. SEOs (and business owners) use a variety of tactics to get a site ranking high in search results; some of these methods we’ll just call “questionable”. They include buying links, overuse of directory submissions (non-industry related), duplicate listings, poor-quality backlinks, and guest blog comments.

With a domain of this type, it could be very easy to get it to rank quickly, before the powers that be see the domain for what it is and put it on the blacklist. While that evaluation is still pending, you could end up with a domain that has a lot of problems coming down the pipeline that you are completely unaware of.

3. It undermines your quality and reputation

If you know how to get websites to page one, why are you not marketing that fact to potential clients and consumers or would-be domain purchasers? Trying to get a keyword-driven domain to rank high and sell it off for a profit isn’t a good investment, either time-wise or for the long-term success of your company. Instead, use sites that you have already ranked high as an example of how awesome you are, and sign clients up for a monthly fee rather than trying to sell them a “make money now” domain.

  • Demonstrate SEO prowess to potential client using existing sites as proof of results
  • Sign up client for basic SEO services at $600-1000 (depending upon site and competition) per month
  • Invest 30-40 hours in making the client’s site soar in results
  • Client is happy. Refers friends and other business owners your way
  • You get: more clients, a better reputation, monthly recurring income, and a name as a quality SEO provider.

4. It’s not sustainable income

Remember that guy we said we knew, back in the second paragraph? Well, we talked to him to find out if all the bling and glamor behind selling domains was true. This is what he said:

  • US: “Is buying domains with the intent of selling them a sustainable model?”
  • HIM: “Honestly, it depends. Overall, I’d have to say no, because you never know what you are going to get in return. One week I might make $900 off one domain, but the next week I’m stuck with five nobody wants. However, if you are a great salesman, you can make it work.”
  • US: “What type of domains do you see being the most sought after?”
  • HIM: “Mainly small-medium sized local businesses looking for a way to increase their ranking. Most already have a branded domain in place, and have heard that using a keyword domain can help. Or they have seen a competitor ranking using that method. The problem I face is that they don’t have a lot of money to spend, so I get lowballed on the asking price. There have been a few that make a ton of profit vs what I purchased it for, but that boils down to luck: what is for sale, when I find it, things like that.”
  • US: “When do you think it makes the most sense to buy or sell a domain?”
  • HIM: “When someone is selling the company and has a high-ranking domain already in place. Those companies stand to make the most money by selling to their competitors, who always seem to be willing to pay. (laughs)”

Selling a domain also cuts off any chance of making additional money from that client, unless you start the process over again with another domain. You could use the domain’s position as proof of your SEO prowess, but once it’s already ranking and optimized, what other services can you entice them with?

5. Waiting on and finding buyers can be a pain in the arse

We did a search for GoDaddy and Sedo domain experiences, and many of the accounts we found were negative. In one such example, Online Domain stated that “GoDaddy is destroying domain sales.” The author speaks about having to wait up to 80 days to get his domain sold, the whole time being questioned on his asking price.

But wait, there’s more. So you are looking to sell a premium listing that is not keyword-driven? Be ready to take a hit. On all premium domain sales, GoDaddy takes a 30% commission fee, deducted before they remit the payment. Yikes. When you are already operating on a slim margin, 30% can be what makes or breaks the bank for that sale.


Author Photos are Gone: Does Google Authorship Still Have Value?

On June 25, 2014, Google’s John Mueller made a shocking announcement: Google would be removing all author photos from Google search results. According to the MozCast Feature Graph, that task was fully accomplished by June 29.

In this post I will:

  • Give a brief overview of how Google Authorship got to where it is today.
  • Cover how Google Authorship now works and appears in search.
  • Offer my take on why author photos were removed.
  • Investigate the oft-repeated claims of higher CTR from author photos.
  • Suggest why Google Authorship is still important, and speculate on the future of author authority in Google Search.

A Brief History of Google Authorship

The Google Authorship program has been my wheelhouse (some might say “obsession”) since Google first announced support for Authorship markup in June of 2011. Since I am both an SEO and a content creator, Google certainly got my attention in that announcement when they said, “…we’re looking closely at ways this markup could help us highlight authors and rank search results.”

Of course, in the three years since that blog post, many search-aware marketers and content creators also jumped on the Google Authorship bandwagon. Occasional comments from prominent Google staffers that they might someday use author data as a search ranking factor, along with Bill Slawski’s lucid explanations of the Google Agent Rank patent, fueled the fire of what most came to call “author rank.”

Below is a video from 2011 with Matt Cutts and Othar Hansson explaining the possible significance of Authorship markup for Google at that time:

During the three years since Google announced support for rel=author markup, there have been many changes in how Authorship appeared in search results, but each change only seemed to buttress Google’s continued support for and improvement of the program.

In the early days of Google Authorship, almost anyone could get the coveted face photo in search by correctly setting up Authorship markup on their content and linking to that content from their Google+ profile. As time went on, Google became pickier about showing the rich snippet, and some sort of quality criteria seemed to come into play. Still, it was not too difficult to earn the author snippet.

Then at Pubcon New Orleans in October 2013, Matt Cutts announced that in the near future, Google would start cutting back on the amount of Authorship rich snippets shown in search. He said that in tests they found when they cut out 10% to 15% of the author snippets shown, “overall quality went up.” In December of that year we saw the promise fulfilled as the percentage of queries showing author photos dropped, and many individual authors either started seeing a byline-only snippet for much or all of their content, or losing Authorship snippets completely.

It was clear by then that Authorship as a search feature was a privilege, not a right, and that as much as Google seemed to want people to adopt Authorship markup, they were determined to police the quality of what was shown in search associated with that markup. But none of that prepared us for what has happened now: the complete removal of author photos from global search.

Google Authorship without Photos in Search

Here are the fundamental facts about how Authorship is used in search as of this writing:

1. The only Authorship rich snippet result now available in global search is an author byline. Google has dropped author photos entirely (except for some unique exceptions in personalized search; see below). Also, Google dropped the “in xx Google+ circles” link that showed in some cases and led to the author’s Google+ profile.

(Image: an Authorship byline result without a profile photo)

2. Author bylines now link to Google+ profiles. Previously, at least in the US, author bylines in search results linked to a unique Google search page that would show just content from that author. This feature is no longer available.

3. Qualification for an Authorship byline now is simply having correct markup. This was a bit of a surprise given Google’s move last December to differentiate and highlight authors with better quality content who publish on trusted sites. But in a Google Webmaster Central Hangout on June 25, 2014, John Mueller indicated that now as long as the two-way verification (rel=author markup on the content site linked to author’s Google+ profile, and a link back to the content site in the author’s Google+ Contributor To links) could be correctly read by Google, a byline would likely be shown.
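
For reference, the two-way connection is just a pair of links. A sketch, with a placeholder profile ID:

    <!-- On the article page: a visible byline link to the author's Google+ profile -->
    <a href="https://plus.google.com/112233445566778899?rel=author">Jane Author</a>

    <!-- Or, instead, a rel="author" link in the page's head -->
    <link rel="author" href="https://plus.google.com/112233445566778899"/>

    <!-- The loop is closed on the Google+ side: the site must appear in the
         profile's "Contributor to" section, linking back to the content site. -->

If Google can read both halves of that loop, a byline is now likely to be shown.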

You can check for correct Authorship verification for any web page by entering its URL in Google’s Structured Data Testing Tool. If Authorship is correctly connected for the page, you should see a result similar to this:

(Image: Authorship verification preview for an Eric Enge post in the Structured Data Testing Tool)

However, it is well known that this tool isn’t perfect. For example, even though it shows Eric Enge’s post on Copyblogger as being verified, Google has never shown an Authorship snippet for any of Eric’s posts there, and even now does not show a byline for that content. Eric is a very well-known and trusted author who gets a rich snippet for all his other content on the web, and Copyblogger is certainly a reputable site. Why his content there has never displayed an Authorship snippet remains a mystery.

In the Hangout, John Mueller went on to say that in the future they may have to reevaluate showing bylines for everyone who has correct markup, once they get more experience with the byline only results. He promised that there will be continued experimentation. If they see that people are using the bylines as a gauge of how great or trustworthy an author is, that might be impetus enough to try to re-implement some kind of quality factor into whether or not one gets a byline.

So are there actually more Authorship results in search now? If Mueller is correct that Authorship snippets are now based merely on a technically-correct connection, and there is no longer any quality factor, then wouldn’t we expect now to see more Authorship in search, even if only bylines? Not necessarily.

Moz’s Dr. Pete Meyers shared the following with me:

So, in my data set, Authorship [measured the old way – by thumbnail photos] peaked on June 23rd at 21.2% of SERPs (in our 10K data set). Measured the new way [bylines only], Authorship is showing up around 24.0% of SERPs. That could mean that, in absence of the photos, Google has allowed it to appear more often, or it could mean that there were a handful of SERPs with byline-only Authorship before. I suspect it’s the latter, but I have no data to support that.

I agree with Pete’s latter guess. The fact is that from the December 2013 “purging” of Authorship in search until the recent change, there have been two kinds of Authorship results: Those with a photo and byline, and those with byline only. I called the latter “second class Authorship,” and it looked like when Google ran its quality filter through the Authorship results, most lower-quality authors dropped to second class, byline-only results rather than being dropped altogether from Authorship results.

So it appears that the net result is no overall change in the amount of Authorship in search, just an elimination of a “first class” status for some authors.

4. Author photos may still be shown in personalized search for selected Google+ content. This was an unannounced change in Google search that showed up at the same time author photos were being eliminated from global (logged-out-of-Google) search. Now Google+ posts by people you follow on Google+ may sometimes show an author photo when you search while logged in to your Google+ account (personalized search).

The example below is an actual screen capture from my own logged-in search for “Google Plus for Business.” Joshua Berg is in my Google+ circles, and Google shows his relevant Google+ post both elevated in the results (higher than it would occur in my logged-out results) and with his profile photo.

(Image: an author photo shown for a Google+ post in personalized search)

In my testing of this, I have seen that these personalized author photos for Google+ posts are most likely to show if the author is high in the “relevancy” sort in your Google+ circles, and is someone with whom you have engaged fairly frequently.

While not Authorship related, it is interesting to note that Google+ brand pages that you circle and have engaged with may now show a brand logo snippet in personalized search for their Google+ posts. While some other parts of the world have had these branded results for a while, this is entirely new for US Google searches.

(Image: a brand logo snippet shown for a circled Google+ brand page in personalized search)

I’ll have more below on what I see as the significance of these new results and what they may say about the future of Authorship and author authority in Google.

So Why Were Author Photos Removed?

So if Google was committed to continued improvement of the Authorship program, why did they drop photo snippets entirely? Was this a complete reversal, a “beginning of the end for Authorship” as some thought? Or were author photos in search simply not producing the results Google was looking for?

Before I give my take on those questions, I highly recommend Cyrus Shepard’s post “Google Announced the End of Author Photos in Search: What You Should Know.” I agree completely with Cyrus’s take there, and won’t duplicate what he covered. Rather, in the rest of this post I will try to bring some added insights and informed speculation based on my intensive observation of Google’s Authorship program over the past three years.

Let’s start with the explanation given by John Mueller in his announcement post, linked at the beginning of this article. John said:

We’ve been doing lots of work to clean up the visual design of our search results, in particular creating a better mobile experience and a more consistent design across devices. As a part of this, we’re simplifying the way Authorship is shown in mobile and desktop search results, removing the profile photo and circle count. (Our experiments indicate that click-through behavior on this new less-cluttered design is similar to the previous one.)

It sounds like Mueller is linking this change to Google’s “mobile first” initiative. Mobile first seeks to unify, as much as possible, the user experience between desktop and mobile. It is a response to the rapid increase of mobile usage worldwide. In fact, at SMX West earlier this year Google’s Matt Cutts said that he expects Google searches on mobile to exceed desktop searches before the end of 2014.

In subsequent comments on his Google+ post and elsewhere, Mueller elaborated that images in search results take up lots of bandwidth in mobile search, slowing down delivery of results on many devices. They also take up considerable screen real estate on the smaller screens of mobile devices.

But were UX and mobile concerns the only reasons for removing author photos? I seriously doubt that. If author photos were providing a significant benefit to searchers, according to Google’s data, then it is likely they would have worked on some compromise that would have made them more compatible with mobile first.

Furthermore, John Mueller himself, in the aforementioned Hangout, hinted that there were other considerations involved. For example, he commented that there may have been too many author photos for some search results, and that too much of any one feature in search is not a good user experience.

My Personal Speculation. I don’t doubt Mueller’s explanation that the demands of Google’s search user experience efforts were the main driving force behind the removal of author photos, but as I said above, I do not think that was the only reason.

I believe that after much testing and evaluation Google may have decided that author photos for now send a disproportionate signal to searchers. That is, the photos may have been indicating an implied endorsement of result quality that Google is not yet prepared to back up.

Remember that in December we saw Google reduce the number of author photos shown in search as an attempt, according to Matt Cutts, to increase the quality of those results. However, when questioned about the concept of “author rank” (Google using author trust data to influence search results), Cutts consistently speaks about the great difficulty of evaluating such quality or trust. He elaborates that finding a way to do that remains a strong goal at Google, but he doesn’t expect to see it for years to come. (For example, see my remarks on his comments at SMX Advanced last month.)

Given all that, it may be that Google decided the photos had to go for now. They realize they still have a lot of work to do before author trust and quality can be evaluated well enough to influence actual search rankings, and even though Authorship does not currently affect rankings, the photos still might have implied to searchers a trust and authority for the author of which Google could not be fully confident.

In addition, I believe that three years into the Authorship program, Google realized that they were never going to get the vast majority of authors and sites to implement Authorship markup. If author authority is to succeed as a contributor to better search results in the future, Google has to find ways to identify and verify authors and their connected content that are not tied to either markup or Google+. That also will be a long-term project.

So this may actually be merely a temporary retrenchment as Google knuckles down to the hard work of figuring out how to make author authority something truly worthwhile in search.

What About Ad Competition? When the dropping of author photos was announced, there was immediate speculation by many, including Moz’s own Rand Fishkin on Twitter, that the author photos were seen as too competitive with the AdWords ads displayed in search.

(Image: Rand Fishkin’s tweet speculating on author photos competing with ads)

It’s impossible to either prove or disprove such speculation, as only Google holds the data. I personally find it a little hard to believe that it came down to a zero-sum game between author photos and ads. In other words, is it reasonable to think it was either/or – that author photos were so attractive and got clicked so much that, when they appeared, too many people totally ignored the ads?

Also, that speculation is based on the assumption that author photos were, in recent history, huge CTR magnets. In the next section I’ll examine those CTR claims.

What About Author Photo CTR?

One of the most oft-repeated alleged benefits of author photos in search was that they dramatically increased click-through rates (CTR), as people were drawn to those results even if they were lower on the page.

I was as guilty as anyone else in confidently proclaiming in my online articles and conference presentations that “studies have shown” this increase in CTR for Authorship results. So it shocked me as much as anyone when John Mueller in his announcement post said, “Our experiments indicate that click-through behavior on this new less-cluttered design is similar to the previous one.”

First, we should note some ambiguities in Mueller’s statement:

  • He does not actually say “click-through rate,” though that’s what most readers assumed he was talking about. He called it “click-through behavior,” which could refer to other things, such as how quickly people bounced back to the search results after clicking an author photo result. In that case, higher CTR would not be a good thing from a search quality viewpoint.
  • He does not explicitly say that the click-through behavior was for the author photo results exclusively. It could be an evaluation of overall click behavior on search pages that included author photos.
  • This could be a reference to click behavior aggregated across all queries showing author photos. If so, then it may be that while CTR was higher for photo results in some queries, overall the effect may have been a wash.

But were we ever really sure there was as huge a CTR increase for author photo results as was frequently claimed? After investigating those claims, I’m not so sure.

  • Google themselves never made a positive claim of increased CTR for author photos. A much-cited paper by Google researchers on social annotations such as face photos in search was based only on eye-tracking studies and user interviews, not actual click behavior. It actually found that image-based social annotations were not necessarily as attractive to searchers as believed, and only were attractive under certain circumstances.
  • I found hundreds of blog posts proclaiming “30-150% increase in CTR!” for Authorship. Those all seemed to trace back to one article from two years ago that cited a 30% increase in CTR for rich snippet results in general. That post did not talk about Authorship specifically, nor did it make clear exactly how the 30% rise was determined.
  • Most of the other articles or “studies” purporting to show increased CTR from Authorship are based on one-off, anecdotal evidence. In other words, the authors implemented Authorship, and then saw more organic traffic to their sites. While interesting, such correlative claims at best may demonstrate a one-off accomplishment for that particular author for particular queries, but they do not prove that there was a general, or even universal, CTR boost.
  • Testing for actual CTR boost is probably impossible outside of access to Google’s own data. That’s because CTR is highly volatile by ranking position, and it is impossible to know if you’re comparing apples to apples. For a truly conclusive test, one would have to be able to randomly show the same result for the same query in an A/B split with half the results showing an author photo and half not. I don’t see any way for us to set up such a test.
  • In the Webmaster Central Hangout mentioned previously, John Mueller hinted strongly that whatever CTR boost there may have been, Google has seen it wear away over the past couple years. He mused that it is likely people became more used to seeing author photos in search over time, and so they had less impact and drawing power. If Google sees a feature not having much effect, it is natural that they would remove it.
  • Unfortunately, the Author Stats feature in Google Webmaster Tools is no help in evaluating the CTR of author photo results vs. post-author-photo results. Before June 28, it showed me hundreds of pieces of content appearing in search as Authorship snippets. Since June 28, only one result shows, and that is for a Search Engine Land article I wrote that made it into Google News results, where author photos can still show. Apparently the Author Stats tool was measuring only results with author photos.

(Image: the Author Stats report in Google Webmaster Tools)

All that is not to say there was never any rise in CTR for any Authorship posts. But it is to say that we never really knew for sure, and we never knew how much. Most importantly, there was never any proof that any CTR boost was universal. That is, there was no reason to assume that just because your results got an author photo, they were automatically getting a CTR boost.

So Does Google Authorship Still Matter?

In a word, yes. If Google had actually lost its enthusiasm for and commitment to author identity as a future, important aspect of search, then this would have been the time to pull the band-aid all the way off, rather than just removing photos. But, in fact, Authorship still works in search.

Let me conclude with some reasons why I think Authorship still has value, and that author authority is still a major priority for Google search.

1. Authors still matter. The bylines are an indication that Google still cares who created a piece of content, and thinks that is significant and useful information for searchers. Every pixel of a search result is very valuable real estate. Google realizes that, and is still willing to give up some of that territory to an author’s name.

2. Bylines are not invisible. Sure, no one believes that a byline captures the eye of someone viewing a search page to the same degree that a face photo probably did, but it does not follow that bylines are without value. More and more SEOs are advising their clients to optimize the meta descriptions for their pages. Why? Not because they are a ranking factor (they are not), but because they can have a significant effect on “selling” the searcher on clicking that result.

We’re used to hearing that the number one result for a given query usually gets the most clicks by far. But it doesn’t get all the clicks, and on some queries the top result may not be as attractive as on others. If we all believed the top result was always the best, wouldn’t we just click the “I’m Feeling Lucky” button on Google’s home page?

The truth is that when the title of the top result doesn’t immediately grab the searcher as a sure thing to fulfill her search need, she will begin looking for other clues in the other results. Among those will be the descriptive text under the results. When an author’s name appears there, it may move the searcher to think the result is more reliable (written by a “real person”). And if that person is someone already known to and trusted by the searcher, the value goes up significantly.

3. Author and brand images now in personalized search. While limited in appearance, the fact that Google now will sometimes show an author photo or a brand image for Google+ content in personalized search indicates that they have not at all abandoned the idea that such image results can have value. It may be that they see that such highly-personalized recommendations have real value to searchers. It makes sense that if I regularly engage with Rand Fishkin on Google+, I will be more likely to value his content when I do a logged-in search with a relevant query.

This may have implications for the future of author authority in search in general. It is conceivable that, even if Google does implement and expand it for content beyond Google+ posts, it will remain highly personalized. In other words, Google may decide that it is most reliable to boost authors with whom you already have some affinity.

4. Authorship still builds your author rank database with Google. Using Authorship markup on your best content is still the clearest way to let Google see what you create and how people respond to it. You can be sure that Google has been tracking such data all along, and will continue to do so. Even if author authority is still not a ranking factor (outside of personalized search, and some search features such as In-Depth Articles), it likely will be someday. When that day comes, if Google has a clear history of your growth as a trusted author in your field, you may have a competitive advantage.

5. Google remains committed to author authority as a search factor. As recently as SMX Advanced in May, just a few weeks before the announcement of the end of author photos, Google’s Matt Cutts reiterated his enthusiasm for author authority, while noting that it was a difficult and long-term project. For a transcript of his remarks, see my post here. Google understands that people are wired to trust other people long before they trust “brands” or websites.

About MarkTraphagen — Mark Traphagen is responsible for strategic planning and implementation of the online branding, promotion, and reputation of Stone Temple Consulting, as well as specialized consulting with selected Stone Temple clients.

Does SEO Boil Down to Site Crawlability and Content Quality?

We all know that keywords and links alone no longer cut it as a holistic SEO strategy. But there are still plenty of people outside our field who try to “boil SEO down” to a naively simplistic practice – one that isn’t representative of what SEOs need to do to succeed. In today’s Whiteboard Friday, Rand champions the art and science of SEO and offers insight into how very broad the field really is.

For reference, here’s a still of this week’s whiteboard!

Video Transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week I’m going to try and tackle a question that, if you’re in the SEO world, you probably have heard many, many times from those outside of the SEO world.

I thought a recent question on Quora phrased it perfectly, and it had actually been seen by quite a few people: Does SEO boil down to making a site easily crawlable and consistently creating good, relevant content?

Oh, well, yeah, that’s basically all there is to it. I mean why do we even film hundreds of Whiteboard Fridays?

In all seriousness, this is a fair question, and I can empathize with the people asking it, because when I look at a new practice, I think when all of us do, we try and boil it down to its basic parts. We say, “Well, I suppose that the field of advertising is just about finding the right audience and then finding the ads that you can afford that are going to reach that target audience, and then making ads that people actually pay attention to.”

Well, yes and no. The advertising field is, in fact, incredibly complex. There are dramatic numbers of inputs that go into it.

You could do this with field after field after field. Oh, well, building a car must just mean X. Or being a photographer must just mean Y.

These things are never true. There’s always complexity underneath there. But I understand why this happens.

We have these two things. In fact, more often, I at least hear the addition of keyword research in there, that being a crawl-friendly website, having good, relevant content, and doing your keyword research and targeting, that’s all SEO is. Right? The answer is no.

This is table stakes. This is what you have to do in order to even attempt to do SEO, in order to attempt to be in the rankings to potentially get search traffic that will drive valuable visits to your website. Table stakes is very different from the art and science of the practice. That’s because good, relevant content is rarely, if ever, good enough to rank competitively, and because being crawl friendly is necessary but won’t improve any rankings on its own. It’s not going to help you in the competitive sense. You could be extremely crawl friendly and rank on page ten for many, many search terms. That would do nothing for your SEO and drive no traffic whatsoever.

Keyword research and targeting are also required certainly, but so too is ongoing maintenance of these things. This is not a fire and forget strategy in any sense of the word. You need to be tracking those rankings and knowing which search terms and which pages, now that “not provided” exists, are actually driving valuable visits to your site. You’ve got to be identifying new terms as those come out, seeing where your competition is beating you out and what they’ve done. This is an ongoing practice.

It’s the case that you might say, “Okay, all right. So I really need to create remarkable content.” Well, okay, yes, content that’s remarkable helps. It does help you in SEO, but only if that remarkability also yields a high likelihood of engagement and sharing.

If your remarkability is that you’ve produced something wonderful that is incredibly fascinating, but no one particularly cares about, they don’t find it especially more useful, or they do find it more useful, but they’re not interested in sharing it, no one is going to help amplify that content in any way—privately, one to one, through email, or directing people to your website, or linking to you, or sharing socially. There’s no amplification. The media won’t pick it up. Now you’ve kind of lost. You may have remarkable content, but it is not the kind of remarkable that performs well for SEO.

The reason is that links are still a massive, massive input into rankings. So anything—this word is going to be important, I’m going to revisit it—anything that promotes or inhibits link growth helps or hurts SEO. This makes good sense when you think about it.

But SEO, of course, is a competitive practice. You can’t fire and forget as we talked about. Your competition is always going to be seeking to catch up to you or to one up you. If you’re not racing ahead at the right trajectory, someone will catch you. This is the law of SEO, and it’s been seen over and over and over again by thousands and thousands of companies who’ve entered the field.

Okay, I realize this is hard to read. We talked about SEO being anything that impacts potential links. But SEO is really any input that engines use to rank pages. Any input that engines use to rank pages goes into the SEO bucket, and anything that people or technology does to influence those ranking elements is what the practice of SEO is about.

That’s why this field is so huge. That’s why SEO is neuropsychology. SEO is conversion rate optimization. SEO is social media. SEO is user experience and design. SEO is branding. SEO is analytics. SEO is product. SEO is advertising. SEO is public relations. The fill-in-the-blank is SEO if that blank is anything that affects any input directly or indirectly.

This is why this is a huge field. This is why SEO is so complex and so challenging. This is also why, unfortunately, when people try to boil SEO down and put us into a little bucket, it doesn’t work. It doesn’t work, and it defeats the practice. It defeats the investments, and it works against all the things that we are working toward in order to help SEO.

When someone on your team or on the client side says to you, “Hey, you’re doing SEO. Why are you telling us how to manage our Facebook page?

Why are you telling us who to talk to in the media? Why are you telling us what changes to make to our branding campaigns or our advertising?” This is why. I hope maybe you’ll send them this video, maybe you’ll draw them this diagram, maybe you’ll be able to explain it a little more clearly and quickly.

Everybody Needs Local SEO

If you work in the SEO industry, you need to understand how to do Local SEO. Seriously… I’m not kidding here… If you’re sitting there thinking “Um, no… not really” – then you’re exactly the person I’m writing this post for.

If you haven’t already, I can pretty much guarantee you that at some point in your SEO career, you’re going to do some SEO for a business that has a physical storefront. BOOM – that means Local SEO. Sure, you’ve still got to do all the traditional SEO things that you do every day for all your clients, but when you’re talking about a physical location, Local SEO is absolutely necessary.

If you’re thinking “But Greg – If I do all the SEO stuff I’m supposed to do, I’ll still get the site to rank organically…” – you still aren’t getting it. If you add some Local SEO to the mix, you can show up in organic results AND the map pack (clients love that, so you should too). Plus, showing up in the map pack or the Local Carousel is incredibly important when a business is trying to pull in customers from the immediate area. Also, the map pack results show up ABOVE the organic results on mobile, and we all know that mobile is blowing up.

So if you’ve never paid any attention to Local SEO, it’s time to start lifting, bro. I’m going to give you a simple workout plan to help you beef up your Local SEO muscles, and with a little practice, you’ll be playing with the big boys in no time.

You should already know how to optimize a website, and if you don’t, there are a ton of awesome posts here on Moz. When you’re working on your optimizations, there are some important elements that you need to concentrate on for Local SEO. These elements are extremely important on your landing pages for your Google Plus Local listings (more commonly known now as “Google My Business Places Plus Local For Business”). If your business has multiple locations, you should have a unique location landing page for each Google Plus Local listing. If you’re dealing with a single location, then we’re talking about your home page – but these elements should also be locally optimized on product and services pages. (There’s a markup sketch right after the list below.)

  1. City and state in the title tag. Arguably one of the most important places to include city/state information. We’ve seen many small businesses jump up in local rankings from this alone.
  2. City and state in H1 heading. Hold on, don’t interrupt. I know it doesn’t HAVE to be an H1 heading… So whatever heading you’ve got on the page, it’s important to also have your city/state info included.
  3. City and state in URL. Obviously, this can’t happen on your home page, but on other pages, including the city/state info in the URL can be a powerful signal of local relevance.
  4. City and state in content. Clearly, it’s important to include your city/state info in your content.
  5. City and state in alt tags. We see far too many local business sites that don’t even use alt text on their images. Make sure you’ve got alt text on all your images, and make sure that you’re including city/state info in your alt text.
  6. City and state in meta description. Yes, we all know that the meta description doesn’t play into the ranking algorithm… but including city/state info can really boost clickthrough rate for local search results.
  7. Include an embedded Google Map. Including an embedded Google Map is important too, but PLEASE make sure you do it correctly. You don’t want to just embed a map that points to your address… You want to embed a map that points to your actual Google Plus Local listing.
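
Pulling those elements together, here’s a bare-bones sketch of a locally optimized location page. The business, city, and URLs are all invented for illustration, and the map embed should be generated from your actual Google Plus Local listing rather than a plain address search:

    <head>
      <title>Plumbing Services in Plano, TX | Ace Plumbing</title>
      <meta name="description" content="Ace Plumbing provides 24-hour plumbing repair in Plano, TX. Family owned and operated for 20 years.">
    </head>
    <body>
      <!-- the page URL itself might be /plumbing-services-plano-tx/ -->
      <h1>Plumbing Services in Plano, TX</h1>
      <img src="storefront.jpg" alt="Ace Plumbing storefront in Plano, TX">
      <!-- embed taken from the Google Plus Local listing, not a bare address pin -->
      <iframe src="https://maps.google.com/maps?cid=YOUR_LISTING_ID&output=embed"></iframe>
    </body>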

Most of the Local SEOs who really live and breathe local agree that citations aren’t the amazing powerful weapon that they used to be… but that doesn’t mean they’re not still incredibly important. If you don’t know what a citation is, it’s commonly referred to as NAP information in Local SEO circles – Name, Address, and Phone number. Google expects local businesses to have their NAP information on certain other websites (Yelp, social media sites, etc.), so if you don’t have citations on the important sites, or your citation information is incorrect, it can really hurt how your business is ranking.

While they’re not the silver bullet for rankings that they used to be, they’re still an important signal for local relevancy. Here’s my favorite example… We were hired to do the SEO for a car dealership just outside of New Orleans last fall. The dealer spent tons of money on radio and TV ads and was very well known in the local area, but he didn’t understand why he wasn’t showing up in local searches.

Within about 30 seconds of looking at his site, we knew exactly what the problem was. The correct spelling of his dealership name is “Deal’N Doug’s Autoplex” – but he had his own business name misspelled five different ways on his home page alone:

  • Dean’N Dougs Autoplex
  • Deal’ N Doug’s Autoplex
  • Deal’N Doug’s Auto Plex
  • Dealn Dougs Autoplex
  • Deal n Dougs Autoplex

We did a quick citation evaluation, and sure enough, he had all of those misspelled names floating around in different citations. He also had several citations for “Dealin’ Doug’s Autoplex” – which is grammatically how you’d expect it to be spelled.

We figured that we had the perfect opportunity for a citation experiment. All we did during the first month of work was NAP cleanup. We corrected the business name everywhere on his site, and we made sure to manually update all of the citations that were misspelled.

In just a few weeks, he went from not ranking at all to ranking in the top spot in the map pack. When the local algorithm went through the big shakeup last October, he retained the #1 map ranking and also gained a #2 organic spot. Yes, we did a lot more optimization for him after that first month, but cleaning up the name information was enough to get him to rank #1 in his city.

Working on citations can be tedious, but it’s well worth the effort. There are tons of submission services out there, but we prefer to do everything manually, so we know 100% for sure that things are done correctly. Here’s our citation campaign workflow:

  1. Run an initial check with Moz Local. No, I wasn’t paid to say that (but if Moz wants to hook me up with some extra bacon at MozCon to thank me, I wouldn’t turn it down… cough, cough). We start with a quick check on Moz Local to see the current status of a client’s citations. It’s a great way to see a brief overview of how their NAP information is distributed online.
  2. Fix any issues found in Moz Local. It’s got all those handy links, why not use them? If there are missing citations, go get them. If you’ve got incomplete listings, follow the tips to update them.
  3. Run a citation search with Whitespark. Whitespark’s Local Citation Finder is awesome (it’s our favorite citation tool). You need to run two reports: one to check your current citations, and another to find citation opportunities. Whitespark is simply the best around for finding citation opportunities.
  4. Set up a campaign in BrightLocal. Yes, it’s a bit redundant to use BrightLocal and Whitespark at the same time… but we really love their interface. You get three tabs of info: active citations, pending citations, and potential citations. On each citation, you can enter specific notes, which really helps you keep track of your efforts over time. When you add new citations from your Whitespark list, you can add them to your “pending citations” tab. When you re-run the report later, any pending citations that have become active will move over into the active list.
  5. Keep pumping reps. Over time, you’ll add more citations, but you should always use Whitespark to check for new opportunities AND any incorrect NAP info that might appear. Keep your notes in BrightLocal so you can keep everything straight.

Reviews are an integral part of Local SEO, but they’re also vital for local clickthroughs. Now that Google displays reviews in an isolated popup (instead of taking you to the location’s Google Plus Local page), users will read your reviews before they see any other information about your business.

Our process is simple, but it works well. Here’s how to get more positive reviews for any business:

    1. Set up a review page on your site. We always set up a page at domain.com/reviews for every client. It’s easy for employees to remember, and it’s a simple URL to tell customers about. You don’t want to ask for reviews and then expect that your customers will be able to search for you on Google, navigate to your Google Plus Local page, and find the right link to click to leave a review.

      Include simple instructions for leaving a review on the page, along with a direct link to the location’s Google Plus Local page. It’s also helpful to let customers know that they’ll need a Google account to leave a review (and instructions for setting up a Google account if they don’t have one). You should always focus on Google reviews until a business gets at least 10 reviews. Once you’ve got 10 reviews on Google, you can offer other options and let customers choose the review site that they’re most comfortable with.

      PRO TIP: For Google reviews, include this string at the end of your Google Plus Local link:  ?hl=en&review=1
      Now, when customers click the link, the review window will automatically pop up when they land on your Google Plus Local page (so they don’t have to find the link!).
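      For example, a finished link might look like this (the Plus page URL here is just a placeholder; swap in your own location’s URL): https://plus.google.com/112233445566778899000/about?hl=en&review=1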

  2. Create a review handout. There are several review handout generators out there online, but in our experience, most of them are a bit too complicated. Instead of showing a flowchart on the handout or giving customers several options for review sites, our review handouts simply point customers to the domain.com/reviews page that we set up.

    This allows us to create a really nice branded postcard to hand out, and regardless of our review strategy, the card never changes. 

  3. Hand the card to every customer and ASK. You can’t just hand the card over; you have to ask your customers to leave reviews. We encourage our clients to hand over the card at the last possible moment of customer interaction, so the request and the card are fresh on a customer’s mind when they leave. Don’t offer incentives to leave reviews; just be honest and let your customers know that you’d truly like to hear their honest opinion about their experience.

Even if your client has a ton of customers, make sure they understand that they won’t get a lot of reviews. We tell our clients that 1 review a month is a perfectly acceptable pace. A steady stream of reviews over time is much more important than a quick influx.

There you have it! If you follow these simple Local SEO workout tips, you’ll build your Local SEO muscle in no time. You’ll be able to provide better results to your clients, which means they’ll be happier… and happier clients mean more long-term business. Everyone wins!

About Greg_Gifford — I read “Internet Marketing for Dummies” (ok, I skimmed it) and thought SEO would be a fun gig… All kidding aside, I’m a Local SEO geek… I work for an automotive software company, and my department provides hyper-local SEO and managed social media to car dealers all over the country. I also do quite a bit of freelancing in my free time for other industries. I’m a giant movie nerd, and probably have an obscure movie quote for just about any situation. I’m also a big foodie, so find me at the next SEO conference and I’ll take you to the most amazeballs restaurants in town.

Setting Up 4 Key Customer Loyalty Metrics in Google Analytics

Customer loyalty is one of the strongest assets a business can have, and one that any business can aim to improve. However, improvement requires iteration and testing, and iteration and testing require measurement.

Traditionally, customer loyalty has been measured using customer surveys. The Net Promoter Score, for example, is based on a single question, answered on a scale of one to ten: “How likely is it that you would recommend our company/product/service to a friend or colleague?” Regularly monitoring metrics like this with any accuracy is going to get expensive (and/or annoying to customers), and is never going to be hugely meaningful, as advocacy is only one dimension of customer loyalty. Even with a wider range of questions, there’s also some risk that you end up tracking what your customers claim about their loyalty rather than their actual loyalty, although you might expect the two to be strongly correlated.

Common mistakes

Google Analytics and other similar platforms collect data that could give you more meaningful metrics for free. However, they don’t always make them completely obvious – before writing this post, I checked to be sure there weren’t any very similar ones already published, and I found some fairly dubious recurring recommendations. The most common of these was using % of return visitors as a sole or primary metric for customer loyalty. If the percentage of visitors to your site who are return visitors drops, there are plenty of reasons that could be behind that besides a drop in loyalty—a large number of new visitors from a successful marketing campaign, for example. Similarly, if the absolute number of return visitors rises, this could be as easily caused by an increase in general traffic levels as by an increase in the loyalty of existing customers.

Visitor frequency is another easily misinterpreted metric; infrequent visits do not always indicate a lack of loyalty. If you were a loyal Mercedes customer, and never bought any car that wasn’t a new Mercedes, you wouldn’t necessarily visit their website on a weekly basis, and someone who did wouldn’t necessarily be a more loyal customer than you.

The metrics

Rather than starting with the metrics Google Analytics shows us and deciding what they mean about customer loyalty (or anything else), a better approach is to decide what metrics you want, then work out how to replicate them in Google Analytics.

To measure the various dimensions of (online) customer loyalty well, I felt the following metrics would make the most sense:

  • Proportion of visitors who want to hear more
  • Proportion of visitors who advocate
  • Proportion of visitors who return
  • Proportion of macro-converters who convert again

Note that a couple of these may not be what they initially seem. If your registration process contains an awkwardly worded checkbox for email signup, for example, it’s not a good measure of whether people want to hear more. Similarly, “proportion of visitors who return” is not the same as “proportion of visitors who are return visitors.”

1. Proportion of visitors who want to hear more

This is probably the simplest of the above metrics, especially if you’re already tracking newsletter signups as a micro-conversion. If you’re not, you probably should be, so see Google’s guidelines for event tracking using the analytics.js tracking snippet or Google Tag Manager, and set your new event as a goal in Google Analytics.
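If it helps, here’s a minimal sketch of what that event might look like with analytics.js (the category, action, and label values are placeholders; use whatever naming scheme fits your site):

ga('send', 'event', 'newsletter', 'signup', 'blog-footer-form');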

2. Proportion of visitors who advocate

It’s never possible to track every public or private recommendation, but there are two main ways that customer advocacy can be measured in Google Analytics: social referrals and social interactions. Social referrals may be polluted as a customer loyalty metric by existing campaigns, but these can be segmented out if properly tracked, leaving the social acquisition channel measuring only organic referrals.

Social interactions can also be tracked in Google Analytics, although surprisingly, with the exception of Google+, tracking them does require additional code on your site. Again, this is probably worth tracking anyway, so if you aren’t already doing so, see Google’s guidelines for analytics.js tracking snippets, or this excellent post for Google Tag Manager analytics implementations.
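For reference, a social interaction hit in analytics.js looks something like the line below, fired from the relevant button’s callback (the network, action, and target values are placeholders):

ga('send', 'social', 'Twitter', 'tweet', 'https://example.com/your-post');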

3. Proportion of visitors who return

As mentioned above, this isn’t the same as the proportion of visitors who are return visitors. Fortunately, Google Analytics does give us a feature to measure this.

Even though date of first session isn’t available as a dimension in reports, it can be used as a criterion for custom segments. This allows us to start building a data set for how many visitors who made their first visit in a given period have returned since.

There are a couple of caveats. First, we need to pick a sensible time period based on our frequency and recency data. Second, this data obviously takes a while to produce; I can’t tell how many of this month’s new visitors will make further visits at some point in the future.

In Distilled’s case, I chose 3 months as a sensible period within which I would expect the vast majority of loyal customers to visit the site at least once. Unfortunately, due to the 90-day limit on time periods for this segment, this required adding together the totals for two shorter periods. I was then able to compare the number of new visitors in each month with how many of those new visitors showed up again in the subsequent 3 months:

As ever with data analysis, the headline figure doesn’t tell the story. Instead, it’s something we should seek to explain. Looking at the above graph, it would be easy to conclude “Distilled’s customer loyalty has bombed recently; they suck.” However, the fluctuation in the above graph is mostly due to the enormous amount of organic traffic that’s been generated by Hannah’s excellent blog post 4 Types of Content Every Site Needs.

Although many new visitors who discovered the Distilled site through this blog post have returned since, the return rate is unsurprisingly lower than some of the most business-orientated pages on the site. This isn’t a bad thing—it’s what you’d expect from top-of-funnel content like blog posts—but it’s a good example of why it’s worth keeping an eye out for this sort of thing if you want to analyse these metrics. If I wanted to dig a little deeper, I might start by segmenting this data to get a more controlled view of how new visitors are reacting to Distilled’s site over time.

4. Proportion of macro-converters who convert again

While a standard Google Analytics implementation does allow you to view how many users have made multiple purchases, it doesn’t allow you to see how those purchases fell across their sessions. Similarly, you can see how many users have had two sessions and two goal conversions, but not whether those conversions took place in different visits; it’s entirely possible that some had one accidental visit that bounced, and one visit with two different conversions (note that you cannot perform the same conversion twice in one session).

It would be possible to create custom dimensions for first (and/or second, third, etc.) purchase dates using internal data, but this is a complex and site-specific implementation. Unfortunately, for the time being, I know of no good way of documenting user conversion patterns over multiple sessions using only Google Analytics, despite the fact that it collects all the data required to do this.

Contribute

These are only my favourite customer loyalty metrics. If you have any that you’re already tracking or are unsure how to track, please explain in the comments below.

About Author — Tom Capper, Analyst at Distilled London, specialising in web analytics.

One Content Metric to Rule Them All

Let’s face it: Measuring, analyzing, and reporting the success of content marketing is hard.

Not only that, but we’re all busy. In its latest report on B2B trends, the Content Marketing Institute quantified some of the greatest challenges faced by today’s content marketers, and a whopping 69% of companies cited a lack of time. We spend enough of our time sourcing, editing, and publishing the content, and anyone who has ever managed an editorial calendar knows that fires are constantly in need of dousing. With so little extra time on our hands, the last thing content marketers want to do is sift through a heaping pile of data that looks something like this:

Sometimes we want to dig into granular data. If a post does exceptionally well on Twitter, but just so-so everywhere else, that’s noteworthy. But when we look at individual metrics, it’s far too easy to read into them in all the wrong ways.

Here at Moz, it’s quite easy to think that a post isn’t doing well when it doesn’t have a bunch of thumbs up, or to think that we’ve made a horrible mistake when a post gets several thumbs down. The truth is, though, that we can’t simply equate metrics like thumbs to success. In fact, our most thumbed-down post in the last two years was one in which Carson Ward essentially predicted the recent demise of spammy guest blogging.

We need a solution. We need something that’s easy to track at a glance, but doesn’t lose the forest for the trees. We need a way to quickly sift through the noise and figure out which pieces of content were really successful, and which didn’t go over nearly as well. We need something that looks more like this:

This post walks through how we combined our content metrics for the Moz Blog into a single, easy-to-digest score, and better yet, almost completely automated it.

What it is not

It is not an absolute score. Creating an absolute score, while the math would be equally easy, simply wouldn’t be worthwhile. Companies that are just beginning their content marketing efforts would consistently score in the single digits, and it isn’t fair to compare a multi-million dollar push from a giant corporation to a best effort from a very small company. This metric isn’t meant to compare one organization’s efforts with any other; it’s meant to be used inside of a single organization.

What it is and what it measures

The One Metric is a single score that tells you how successful a piece of content was by comparing it to the average performance of the content that came before it. We made it by combining several other metrics, or “ingredients,” that fall into three equally weighted categories:

  1. Google Analytics
  2. On-page (in-house) metrics
  3. Social metrics

It would never do to simply smash all these metrics together, as the larger numbers would inherently carry more weight. In other words, we cannot simply take the average of 10,000 visits and 200 Facebook likes, as Facebook would be weighted far more heavily—moving from 200 to 201 likes would be an increase of 0.5%, and moving from 10,000 to 10,001 visits would be an increase of 0.01%. To ensure every one of the ingredients is weighted equally, we compare them to our expectations of them individually.

Let’s take a simple example using only one ingredient. If we wanted to get a sense for how well a particular post did on Twitter, we could obviously look at the number of tweets that link to it. But what does that number actually mean? How successful is a post that earns 100 tweets? 500? 2,000? In order to make sense of it, we use past performance. We take everything we’ve posted over the last two months, and find the average number of tweets each of those posts got. (We chose two months; you can use more or less if that works better for you.) That’s our benchmark—our expectation for how many tweets our future posts will get. Then, if our next post gets more than that expected number, we can safely say that it did well by our own standards. The actual number of tweets doesn’t really matter in this sense—it’s about moving up and to the right, striving to continually improve our work.

Here’s a more visual representation of how that looks:
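To put rough numbers on it (these tweet counts are invented purely for illustration): if your past posts average 500 tweets each, a new post earning 460 tweets sits at 460 ÷ 500 = 92% of expectations, while one earning 365 tweets sits at 73%.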

Knowing a post did better or worse than expectations is quite valuable, but how much better or worse did it actually do? Did it barely miss the mark, or did it completely tank? It’s time to quantify.

It’s that percentage of the average (92% and 73% in the examples above) that we use to seed our One Metric. For any given ingredient, if we have 200% of the average, we have a post that did twice as well as normal. If we have 50%, we have a post that did half as well.

From there, we do the exact same thing for all the other ingredients we’d like to use, and then combine them:

This gives us a single metric that offers a quick overview of a post’s performance. In the above example, our overall performance came out to 113% of what we’d expect based on our average performance. We can say it outperformed expectations by 13%.

We don’t stop there, though. This percent of the average is quite useful… but we wanted this metric to be useful outside of our own minds. We wanted it to make sense to just about anyone who looked at it, so we needed a different scale. To that end, we took it one step farther and applied that percentage to a logarithmic scale, giving us a single two-digit score much like you see for Domain Authority and Page Authority.

If you’re curious, we used the following equation for our scale (though you should feel free to adjust that equation to create a scale more suitable for your needs):

y = 27 × ln(x) + 50, where y is the One Metric score, and x is the proportion of its expected performance a post actually received (so 113% becomes 1.13). Essentially, a post that exactly meets expectations receives a score of 50, since ln(1) = 0.

For the above example, an overall percentage of expectations that comes out to 113% translates as follows:
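Working that through with the equation above (x expressed as a proportion, so 113% becomes 1.13): 27 × ln(1.13) + 50 ≈ 3.3 + 50 ≈ 53.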

Of course, you won’t need to calculate the value by hand; that’ll be done automatically in a spreadsheet. Which is actually a great segue…

The whole goal here is to make things easy, so what we’re going for is a spreadsheet where all you have to do is “fill down” for each new piece of content as it’s created. About 10-15 seconds of work for each piece. Unfortunately, I can’t simply give you a ready-to-go template, as I don’t have access to your Google Analytics, and have no clue how your on-page metrics might be set up. 

As a result, this might look a little daunting at first.

Once you get things working once, though, all it takes is copying the formulas into new rows for new pieces of content; the metrics will be filled automatically. It’s well worth the initial effort.

Ready? Start here:

Make a copy of that document so you can make edits (File > Make a Copy), then follow the steps below to adjust that spreadsheet based on your own preferences.

  1. You’ll want to add or remove columns from that sheet to match the ingredients you’ll be using. Do you not have any on-page metrics like thumbs or comments? No problem—just delete them. Do you want to add Pinterest repins as an ingredient? Toss it in there. It’s your metric, so make it a combination of the things that matter to you.
  2. Get some content in there. Since the performance of each new piece of content is based on the performance of what came before it, you need to add the “what came before it.” If you’ve got access to a database for your organization (or know someone who does), that might be easiest. You can also create a new tab in that spreadsheet, then use the =IMPORTFEED function to automatically pull a list of content from your RSS feed (there’s a quick sketch of that just after this list).
  3. Populate the first row. You’ll use a variety of functionality within Google Spreadsheets to pull the data you need in from various places on the web, and I go through many of them below. This is the most time-consuming part of setting this up; don’t give up!
  4. Got your data successfully imported for the first row? Fill down. Make sure it’s importing the right data for the rest of your initial content.
  5. Calculate the percentage of expectations. Depending on how many ingredients you’re using, this equation can look mighty intimidating, but that’s really just a product of the spreadsheet smooshing it all onto one line. Here’s a prettier version:

    score = (1/3) × (GA ingredient ÷ its average)
          + (1/3) × average(each on-page ingredient ÷ its own average)
          + (1/3) × average(each social ingredient ÷ its own average)

    All this is doing (remember Step 2 above, where we combined the ingredients) is comparing each individual metric to past performance, and then weighting the categories appropriately.

    And, here’s what that looks like in plain text for our metric (yours may vary):

    =((1/3)*(E48/(average(E2:E47))))+((1/3)*((F48/(average(F2:F47)))+(G48/(average(G2:G47))))/2)+((1/3)*((H48/(average(H2:H47)))+(I48/(average(I2:I47)))+(J48/(average(J2:J47))))/3)

    Note that this equation goes from row 2 through row 47 because we had 46 pieces of content that served to create our “expectation.”

  6. Convert it to the One Metric score. This is a piece of cake. You can certainly use our logarithmic equation (referenced above): y = 27*ln(x) + 50, where x is the proportion of expectations you just finished calculating (1.13 for 113%). Or, if you feel comfortable adjusting that to suit your own needs, feel free to do that as well.
  7. You’re all set! Add more content, fill down, and repeat!
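As promised in step 2, here’s a minimal sketch of the =IMPORTFEED approach (the feed URL is a placeholder, and this particular query pulls the URLs of the last 50 items):

=IMPORTFEED("https://www.example.com/blog/feed", "items url", FALSE, 50)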

Here are more detailed instructions for pulling various types of data into the spreadsheet:

Adding new rows with IFTTT

If This Then That (IFTTT) makes it brilliantly easy to have your new posts automatically added to the spreadsheet where you track your One Metric. The one catch is that your posts need to have an RSS feed set up (more on that from FeedBurner). Sign up for a free IFTTT account if you don’t already have one, and then set up a recipe that adds a row to a Google Spreadsheet for every new post in the RSS feed.

When creating that recipe, make sure you include “Entry URL” as one of the fields that’s recorded in the spreadsheet; that’ll be necessary for pulling in the rest of the metrics for each post.

Also, IFTTT shortens URLs by default, which you’ll want to turn off, since the shortened URLs won’t mean anything to the APIs we’re using later. You can find that setting in your account preferences.

Pulling Google Analytics

One of the beautiful things about using a Google Spreadsheet for tracking this metric is the easy integration with Google Analytics. There’s an add-on for Google Spreadsheets that makes pulling in just about any metric a simple process. The only downside is that even after setting things up correctly, you’ll still need to manually refresh the data.

To get started, install the add-on. You’ll want to do so while using an account that has access to your Google Analytics.

Then, create a new report; you’ll find the option under “Add-ons > Google Analytics:”

Select the GA account info that contains the metrics you want to see, and choose the metrics you’d like to track. Put “Page” in the field for “Dimensions;” that’ll allow you to reference the resulting report by URL.

You can change the report’s configuration later on, and if you’d like extra help figuring out how to fiddle with it, check out Google’s documentation.

This will create (at least) two new tabs on your spreadsheet; one for Report Configuration, and one for each of the metrics you included when creating the report. On the Report Configuration tab, you’ll want to be sure you set the date range appropriately (I’d recommend setting the end date fairly far in the future, so you don’t have to go back and change it later). To make things run a bit quicker, I’d also recommend setting a filter for the section(s) of your site you’d like to evaluate. Last but not least, the default value for “Max Results” is 1,000, so if you have more pages than that, I’d change that, as well (the max value is 10,000).

Got it all set up? Run that puppy! Head to Add-ons > Google Analytics > Run Reports. Each time you return to this spreadsheet to update your info, you’ll want to click “Run Reports” again, to get the most up-to-date stats.

There’s one more step. Your data is now in a table on the wrong worksheet, so we need to pull it over using the VLOOKUP formula. Essentially, you’re telling the spreadsheet, “See that URL over there? Find it in the table on that report tab, and tell me what the number is next to it.” If you haven’t used VLOOKUP before, it’s well worth learning. There’s a fantastic explanation over at Search Engine Watch if you could use a primer (or a refresher).
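As a sketch, assuming your post URLs sit in column C and the report tab is named “Unique Pageviews” with URLs in column A and counts in column B, the formula might look like this:

=VLOOKUP(C2, 'Unique Pageviews'!A:B, 2, FALSE)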

Pulling in social metrics with scripts

This is a little trickier, as Google Spreadsheets doesn’t include a way to pull in social metrics, and that info isn’t included in GA. The solution? We create our own functions for the spreadsheet to use.

Relax; it’s not as hard as you’d think. =)

I’ll go over Facebook, Twitter, and Google Plus here, though the process would undoubtedly be similar for any other social network you’d like to measure.

We start in the script editor, which you’ll find under the tools menu:

If you’ve been there before, you’ll see a list of scripts you’ve already made; just click “Create a New Project.” If you’re new to Google Scripts, it’ll plop you into a blank project—you can just dismiss the popup window that tries to get you started.

Google Scripts organizes what you create into “projects,” and each project can contain multiple scripts. You’ll only need one project here—just call it something like “Social Metrics Scripts”—and then create a new script within that project for each of the social networks you’d like to include as an ingredient in your One Metric.

Once you have a blank script ready for each network, go through one by one, and paste the respective code below into the large box in the script editor (make sure to replace the default “myFunction” code).

function fbshares(url) {
  // Ask Facebook's (legacy) REST API for the link stats and return the total count
  var jsondata = UrlFetchApp.fetch("http://api.facebook.com/restserver.php?method=links.getStats&format=json&urls=" + url);
  var object = JSON.parse(jsondata.getContentText());
  Utilities.sleep(1000); // brief pause so bulk recalculations don't hammer the API
  return object[0].total_count;
}

function tweets(url) {
  // Ask Twitter's URL count endpoint how many tweets link to the URL
  var jsondata = UrlFetchApp.fetch("http://urls.api.twitter.com/1/urls/count.json?url=" + url);
  var object = JSON.parse(jsondata.getContentText());
  Utilities.sleep(1000); // brief pause so bulk recalculations don't hammer the API
  return object.count;
}

function plusones(url) {
  // Query the RPC endpoint Google's +1 button widget uses for the URL's +1 count
  var options = {
    "method" : "post",
    "contentType" : "application/json",
    "payload" : '{"method":"pos.plusones.get","id":"p","params":{"nolog":true,"id":"' + url + '","source":"widget","userId":"@viewer","groupId":"@self"},"jsonrpc":"2.0","key":"p","apiVersion":"v1"}'
  };
  var response = UrlFetchApp.fetch("https://clients6.google.com/rpc?key=AIzaSyCKSbrvQasunBoV16zDH9R33D88CeLr9gQ", options);
  var results = JSON.parse(response.getContentText());
  if (results.result != undefined) {
    return results.result.metadata.globalCounts.count;
  }
  return "Error";
}

Make sure you save these scripts—that isn’t automatic like it is with most Google applications. Done? You’ve now got the following functions at your disposal in Google Spreadsheets:

  • =fbshares(url)
  • =tweets(url)
  • =plusones(url)

The (url) in each of those cases is where you’ll point to the URL of the post you’re trying to analyze, which should be pulled in automatically by IFTTT. Voila! Social metrics.

Pulling on-page metrics

You may also have metrics built into your site that you’d like to use. For example, Moz has thumbs up on each post, and we also frequently see great discussions in our comments section, so we use both of those as success metrics for our blog. Those can usually be pulled in through one of the following two methods.

But first, obligatory note: Both of these methods involve scraping a page for information, which is obviously fine if you’re scraping your own site, but it’s against the ToS for many services out there (such as Google’s properties and Twitter), so be careful with how you use these.

=IMPORTXML

While getting it set up correctly can be a little tricky, this is an incredibly handy function, as it allows you to scrape a piece of information from a page using an XPath. As long as your metric is displayed somewhere on the URL for your piece of content, you can use this function to pull it into your spreadsheet.

Here’s how you format the function:
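As a sketch (both the URL and the XPath are placeholders; the XPath should point at whichever element holds your metric):

=IMPORTXML("https://example.com/blog/my-post", "//div[@class='thumbs-up-count']")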

If you’d like a full tutorial on XPaths (they’re quite useful), our friends at Distilled put together a really fantastic guide to using them for things just like this. It’s well worth a look. You can skip that for now, if you’d rather, as you can find the XPath for any given element pretty quickly with a tool built into Chrome.

Right-click on the metric you’d like to pull, and click on “Inspect element.”

That’ll pull up the developer tools console at the bottom of the window, and will highlight the line of code that corresponds to what you clicked. Right-click on that line of code, and you’ll have the option to “Copy XPath.” Have at it.

That’ll copy the XPath to your clipboard, which you can then paste into the function in Google Spreadsheets.

Richard Baxter of BuiltVisible created a wonderful guide to the IMPORTXML function a few years ago; it’s worth a look if you’d like more info.

Combining =INDEX with =IMPORTHTML

If your ingredient is housed in a <table> or a list (ordered or unordered) on your pages, this method might work just as well.

=IMPORTHTML simply plucks the information from a list or table on a given URL, and =INDEX pulls the value from a cell you specify within that table. Combining them creates a function something like this:
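As a sketch (the URL is a placeholder; this pulls the first table on the page and returns the value in its 15th row, 2nd column):

=INDEX(IMPORTHTML("https://example.com/blog/my-post", "table", 1), 15, 2)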

Note that without the INDEX function, the IMPORTHTML function will pull in the entire piece of content it’s given. So, if you have a 15-line table on your page and you import that using IMPORTHTML, you’ll get the entire table in 15 rows in your spreadsheet. INDEX is what restricts it to a single cell in that table. For more on this function, check out this quick tutorial.


Taking it to the next level

I’ve got a few ideas in the works for how to make this metric even better. 

Automatically check for outlier ingredients and flag them

One of the downsides of smooshing all of these ingredients together is missing out on the insights that individual metrics can offer. If one post did fantastically well on Facebook, for example, but ended up with a non-remarkable One Metric score, you might still want to know that it did really well on Facebook.

In the next iteration of the metric, my plan is to have the spreadsheet automatically calculate not only the average performance of past content, but also the standard deviation. Then, whenever a single piece differs by more than a couple of standard deviations (in either direction), that ingredient will get called out as an outlier for further review.
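In spreadsheet terms, the check could be as simple as the sketch below (assuming the new post’s metric sits in E48 with its history in E2:E47; the two-standard-deviation threshold is just a starting point):

=IF(ABS(E48 - AVERAGE(E2:E47)) > 2 * STDEV(E2:E47), "outlier", "")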

Break out the categories of ingredients

In the graphic above, the ingredients were combined into categories in order to calculate an overall average; it might help to monitor those individual categories, too. You might, then, have a spreadsheet that looked something like this:

Make the weight of each category adjustable based on current goals

As it stands, each of those three categories is given equal weight in coming up with our One Metric scores. If we broke the categories out, though, they could be weighted differently to reflect our company’s changing goals. For example, if increased brand awareness was a goal, we could apply a heavier weight to social metrics. If retention became more important, on-page metrics from the existing community could be weighted more heavily. That weighting would adapt the metric to be a truer representation of the content’s performance against current company goals.
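A sketch of how that might look in the spreadsheet (here, hypothetically, B1:B3 hold the three category weights, which should sum to 1, and K48:M48 hold each category’s percentage of expectations):

=($B$1 * K48) + ($B$2 * L48) + ($B$3 * M48)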



I hope this comes in as handy for everyone else’s analysis as it has for my own. If you have any questions and/or feedback, or any other interesting ways you think this metric could be used, I’d love to hear from you in the comments!

About Author — Trevor is the content strategist at Moz—a proud member of the content team. He manages the Moz Blog, helps craft and execute content strategy, and wrangles other projects in an effort to align Moz’s content with the company’s business objectives and to provide the most valuable experience possible for the Moz community.

Is Your Content Credible Enough to Share?

Insufficient credibility undermines digital marketing, particularly among SEOs who now produce or promote content as part of their job. People won’t share content that isn’t credible; they know the things they share reflect on them and impact their own credibility. While the importance of credibility gets mentioned in passing, little has been said about how to actually build it, until now.

Your Guide to Establishing Credibility

You build credibility by signaling to the reader that you can be trusted. The signals of trust can come from the author, the site, and from within the content itself. Each signal will appeal to different types of readers in different contexts, but they come together to make content that is credible enough to share.

Rand mentioned credibility in his Content Marketing Manifesto as one of the things we need to build familiarity, linkability, and trust. Several studies have also shown credibility’s critical role in promoting and sharing. So, let’s build some credibility.

1. Establish expert credibility

Expert credibility comes from having knowledge others do not. People want experts they can understand and trust, especially when trying to understand complex or ambiguous topics like new technology, engineering, advanced science, or law.

Be an expert or hire an expert with insight

A Syracuse University study found “insightful” content was most correlated with users’ estimation of a blog’s credibility. You can’t offer interesting insight on a subject you know very little about, so obviously you need to be an expert or hire one.

Unless your expert has breaking news, he or she needs to provide quality analysis and opinion to add any value. Most successful non-news content is opinion and/or analysis, whether verbal, graphical, or textual.

If you’re creating video or text content for your site, the expert should also be able to clearly express complex subjects in a way readers can understand and follow. If they can’t, get a content writer to interview the expert and relay the information.

Source experts

Do not try to give your opinion as an expert in a field where you’re not one. It won’t work.

We’ve all read non-expert content on subjects where we’re knowledgeable. We know what expertly-written content looks like and can easily detect pretenders. If you pretend to be an expert and get one little detail wrong, you’ll blow all your credibility with the people who actually understand and influence the discussion. They won’t link to or share that piece of content and they may never share any of your content again. Don’t take that risk.

Rather than trying to fake expertise, try finding experts and incorporating their expertise into your post. Journalists have long understood this tactic. Even journalists who are experts use quotations from other experts in both news and analysis pieces. The front page of the Washington Post’s technology print section is usually littered with quotation marks and according-tos.

People running blogs can easily get a quote from someone knowledgeable enough to have an opinion that matters. Experts with strong opinions usually want to share them.

Be passionate to build trust

The Syracuse University study and this University of Pennsylvania study show that passion is key to judgments on credibility and sharing. Readers don’t just want an expert who can explain things; they want an expert who cares.

Experts who know what they’re talking about tend to have nuanced and sophisticated opinions about subjects they understand. Don’t undercut that understanding with a shallow piece of content. Expert pieces should be deep and thoughtful.

Legal experts who really care about Citizens United vs. Federal Election Commission simply wouldn’t take the time to write a bland essay on what the ruling said and how it might impact the future of politics. SEO experts don’t want to report on the fact that Google penalized guest post networks. They care, and want to explain why it’s good or bad.

Expert opinion shouldn’t be confused with argument, and it doesn’t require you to start a firefight with anyone who’s taken the other stance.

Cite sources

Cite the sources for all your expert insight. Citing expert sources is the most obvious way to back up your claims and gain trust. Often citing a source is as simple as linking to the webpage from which you got your information.

Don’t use weasel words like, “it has been said,” or, “many people believe,” to skirt the citation responsibility. Experienced writers and editors instinctively close the tab on any content attempting to unnecessarily blur their sources.

Show data

Sometimes, instead of breaking news, you can add to it with data. Data lends credibility to your post in a unique way because with numbers, your sources and methodology are more important than the author’s history and popularity. The data, if it’s compiled and analyzed correctly, speaks for itself.

For example, when the CableTV team heard about the potential Comcast/Time Warner merger, we knew simply sharing the news would be a waste of time. Every major news outlet would easily drown out our site, and opinion pieces were popping up everywhere. Instead, we crunched some numbers, comparing U.S. Census data with coverage data, producing a coverage and population analysis people could see and understand. A few large news organizations used the data in ongoing analysis, Reddit’s founder (Alexis Ohanian) shared the post, and roughly 60,000 people ended up seeing it.

JavaScript libraries and HTML 5 tools are springing up everywhere to help non-technical users visualize data in interesting ways. Mapping examples include SimpleMaps (used in our post), MapBox, Google Fusion Tables, etc. Graphing and other data options are all over, but this site is a good place to start. Compile data in-between writing stories related to your niche with Census data or any of these data sources so you’re ready to go when news hits. For more, Kane Jamison always has tips on data-driven content marketing, including the presentation below:


2. Harness hierarchical credibility

Hierarchical or rank-based credibility comes from a person’s position or title. High-ranking members of an organization have a better chance of being taken seriously simply by nature of their perceived authority, especially when the organization is well-known.

Have important people write important things

People lend more credibility to an article written by an unknown CEO than a writer they don’t know—even if the writer knows more about the topic than the CEO. For better or worse, people are simply influenced by official job titles and standing within hierarchy.

Your definition of what’s important may vary. Almost everything on the popular 42floors blog is written by a founder, while CEOs of larger companies will probably have less time and less interest in regular blogging.

Use executives for guest posts

I know – I’m the guy who wrote that guest posting had gone too far. Google thought so too, based on its latest round of penalties. I believe, however, the lack of credibility and expertise in many guest articles was a major cause of Google’s (perhaps disproportionate) response to guest blogging networks.

Don’t waste an executive’s time on small unknown sites no one would ever read. Instead, consider pitching an article written by an executive or other well-known figure to well-known sites. Trulia is a good example with high-ranking members adding guest posts for Google, The Wall Street Journal, and interviewing with sites like Business Insider. Moz, of course, is another place to see founders adding posts and video frequently.

Better job titles

If you want your content to be shared, make your authors experts in both title and in truth. Changing titles purely for appearance’s sake may sound stupid, but titles like managing editor, [subject] correspondent, [subject expert], or even [subject] writer have more gravitas than a plain “author” or “contributor.” Think about what the title says to a person reading your content (or email). The flip side: writers should actually be subject-matter experts.

You should also re-think giving quirky titles to everyone, as they can hurt credibility. I can’t imagine the Wall Street Journal quoting a “digital ninja” or “marketing cowboy” in their story – unless that story is about job titles.

Leadership quotes

You can also make use of another person’s position to lend credibility to your content. This works especially well if you’re looking for insight into a recent news event. Quotes from company executives, government officials, and other high-title positions give you something unique and show you’re not just another blogger summarizing the news built on someone else’s journalism.

3. Seek referent credibility

When someone trustworthy shares something with positive sentiment, we immediately trust the shared item. The referrer lends his or her credibility to the referee. The Moz audience will have no problem understanding referent credibility, as it’s the primary method Google uses to prioritize content that seems equally relevant to a user query. People also rely on referent credibility to decide whether a post is worth sharing. Those referrals build more credibility, and viral content is born. How do you get some referent credibility to radiate onto your content?

Publish on credible sites

This post will receive some measure of credibility simply by being published on the main Moz blog. Anything on or linked to from well-known sites and authors receives referent credibility.

Share referrals and testimonials

You’ll commonly see “as featured on” lists or testimonials from recognizable personalities. Testimonials from anyone at Google or Microsoft with an impressive-sounding position could go a long way for a B2B product. Referent credibility is the reason celebrity endorsements work.

Leveraging referent credibility in a press push generally works well if your company is involved in something newsworthy. Consider requesting and using quotes from relevant and well-known people in press releases or even outreach emails if you’ve done something worth announcing.

Analysis pieces are a little trickier: pointing out past coverage can lend some credibility to a blog post or press release, but it can also look a little desperate if done incorrectly. High relevance and low frequency are key. A good offline analogy is that person who mentions that time they met a celebrity every chance they get, whether it’s relevant or not. Name-droppers are trying (too hard) to build credibility, but it’s actually just sad and annoying. The same celebrity encounter might actually generate interest and credibility if it’s relevant to the conversation and you haven’t told the story to death. Feel free to talk about times well-known people shared or endorsed you, but make sure it’s relevant and don’t overdo it.

Appeal to credible people

When a well-known person shares your content, more links and shares often follow. Find credible people, see what they talk about and share, and then try make something great that appeals to them. This idea has already been covered extensively here on Moz.

4. Take advantage of associative credibility

People make associations between one trait and another, creating a Halo effect. For example, several studies (1, 2, 3) have found that attractive people often receive higher pay and are seen as more intelligent, when in reality there is no correlation. Users do the same thing with websites, so making your website look and feel like other credible sites is important.

Use trusted design as a guide

Don’t run in and steal the Times’ CSS file. I’m pretty sure that’s illegal. It’s also probably not going to work unless you’re running a national multi-channel newspaper. But you should be aware that people associate design elements on a site with the credibility of the site. You can help or hinder your credibility through web design in hundreds of ways. Start by looking at legitimate sites and incorporating some of their design elements into your own. Then check out some untrustworthy and unknown sites to see the difference and determine what to avoid.

Obviously you want your site to be unique, but be carefully unique. If you stray from trusted convention, know why you’re doing it. Maybe you want to kill hamburger icons on mobile – just make sure you have a well-considered alternative.

When in doubt, test

Split tests tend to focus on conversion and sales, and too often the blog/news design gets dragged along for the ride. Given the importance of content and sharing on visibility, testing the impact of site elements on sharing should be as important as the tests we do on sales funnels.

You can test different design elements as they relate to sharing by creating posts and pages with a page-level variable and a canonical tag back to the original post. Always test URLs with variables against other URLs with variables to account for site owners manually removing them. This setup may also be useful for testing different content promotion channels and methods.

Tracking results externally requires a different URL. You may use a modified URL rather than a variable, but only for single-page tests. Note that results will be a little erratic with variables people might remove, but trends will still emerge.
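As a sketch, a test page served at a URL like example.com/blog/my-post/?v=b would carry a tag like this in its <head>, consolidating everything back to the original:

<link rel="canonical" href="https://example.com/blog/my-post/" />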

Consider your domain name

You have probably read a news article and wanted to share it, but then looked for a more reputable source to post to Reddit or Twitter.

Sometimes I’ll share content from a site I’ve never heard of, but usually I want the content I’m sharing to come from a site with a name that evokes trust. Everything in this article goes into a decision on whether to share, but domain name is a surprisingly large factor. When I post an article, I don’t want the first response or comment to be something snarky like, “Oh, according to goodbusinessnews4u.com – sounds legit.”

Domain will also impact click-through on social networks and social sharing sites. A couple years ago I wrote about choosing the right domain for your branding strategy, and I think its message still holds true.

Domain name will also influence what content seems appropriate. You don’t want people asking, “Why is highspeedinternet.com writing about cooking recipes?” Make sure content strategy aligns with your domain and branding strategy.

Write like a writer; build profiles

You must have credibility in your writing if you want your content to be shared. Follow these simple tips:

  • Write clearly, hire writers, or don’t waste your time on text content. Even a credible expert will have a hard time being trusted enough to share unless they write clearly with native-level grammar.
  • Build author profiles, use full names, and use author images. Nothing says, “I’m not proud of this” like a partial name without an image.
  • Build a full section about your company. Be as specific as possible, and avoid vague statements on the value your site adds.
  • Craft headlines that are easy to follow; avoid trick/cute headlines unless you have a great reason for tricking or confusing users about what the content will deliver.
  • Be consistent with surrounding articles. Jumbled topics and unrelated surrounding articles make sites look inconsistent.

Avoid clip art and stock images

Just ask Ian Lurie what he thinks about stock images. When I wrote “How Google’s Algorithm Silences Minority Opinions” I had the image in my head of Googlebot placing a gag on a user. Thankfully one of CLEARLINK’s talented designers had a better (and less aggressive) idea:

A Google logo would have been easy, but boring. The custom image added a strong visual to the argument, emphasizing key points: a computer algorithm silencing a person, the person not caring too much. It also sent the associative message to users that the site is legitimate enough to use unique images.

Most of us can’t get custom illustrations or photographs for each post, but you should consider it for high-investment pieces or pieces you think have a good shot at success.

Final thoughts

Unless you have inside information on a rumor or are willing to burn your credibility going forward, your content must project credibility. Smaller sites and lesser-known brands have the most to gain by understanding how users and journalists make judgments on credibility and working to improve those factors. You don’t necessarily need to employ every strategy and tactic, but the best coverage and links will always require a high level of credibility. 

About Author — Carson Ward is an online marketing manager at Clearlink, an enterprise digital marketing company.

Google Announces the End of Author Photos in Search: What You Should Know

Even so, it came as a surprise when John Mueller announced Google will soon drop authorship photos from most search results.

This one hits particularly hard, as I’m known as the guy who optimized his Google author photo. Along with many other SEOs, I constantly advise webmasters to connect their content writers with Google authorship. Up until now, would-be authors clamored to verify authorship, both for the potential of increased click-through rates, and also for greater brand visibility by introducing real people into search results.

As of today, the MozCast feature graph shows a 10% decline in traditional authorship snippets, almost overnight. We expect to see this roll out further over the next several days.

How are author photos changing?

The announcement means author photos in most Google search results are going away. John Mueller indicated the change will roll out globally over the next few days.

Up until now, if you verified your authorship through Google+, and Google chose to display it, you might have seen your author information displayed in Google search results. This included both your author photo and your Google circle count.

Going forward, Google plans to only display the author’s name in the search snippet, dropping the photo and the circle count.

Google News adds a different twist. 

In this case, Google’s plans show them adding a small author photo next to Google News snippets, in addition to a larger news photo snippet. 

At this time, we’re not sure how authorship in Google News will display in mobile results.

Why did Google drop author photos?

In his announcement, John Mueller said they were working to clean up the visual design of search results, and also to create a “better mobile experience and a more consistent design across devices.”

This makes sense in the way Google has embraced mobile-first design. Those photos take up a lot of real estate on small screens. 

On the other hand, it also leaves many webmasters scratching their heads as most seemed to enjoy the author photos and most of the web is moving towards a more visual experience.

John Mueller indicated that testing shows that “click-through behavior” with the new results is about the same, but we don’t know exactly what that means. One of the reasons authors like the photos in search results was the belief that a good photo could result in more clicks (although this was never a certainty). 

Will the new SERPs result in the same amount of clicks for authorship results? For now, it’s hard to say.

Critics argue that the one thing that will actually become more visible as a result of this change will be Google’s ads at the top and sides of the page.

What isn’t changing?

Despite this very drastic visual change in Google search results, several things are not changing:

1. Authorship is still here

As Mark Traphagen eloquently pointed out on Google+, the loss of photos does not mean Google authorship itself is going anywhere. 

“Google Authorship continues. Qualifying authors will still get a byline on search results, so Google hasn’t abandoned it.”

2. Authors’ names still appear in search results

In the new system, authors still get their name displayed in search results, which presumably links through to their Google+ profile. Will this be enough to sway searchers into clicking a link? Time will tell.

3. Your rankings don’t change

Authorship does not influence rankings for most search results (with exceptions for certain results, like in-depth articles). Sometimes the photo led to more clicks for some people, but the new change should not alter the order of results.

4. You must still verify authorship for enhanced snippets

Google isn’t changing the guidelines for establishing authorship. This can be accomplished either through email verification or linking your content to your Google+ profile, and adding a link back to your website from your Google+ contributor section.
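For the Google+ route, the byline link looks something like the sketch below (the profile ID is a placeholder), paired with a link to your site from the “Contributor to” section of that profile:

<a href="https://plus.google.com/112233445566778899000?rel=author">Your Name</a>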

Tracking your authorship CTR

If you have authorship set up, you can easily track changes to your click-through rate using Google Webmaster Tools. Navigate to Labs > Author Stats to see how many times your author information has appeared in search results, along with total number of clicks and average position.

In the example above, search results associated with my authorship receive around 50,000 impressions a day, with an average of 1,831 clicks, for an overall CTR of 3.6%.

If you track your CTR immediately before and after the Google authorship change (by adjusting the dates in Webmaster Tools), you can isolate any changes caused by the shakeup.

Keep in mind that CTR is highly determined by rank, or average position. Small fluctuations in rank can mean a large difference in the number of clicks each URL receives.

Is Google Authorship still worth it?

For many, scoring photos in search results was the only incentive people had to verify authorship. Whether or not it increased click-through rates, it was an ego boost, and it was great to show clients. With the photos gone, it’s likely fewer people will work to get verified.

Even with the photos gone, there is still ample reason to verify authorship, and I highly recommend you continue to do so. 

  • Even though a byline is much less visible than a photo, across the hundreds or thousands of search impressions you receive each day, those bylines can make a measurable difference in your traffic, and may improve your online visibility.
  • Google continues to work on promoting authoritative authors in search results, and authorship is one of the better ways for Google to establish “identity” on the web. Google continues to make statements explaining how important identity in content is, as explained by Matt Cutts both publicly and in this rarely seen interview.

Facing the future

If Google begins to incorporate more “Author Rank” signals into its search algorithm, establishing yourself as a trusted authority now could pay off big down the road. Disappearing author photos today may someday be replaced by actual higher rankings for credible authors, but there are no guarantees. 

At this point, it’s hard to say exactly where the future of authorship lies, especially given the unknown future of Google+ itself.

Personally, I will be sad to see author photos disappear. Let’s hope for something better down the road.

More from across the web:
Google Removes Author Photos From Search: Why And What Does It Mean?

About Author — Cyrus Shepard leads the Content and in-house SEO team at Moz. Follow him on Twitter and Google+. Read more posts by Cyrus.

Is Google Analytics Hurting your Business?


If you can make the business case for building a custom interface for Google Analytics, though, it might be worth asking yourself the question posed at the start of this post: “is Google Analytics really the best solution out there for me or can I justify investing in something else?” Take a couple of hours to explore the web analytics ecosystem and see if you can find a solution that would make it easier to deliver real, actionable insight.
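To make that less abstract, here is a minimal sketch of the “pull it yourself” approach: a query against the Core Reporting API’s REST endpoint that fetches a month of sessions broken down by medium, ready to feed into your own dashboard. The access token and view ID are hypothetical placeholders, and real code would handle the OAuth flow and errors properly.

```typescript
// A minimal sketch of pulling your own report out of the Google Analytics
// Core Reporting API (v3). ACCESS_TOKEN and VIEW_ID are hypothetical
// placeholders -- in practice the token comes from an OAuth 2.0 flow.
const ACCESS_TOKEN = "ya29.your-oauth-token";
const VIEW_ID = "ga:12345678";

async function fetchSessionsByMedium(): Promise<void> {
  const params = new URLSearchParams({
    ids: VIEW_ID,
    "start-date": "30daysAgo",
    "end-date": "yesterday",
    metrics: "ga:sessions",
    dimensions: "ga:medium",
    access_token: ACCESS_TOKEN,
  });
  const response = await fetch(
    `https://www.googleapis.com/analytics/v3/data/ga?${params}`
  );
  const report = await response.json();
  // Each row is a [medium, sessions] pair of strings -- feed these into
  // whatever charting library drives your custom dashboard.
  for (const [medium, sessions] of report.rows ?? []) {
    console.log(`${medium}: ${sessions} sessions`);
  }
}

fetchSessionsByMedium().catch(console.error);
```

Once the data is flowing into reports you control, swapping the data source later – whether that’s Google Analytics or something else – becomes a much smaller job.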

Just because everyone else uses it, is Google Analytics really the best?

I started the last section off with a challenge, so I’ll do the same here. Don’t worry, this will be a simple one with no trips off to Analytics. Ready? Define “the best”. Go!

OK, so that’s actually what a mathematician would define as “complex”: a question that’s easy to ask but difficult to answer. The reason it’s difficult to answer is twofold:

  1. This is probably the first time we’ve ever asked ourselves this question
  2. The answer depends hugely on context: who is asking questions of our data, why they want answers, who is going to do the analysis, and a whole range of other factors

The reason I asked the question is that if we can’t define what “the best” means, how can we say Google Analytics is the best solution?

There are some things it does brilliantly: tracking visitor flow, aggregating data over multiple pages and channels, letting us look at engagement. But there are some questions it simply cannot answer. For example, what would your reply be if your boss asked:

  • “The average time spent on this landing page is two minutes. Is that because they were reading the copy or because they were comparing our product to our competitors?”
  • “How well are the videos on our site engaging visitors?”
  • “People jump from their mobile, to their work PC, back to their mobile on the train home, then onto their home computer. How can we track this happening to get a real picture of cross-device behaviour?”
  • “What happens if people have cookies turned off?”

Hands up all those who said “ermmm”.

There are tools out there that can do these things:

  • Crazy Egg gives you heatmaps showing what proportion of people have scrolled down a page and how many have clicked links on a given page (I personally love Crazy Egg. No affiliation, they just make a great product).
  • Digital Analytix from comScore lets you track individuals across devices. Universal Analytics will bring in this behaviour to some extent, but only for people who sign in to their Google accounts while browsing.
  • While you could cobble together a video analysis using time on page, JavaScript events, and a pinch of salt, Digital Analytix gives you data on browser behaviour during video streaming.
  • Piwik is an open source (read “free and fully customisable”) analytics tool that can be configured to track without cookies, sidestepping the problem of being unable to track people who have turned cookies off (a sketch of that setup follows the screenshot below).

A screenshot from Crazy Egg used on the Optimizely blog. When a CRO tools company starts using a web analytics tool, it could be interesting to take a look. (Image credit: Crazy Egg)
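On the cookie question, the Piwik setup mentioned above is mostly its standard tracking snippet with one extra line. A sketch, assuming a hypothetical Piwik install at piwik.example.com with site ID 1, and a tracker version recent enough to support disableCookies:

```typescript
// Cookieless Piwik tracking: the standard snippet with disableCookies
// pushed before the first trackPageView. The host and site ID are
// hypothetical placeholders.
const _paq: unknown[][] = ((window as any)._paq = (window as any)._paq || []);

_paq.push(["disableCookies"]); // must come before trackPageView
_paq.push(["trackPageView"]);
_paq.push(["enableLinkTracking"]);
_paq.push(["setTrackerUrl", "https://piwik.example.com/piwik.php"]);
_paq.push(["setSiteId", "1"]);

// Load the tracker itself asynchronously.
const script = document.createElement("script");
script.async = true;
script.src = "https://piwik.example.com/piwik.js";
document.head.appendChild(script);
```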

For a lot of people those are pretty fundamental questions that can’t be answered. Some people know enough about JavaScript – or employ people who do – to set up event listeners and capture a portion of this data, and some people simply aren’t asking these questions. But think about whether Google Analytics has ever failed to answer a question, or whether you’ve left a question unasked because you knew it couldn’t be answered; if that has happened a few times, it might be a good time to head off and do that research into other providers.
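To show how far plain event listeners can take you, here is a hedged sketch answering the video question from earlier. It assumes the Universal Analytics (analytics.js) snippet is on the page – with the older ga.js snippet you would push _trackEvent calls onto _gaq instead – and the element ID is made up.

```typescript
// Sketch: measuring video engagement with plain event listeners and
// Analytics events. Assumes the analytics.js snippet already defines
// ga(); the #product-demo element ID is hypothetical.
declare const ga: (...args: unknown[]) => void;

const video = document.querySelector<HTMLVideoElement>("#product-demo");

if (video) {
  video.addEventListener("play", () => {
    ga("send", "event", "Video", "play", video.currentSrc);
  });
  video.addEventListener("ended", () => {
    ga("send", "event", "Video", "finished", video.currentSrc);
  });

  // A milestone event at 25% watched lets a custom report show drop-off
  // rather than just "time on page".
  let quartileSent = false;
  video.addEventListener("timeupdate", () => {
    if (!quartileSent && video.duration > 0 &&
        video.currentTime / video.duration >= 0.25) {
      quartileSent = true;
      ga("send", "event", "Video", "reached-25-percent", video.currentSrc);
    }
  });
}
```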

Anything free is amazing. But is Analytics really free?

Now I imagine that a lot of people reading that heading have straight away thought “of course it’s really free, we don’t give them a penny”. But think about this: in using Analytics you give Google all of the data. That gives them knowledge about you and your customers, and knowledge, as we all know, is power. So you might not be paying Google cash, but you are definitely helping them keep their position as one of the most powerful companies on the planet.

But more than that: if knowledge is power and power is money, then learning to gather and work with your own data is an investment that should pay a fair return one day. As Ammon said in his talk, “Using Google Analytics doesn’t make you good with data, just with Google Analytics”. If you just accept whatever Analytics pukes out at you, are you really asking the difficult questions that will help your business improve?

One last thought: the data that Google Analytics gets is yours for free anyway. It’s your information about people coming to your website and interacting with your services, not Google’s. Lots of companies are moving towards data warehouses now, keeping all of their information within their own domain instead of handing it to third parties. And if you have any concerns about privacy following the recent revelations about the NSA and GCHQ, those companies start to look pretty sensible.
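If the data-warehouse idea appeals, the collection end doesn’t have to be complicated. As a toy illustration – the /collect endpoint and the payload shape are entirely hypothetical – a first-party page-view beacon can be a handful of lines:

```typescript
// A first-party page-view beacon: raw visit data goes to a collector
// you host yourself, never leaving your domain. The /collect endpoint
// and PageView shape are hypothetical.
interface PageView {
  url: string;
  referrer: string;
  timestamp: string;
}

function recordPageView(): void {
  const payload: PageView = {
    url: location.href,
    referrer: document.referrer,
    timestamp: new Date().toISOString(),
  };
  // sendBeacon queues the request so it survives the user leaving the page.
  navigator.sendBeacon("/collect", JSON.stringify(payload));
}

recordPageView();
```

Everything downstream – storage, reporting, joining with your CRM data – then happens on your side of the fence.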

When is “Good Enough” good enough?

This was actually going to be the title of this post, but I don’t quite have Ammon’s nerve (and it’s a great topic for a project management post so has been filed away for later use).

As we’ve seen, Google Analytics is not the best solution out there. For some people it’s not even the best free solution out there. But what it is is “good enough”. It’s good enough to get some profound insights out of if you work with it and, like Excel, even better if you can build a custom dashboard. It’s good enough if you value those insights over privacy. It’s good enough if you can’t invest the time to learn a new tool that will give you similar insights. It’s good enough if you ask it the right sort of questions.

It might be for him, but is it for you? (Image credit: The Meme Wiki)

But – and it’s a big but – that might not be enough for you and your company. Do you work for a “data-driven organisation”? Do you want to ask hard questions, make big changes, and get big improvements as a result of the data in your hands? Do you want to stand out from all of the other companies and agencies out there who do analytics in the same way?

If “good enough” suits your needs, dismiss this post with a wave of the hand. But if you think you might need more than “good enough” in the future, or if you really want to be a data-driven decision maker, or if you believe that big changes will bring big results, I urge you to think about your choices. Explore the options out there; even if you go back to Google Analytics, you’ll return with more knowledge than you had before. And if you don’t go back, you can look forward to a long, exciting, and rewarding journey.

About Author – Benjamin Morel is an agency-based digital marketer and project manager working for Obergine in Oxford, UK. He is passionate about inbound marketing, especially building strategies centred around data and communication. Follow him on Twitter @BenjaminMorel or on Google+.