Battle for conversion

When you have managed to attract traffic to your website, the next problem arises: how to lead visitors to the goals you want them to complete – in other words, how to convert inbound traffic into sales.

In this article you will find a review of several tools that can be used to increase conversion and make users happy :)

The process of improving conversion consists of several stages. Each stage is equally important, as the information obtained at one stage feeds into the next.

It’s not difficult to increase conversion

You just need to:

  • understand the website’s goals.
    Website goals are of at least two types – transitional and final. What a final goal means is clear: for an online store it could be a purchase, for Groupon a sign-up, and so forth. But on the path to this goal there may also be transitional goals.
    Important: your goals and your users’ goals are not always the same. A fancy design and a simple purchase process are not the same thing. Think like a user, even an inexperienced one.
    For example, some users want to fly to Thailand while others dream of Paris or Barcelona – they search for tickets many times and compare hotels, but make a purchase only after several months. In this case the path from the transitional goal to the final one is really long, and this whole chain should be tracked.
  • define entry points and how they differ.
    Users who arrive from organic search and frequent buyers may need different information from the service.
  • consider all possible paths from an entry point to the goal.
    “But that’s impossible!” you may say. That is why you need to identify the most important paths and make them as easy as possible.

  • be flexible, monitor and experiment.
    The most interesting part: at this stage the first three steps are repeated in a loop while you collect information and study the results.

 

Tools review: how to track and adjust the process

So let’s say you are analyzing a website with information about air tickets. The main goals are: on-site search, newsletter sign-up and, the most important but delayed in time, the air ticket purchase. This feature – delayed goals – became available not so long ago, and it is a really great option.
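For a JavaScript-event goal such as the on-site search, a minimal sketch of reporting the goal from the page could look like the snippet below (the counter object yaCounterXXXXXX, the form id and the goal identifier TICKET_SEARCH are placeholders you would configure yourself; URL-based goals need no code at all):

// report the on-site search goal to Metrica when the (hypothetical) search form is submitted
document.getElementById('ticket-search-form').addEventListener('submit', function () {
    yaCounterXXXXXX.reachGoal('TICKET_SEARCH');
});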

After we have set up the goals, we need to gather the initial conversion statistics – this will be our starting point. Depending on the number of goals and on the frequency of visits and transactions, the initial data can be collected within a few hours or over several weeks.

Setting up goals not only helps you evaluate efficiency, it also helps you detect problems on the website. For example, thanks to a conversion report you may find out that your JS doesn’t work properly and half of your users are not able to reach the goal.

When the initial data is finally at our disposal, we can get started. Let’s look at the tools which can help us with this.

We will begin with Yandex.Metrica
Yandex.Metrica offers several marvelous tools for in-page analysis, thanks to which you can understand what prevents users from reaching the target page.

Form analysis is one of the major tools used by online stores and by service companies with an unusual registration process. It shows how many users began filling in a form, how many of them made it to completion and where the majority abandoned the process. An excellent feature that helps you build forms that actually work for people.

Link map – allows you to see where users actually click, what interests them and how they use the navigation.

Scroll map – shows you in seconds how far down the page actually “works”. The more content that sits in the blind zone, the more you will have to change.

Click path analysis – gives you insight into how a user arrives at your website, where they go next and how close they are to the final point when they abandon the process.
The tools mentioned above are best suited for static analysis, meaning that before running such an analysis you will have to gather enough data over a long period – a week or a month.
The click map is convenient to apply without additional settings to pages such as the main page. But what do you do if your website has lots of pages of a similar type with different URLs, and you would like to see the overall picture of user behavior across them? This is where urlFilter comes in handy. With the help of this filter you can group several pages into one. For example, to group all on-site search pages you would write the following piece of code:
yaCounter.clickmap({
    urlFilter: function (urlOrig) {
        var url = window.location.pathname.toLowerCase(),
            prefix = 'http://' + window.location.host;

        // group the on-site search pages together
        if (url.substr(0, 8) === '/search/') {
            return prefix + url.substr(0, 11);
        }

        // other groups
        // ...

        return urlOrig;
    }
});
In the reports you will still see these pages with their full URLs, but when you view the map, the result will be combined across all of them.

Webvisor – a unique, free feature of Yandex.Metrica. Some time ago Webvisor was a separate product, but in 2010 Yandex purchased the code along with the team, and since 2011 this functionality has been part of Metrica.

Webvisor allows you to view user activity in motion: it records their actions – clicks, form filling, mouse movement (a separate and fun bonus that can indicate a user got bored and needed something useful to keep them engaged) – and then plays it back for you as a video.

Webvisor comes into play when there is a page that users steadily abandon and you need to find out why they can’t continue the path and reach the goal.

Now let’s talk about Google Analytics
These days Google is transforming Analytics from a purely statistical instrument into a platform for A/B tests and usability experiments. You will find enormous functionality there.

Universal Analytics is the future of Google Analytics. Not all of its features work properly yet, as they are still in beta, but there is no doubt they offer great functionality. The new version works faster, offers higher limits, allows you to send queries from the backend, and so on. Besides, new Google Analytics capabilities will only be implemented in the Universal version.
We are interested in the Conversions section.

The last three reports (Reverse Goal Path, Funnel Visualization and Goal Flow) let you see the overall picture of what is going on on the website, even for really large projects.

If, in addition to these reports, you set up events, you will get a picture of how users navigate through your website.

For example, the Goal Flow report gives you a visual picture of how visitors arrived at your website and reached the goal, and whether they took the path you expected or followed another funnel.

Now let’s talk about the most advanced and coolest piece of Google Analytics functionality – the ability to send events from the server and track them offline!

For example, the same website offering information about air tickets doesn’t sell anything itself – the purchase is completed on a third-party website, so we can’t track the event because we are not able to place tracking code there. Over time we receive sales data, but it is still unclear which source contributed the most.

Google helps everyone whose transactions happen offline solve this problem.

Google has unified the interface responsible for interaction with its analytics servers. All queries go to the URL www.google-analytics.com/collect and take the same set of parameters, regardless of whether we send them from the backend or the frontend. The only difference is that in JS we already have a ready-made, convenient function with which we can send any event:

// connecting analytics.js
ga('create', 'UA-XXXX-Y');
ga('send', 'event', 'category', 'action', 'label', value);
On the server side, though, the whole query is sent as a POST payload, encoded as form-urlencoded:

POST /collect HTTP/1.1
Host: www.google-analytics.com

v=1
&tid=UA-XXXX-Y
&cid=555
&t=event
&ec=category
&ea=action
&el=label
&ev=value
You may notice the additional parameter cid – the client ID. This is also a new capability of Universal Analytics: previously only Analytics could identify a visitor, but now we can specify our own ID manually.
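As an illustration, here is a minimal sketch of sending the same hit from a Node.js backend (the tracking ID, client ID and event fields are the placeholder values from the payload above – adapt them to your own data):

// minimal sketch: send a Measurement Protocol event hit from a Node.js backend
var https = require('https');
var querystring = require('querystring');

var payload = querystring.stringify({
    v: 1,               // protocol version
    tid: 'UA-XXXX-Y',   // tracking ID (placeholder)
    cid: '555',         // client ID – e.g. the ID printed on a user's coupon
    t: 'event',
    ec: 'category',     // example values, as in the payload above
    ea: 'action',
    el: 'label',
    ev: 1
});

var req = https.request({
    host: 'www.google-analytics.com',
    path: '/collect',
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
});
req.write(payload);
req.end();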
Here is the official documentation to help anyone who would like to try offline tracking out.

This method can be applied at conferences, in shops, at concerts. You simply give users unique coupons and then check in the Google Analytics events whether a user has made the purchase.
Working on conversion improvement can be an endless process. As soon as a new section, new functionality or a special offer appears on the website, it is time to start working again towards better usability, higher conversion and website simplicity.

If you are already using offline analytics from Google, please tell us about this in the comments below!

 

Opinions expressed in the article are those of the guest author and not necessarily SEMPO.

Common On-Page SEO Pitfalls

A couple of weeks ago, I spoke at Turkey’s first SEO conference, SEOZone. Since our agency, Ads2people, conducts a large number of on-page audits, from very large and often multilingual corporate sites to regular blogs, I thought it would be helpful to talk about some common on-page pitfalls we see over and over again. This is an exclusive write-up for SEMPO summarizing that presentation. I hope it helps you improve your on-page SEO.

#1 Crawl Budget

Given the fact that search engines such as Google assign a certain crawl budget per domain (and sub-domain), I’m always surprised at how often site owners simply try to push all of their content into the index. They also often seem completely careless about which pages are crawler-accessible at all.

To assess and fix these problems on your site, a good starting place is Google Webmaster Tools (go to: Crawl > Crawl Stats), which gives a first impression of how a site is doing. A healthy graph increases slightly over time – which usually reflects that Google picks up on content being added and therefore returns a bit more frequently. Conversely, if that graph is jumping around or massively decreasing, you might have a problem.

There are two ways to control search engine crawlers: using a robots.txt directive and implementing a robots meta tag in the HTML mark-up (or serving it as an X-Robots-Tag HTTP header). However, the issue with both directives is that they don’t solve your (potential) crawl-budget issues:

– Robots meta tag: Implementing a proper “noindex” does prevent a given page from showing up in search results, but that page will still be crawled – and therefore crawl budget is still spent.

– robots.txt: Blocking a URL (or folder, etc.) does prevent it from being crawled (and therefore does not waste crawl budget); however, there are massive downsides. One is that pages might still (partially) show up in search results (mainly due to being linked from somewhere else), and all inbound link juice is cut off. In other words, those links do not help your rankings.

Considering those points, you might think about combining the two… but please – don’t! It simply cannot work: if a page is blocked via robots.txt, it won’t be crawled, and the meta robots tag therefore cannot be read at all!
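To make the two mechanisms concrete, here is what each directive looks like (the /search/ path is only an example):

A robots meta tag in the HTML head (or the equivalent X-Robots-Tag: noindex HTTP header) – the page is still crawled but kept out of the index:

<meta name="robots" content="noindex, follow">

A robots.txt rule – the URL is never requested, so any meta tag on it is never seen:

User-agent: *
Disallow: /search/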

Watch out for things like filters, sorting, pagination and other potentially useless pages. We see all too often that these are simply pushed into the index even though they never can or will rank for anything. Don’t waste Google’s resources on that!

As a rule of thumb: if you want to be sure not to waste crawl budget, only have pages that really are useful (so don’t create the others in the first place). If you have pages you don’t want to show up, I’d go with meta robots to at least utilize the inbound link equity.

#2 Duplicate Content

I assume everyone is familiar with duplicate content (DC) issues, but it turns out that’s not always the case (if you’re not, please read this first). It always surprises me how many sites out there still perform poorly due to a lot of internal (partial) DC. Even though most sites these days handle session IDs and tracking parameters well, here are some “classics” I’d like to remind you of: HTTP vs. HTTPS is considered DC, products available in multiple categories (without a single product URL) cause DC as well, and sub-domains (like staging servers) might get you in trouble.

That said, the rel=”canonical” link element (or the rel=”canonical” HTTP Link header) can help you fix those issues, but I consider it only the third-best option for solving DC. In my mind it’s really all about efficiency, so the best way is to make sure you only ever serve content under one single (canonical) URL and not several. It’s as simple as that.

I’d generally not rely on something that Google calls “a strong hint” – because it’s a hint that they may or may not honor; it’s not a forcing directive like an HTTP 301 redirect (which they simply have to follow).

Again, it comes down to giving Google as few choices as possible. Enforce single, unique URLs with amazing content, 301-redirect previously existing ones (e.g., old or duplicate versions) to this (new) URL, and you won’t suffer from DC issues.
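For illustration, a duplicate URL can point to its canonical version with a link element in the head (the URL is a placeholder), while the stronger alternative is a server-side 301 – for example with Apache’s Redirect directive:

<link rel="canonical" href="http://www.example.com/product/blue-widget/">

Redirect 301 /old-product http://www.example.com/product/blue-widget/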

#3 Proper Mark-Up

There are quite a few differing opinions on whether and why proper mark-up is important. I won’t jump into that discussion, but I’m a strong believer that clean and simple mark-up helps. That’s mainly because I don’t want to take chances on a crawler having “issues” when trying to extract information from a site. And that’s also why I think schema.org mark-up is a good thing: it helps engines (not only crawlers) actually understand (parts of) your content and make sense of it – in short, to understand its meaning.

Obviously you have to consider which information you can and want to provide to Google (and others), but if you don’t provide your data, they’ll get it elsewhere. So generally speaking, don’t miss out on this. It’s about far more than just gaining CTR from more prominent results – which is great, by the way; if you combine structured data with rel=”author” and/or rel=”publisher”, the benefits are even greater. Google is essentially moving toward understanding and assigning verified entities to sets of queries, and you surely don’t want to miss out on that. In my opinion, Google is heading to a point where you need to be a verified authority for a given entity and will then automatically benefit from all the long-tail traffic that belongs to that entity – which makes a lot of sense given that Google sees a massive ~20% of new queries per day.

So if you’ve not yet played around with rich snippet mark-up, I recommend you check out schema.org to see what’s in store for you, get it implemented, and verify your domain and author profile with Google+ to get things started. Good luck!
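As a tiny illustration, schema.org mark-up in microdata form could look like this for an organization (the names and URLs are placeholders; pick the item type that actually matches your content – schema.org also supports RDFa and JSON-LD):

<div itemscope itemtype="http://schema.org/Organization">
  <span itemprop="name">Example Company</span>
  <a itemprop="url" href="http://www.example.com/">www.example.com</a>
</div>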

If you’re interested in the slide deck, feel free to check it out on SlideShare.

About the author:

Bastian Grimm co-runs Ads2people, a full-service performance marketing agency based in Berlin, Germany, where he heads the SEO department as the VP of Search. Having a passion for software development and everything “Tech,” he loves to challenge IT and marketing departments to come up with outstanding results in search marketing. Bastian is a widely cited authority in SEO having spoken at almost every major search conference including SMXs, ISS, SES, SEOkomm, LAC, BAC, and many more events around the globe.

Find Bastian on Twitter and Google+ or contact him at bg@ads2people.de or +49 30 720209710.

 

Opinions expressed in the article are those of the guest author and not necessarily SEMPO.

4 Ways to Use the Site Operator in Google Maps

Daniel Leibson

SEO Manager – RelevantAds

Dan has been working in the web marketing space for over 4 years and has had a lifelong love affair with technology. His background is in SEO, web analytics, conversion rate optimization, and social media.

A while ago I read this fantastic article on 25 Killer Combos for Google’s Site: Operator. A small confession: I am a huge Dr. Pete fan. Anyway, after reading that article I spent a couple of days honing my advanced search operator skills even further. I also make members of my team learn advanced search operators, as I find them incredibly valuable. A few days ago I was talking with Dave about potential destination partners and he dropped a bomb on me:

 

[Image: Blog 11 image 1]

Mind blown!!!!

This opens up many possibilities for understanding what Google views as important when it comes to Maps citations. Not only that, but there is great research potential for uncovering actionable insights you can use in your local search optimization tactics. I am going to walk you through several of my favorites.

Use Case #1 Checking Citations

You know what’s important in local SEO? Citations. While the exact value of certain citations compared to others, or how valuable the practice of building citations is in the long run, is debatable, the fact remains: they are important right now. Well, guess what? You can use the site: operator to see which sites are providing citations in Maps. Check it out:

[Image: Blog 11 Image 2]

This is interesting because it allows you to research and verify that certain local destination pages are providing some value to your clients or your business. BUT WAIT, THERE’S MORE! Not all locations from a site show up in Maps, as in the example below with Yelp:

[Image: Blog 11 Image 3]

The screenshot above seems to show that the previously contentious issue of Google scraping Yelp reviews has been resolved far more extensively than just Google Places no longer showing third-party reviews.

Use Case #2 Checking Citation Volume

As I mentioned previously, the value of citations in general is something that has been talked about a lot around our office lately. One quick way to determine the value of getting your business information onto a site or directory is to see what percentage of its indexed pages also show up in Google Maps. This is a simple two-step process.

Step 1:

Use a site: operator search to see how many pages of a site Google has indexed.

[Image: Blog 11 Image 4]

Step 2:

Use a site: operator search on maps.google.com to see how many pages of the site Google has indexed in Maps. It’s important to note that the number Maps gives you depends on your current view, so if you want to see the total volume, make sure you zoom out as far as possible.
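As a hypothetical illustration with Yelp (the directory from the earlier example), the two searches would simply be site:yelp.com on google.com for the total number of indexed pages, and site:yelp.com again on maps.google.com for the pages that also surface as Maps citations. Dividing the second number by the first gives a rough estimate of how much of the directory actually makes it into Maps.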

[Image: Blog 11 Image 5]

To be clear, you have to apply this tactic with a little skepticism, as not all pages on a website are location-based. You can examine a site’s information architecture, determine what is present across all of its local listing pages, and add that to your search as a regular search modifier. However, since a site built to hold business information will consist mostly of business listings, you can also simply normalize your results by adjusting the total number of indexed pages down by a few percent. Also, if you are working within a specific vertical, you can use regular search modifiers like “movies” and “cinema” to determine how many pages for that vertical are indexed both in Google’s regular index and in its Maps index. For example:

[Image: Blog 11 Image 6]

And the Maps equivalent:

[Image: Blog 11 Image 7]

This allows you to target local destinations that provide better results, at least in terms of boosting PlacesRank, for your business or clients.

Use Case #3 Comparative Analysis of Maps Citations

Now that you know which locations are showing up in Maps for a specific site, you can do some simple analysis to figure out what content from the site Google is pulling in and/or placing more value on. For example, we recently did an analysis of Superpages.com to see which pieces of content were pulled into Maps.

[Image: Blog 11 Image 8]

If you go to the listing for that specific location on superpages.com you can see that the business description is indexed by Google into the Maps data set.

[Image: Blog 11 Image 9]

Use Case #4 Competitor Analysis

This is just a combination of citation volume checking and comparative analysis; however, it lets you specifically target pieces of content to optimize on third-party destinations. You can also combine the site: operator with additional search terms, which lets you look at a specific destination site for a competitor and compare their level of saturation, in terms of Maps citations, with yours. For instance, say you are Fatburger and you check how saturated citysearch.com is with your locations (at least in terms of Maps citations).

[Image: Blog 11 Image 10]

With only one page’s worth of Maps citations, the answer is not particularly good. However, what about your competitors? Are they able to gain traction where you are not?

[Image: Blog 11 Image 11]

The answer to this one is just as simple: yes. With 909 Maps citations, you know it is possible to step up your efforts at optimizing those specific location listings on CitySearch.
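In other words, the two maps.google.com queries being compared look something like this (the competitor name is a placeholder – substitute whichever brand you want to benchmark against):

fatburger site:citysearch.com
competitor-brand site:citysearch.com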

I’m sure there are more ways to combine the site: operator with both phrase-match and exact-match search terms; these are just the ways we are using it internally for research. Are there any other ways you use search on maps.google.com that you find really helpful? I have a current research project in which I am trying to ascertain the correlation between the various pieces of content that Google scrapes/indexes and a Maps citation, so stay tuned. I would love to talk to anyone who is interested or has additional insights.