Measuring a successful visit to a content site

So you have a content-based site, and you want to know whether your visitors’ time on your site was successful.

You have two options:

  1. Attempt to measure this via their on-site behaviour; or
  2. Ask them, via one of the many “voice of customer” solutions.

This post will deal only with #1.

Content sites make it challenging to measure the success of a visit, simply because there’s not necessarily one path to conversion. Rather, revenue is often generated via advertising, where page views = ad impressions = revenue.

If you are trying to measure the success of your content site, there are a few ways you can go about this.

  • Page Views per Visit: Seeing a large number of PVs/Visit could indicate a visitor has found information that is useful to them and has had a successful visit. However, a lost or confused visitor would also generate a large number of page views. How do you distinguish the two?
  • Time on Site: This too could indicate a successful visit. However, it could also indicate that someone is spending time searching for (and not finding) what they want.

So how could you better measure success?

  • Focus on valuable pages. A high number of page views to actual content suggests a more successful visit than a high number of page views that might include, say, site searches. Therefore, focus on PVs/Visit (or Time Spent) for a subset of pages. This can be more valuable than site-wide PVs/Visit or Time Spent.
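As a minimal sketch of that idea (the visit logs and the rule for what counts as a “content” page are invented for illustration, not from any real site):

```python
# Hypothetical visit logs: each visit is the list of page paths viewed.
visits = [
    ["/home", "/article/seo-basics", "/article/content-strategy"],
    ["/home", "/search?q=pricing", "/search?q=contact", "/search?q=help"],
    ["/article/analytics-101"],
]

def is_content_page(path: str) -> bool:
    """Treat only article pages as 'valuable' content (an illustrative rule)."""
    return path.startswith("/article/")

# Site-wide PVs/Visit counts every page, including searches and navigation.
site_wide = sum(len(v) for v in visits) / len(visits)

# Content PVs/Visit counts only the subset of pages deemed valuable.
content_only = sum(sum(1 for p in v if is_content_page(p)) for v in visits) / len(visits)

print(f"Site-wide PVs/Visit: {site_wide:.2f}")  # inflated by the lost searcher
print(f"Content PVs/Visit:   {content_only:.2f}")
```

Note how the second visit (four frustrated searches) inflates the site-wide number but contributes nothing to the content-only number.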

But you can do better. First, you need to assess why your content site exists. What behaviour can a visitor perform that would indicate they successfully found what they were looking for?

  • For example, your site exists to provide information X – that’s the goal and purpose of your site. Therefore, a visitor seeing content X achieves that goal, and suggests they had a successful visit.
  • If your site exists for reasons X, Y and Z, a successful visit could be one that saw one or more of X, Y or Z.
  • Setting up goals or segments around these behaviours can help you measure over time whether your visitors are performing these behaviours. Can better navigation drive up the percentage of visitors successfully completing this task? Which tasks are more popular? Are you even doing a good job of communicating what your site exists for? (If very few actually complete that main task or tasks, I’d suggest probably not!)
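A minimal sketch of that segment idea follows; the goal pages standing in for X, Y and Z are hypothetical:

```python
# Hypothetical goal behaviours: viewing any of these pages counts as a
# "successful" visit. The page names are illustrative assumptions.
GOAL_PAGES = {"/pricing", "/docs/getting-started", "/contact"}

visits = [
    ["/home", "/pricing"],                      # saw a goal page -> success
    ["/home", "/blog/post-1", "/blog/post-2"],  # no goal page -> not a success
    ["/docs/getting-started"],                  # saw a goal page -> success
]

def is_successful(visit: list[str]) -> bool:
    # A visit succeeds if it includes one or more goal behaviours.
    return any(page in GOAL_PAGES for page in visit)

success_rate = sum(is_successful(v) for v in visits) / len(visits)
print(f"Successful visits: {success_rate:.0%}")  # track this figure over time
```

Tracking this percentage over time (and per segment) is what lets you answer questions like “did the new navigation drive up task completion?”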

A final note: the intention of measuring a successful visit to your site is to measure this success from the point of view of the visitor. Is your site doing a good job of providing what visitors want?

This “success” doesn’t necessarily tie to short-term revenue for a content site. After all, a successful visit might be one where the visitor comes in, finds what they’re looking for immediately, and leaves. However, that visitor might generate more ad impressions by getting completely lost on your site. Good for you … in the short term. But it doesn’t mean they had a successful visit to your site, nor does it bode well for your long-term revenue.

Therefore, measurement of visit success should be analysed alongside measures of revenue success, while carefully weighing the long-term benefits of successful visits (and happy visitors) against the short-term revenue generated by “lots and lots of page views”.

A few thoughts on forecasting

It seems fitting that my first post should involve something that carries a tremendous amount of importance (and, potentially, debate) within an organisation (and certainly costs me a lot of sleep personally!).

If you are new, or fairly new, to forecasting on your website, I’ll share some hard-learned truths.

You’re always going to be wrong. Always. The very nature of a forecast means you will always be wrong … and that’s okay. (Don’t get me wrong. When it actually happens, it’s thoroughly depressing, and often has you chasing your tail to find out why, but it’s still okay.) Your aim is just to be wrong as little as you can – and to be able to identify why your resulting actuals are off from your forecast. I would argue that divergence from your forecast today will make tomorrow’s re-forecast better, but only if you can explain it and learn from it.

Your forecast is only as good as the information you have. As you look back over your site’s history, if there are events you can’t explain, or inputs into your traffic that you fail to identify, your forecast will be more off than you would like it to be.

Don’t have all the information you need? Get it. Gather whatever people, information or inputs you need. If the company’s eyes are on your forecast, their gaze will stray if you frequently prove too far off. If they’re not yet looking at your forecast, they never will if you can’t demonstrate a history of accuracy.

There is a difference between a forecast and a goal. As you start forecasting site traffic, ad space, conversion rates, lead generation (etc), you’ll need to explain this. Many times. Your forecast will be based on your site’s history and its inherent trends. But as an analyst/statistician/forecasting guru, you alone can’t identify everything that will happen in the future. (And if you can, give me your number – I have a few questions I would love answered.)

This is where your business/development/product teams come in. Your forecast can estimate where your site will be in the future, based on your current trajectory. But you need inputs from others to anticipate future planned growth that’s not evident in the data.

Let’s say your forecast suggests your site will be up 5% year-over-year. If your executive team want your site to be up 20%, you need your business/development/product team to either a) temper this expectation, if it’s not reasonable, or b) advise how they will achieve this. If your fine-tuned, well-informed forecast suggests you’ll be up 5% year-over-year, 20% is not a forecast, it’s a goal. You can’t reach 20% YOY without at least a basic idea of how you’ll get there, and any attempt to incorporate it into your forecast without a skeleton of a plan will reflect poorly on your forecast, rather than reflecting on the failed execution of the plan for growth.

The moral of the story? Forecasting can’t occur in a silo. Analysts, business and executives must all get their “feet wet” to produce something that all are comfortable with and can rely on.

Not forecasting (yet)? Even if it’s rudimentary (forecasting your basic web traffic metrics only, e.g. Visits, Unique Visitors, Page Views), get cracking. It’s wonderful to be able to analyse, segment and test your history, but your business and executive teams will really appreciate even a lightly dotted line of where they’re headed.
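To show just how rudimentary “rudimentary” can be, here’s a sketch of a straight-line trend forecast of monthly Visits. The monthly numbers are invented for illustration, and a real forecast would account for seasonality and the other inputs discussed above:

```python
# Rudimentary forecast: fit a straight line to monthly Visits and project
# it forward. The visit counts below are hypothetical.
from statistics import mean

visits = [100_000, 104_000, 103_000, 109_000, 112_000, 115_000]  # last 6 months
months = list(range(len(visits)))

# Ordinary least squares slope/intercept, computed by hand (no libraries).
mx, my = mean(months), mean(visits)
slope = sum((x - mx) * (y - my) for x, y in zip(months, visits)) / \
        sum((x - mx) ** 2 for x in months)
intercept = my - slope * mx

# Project the next three months from the fitted trend.
forecast = [intercept + slope * m for m in range(len(visits), len(visits) + 3)]
print([round(f) for f in forecast])
```

Even a dotted line this simple gives the business something to react to, and gives you a baseline whose misses you can investigate and learn from.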