My gut instinct – a notion, feeling or intuition arising from repeated exposure to a scenario in which an existing presumed knowledge base is drawn upon to make a decision.
Analytics – the application of mathematical concepts … drawn upon to make a decision.
Yeah, and once you mention math you lose a lot of people. Just ask anyone remotely interested in sports about the coach’s dilemma in the modern era. For every advocate of Sabermetrics, War-on-Ice or any number of other statistical tools there’s some staunch believer in the power of the ‘eye test’ alone. And they aren’t limited to sports metaphors either. Sit in any marketing meeting and discuss customer acquisition, particularly advertising, and the same thing comes up. Even in education the use of analytics is causing major waves.
Now, truth be told, at the extremes neither school of thought is correct, either in how it values itself or in how it values the other side. That is why both continue to struggle when applied in the real world. That, however, could be an entirely separate post.
In the technology sectors I work in, data analysis plays a huge role in the matrix of decision making, and if you follow the blog at all you know I come back to tracking and analysis quite a bit … which raises the question: how do you know what to track?
The short answer is everything. The long answer is that it isn’t about what you’re tracking but which data you are analyzing at a given time, since not all data is applicable at every point within the product life cycle. And within the long answer is why tracking everything should not necessarily cause analysis paralysis: there’s a distinct difference between tracking and measuring. Simply put:
Tracking just builds up a huge data set.
Measuring is the analysis of changes within the data set.
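The distinction can be sketched in a few lines of code. This is a minimal illustration with hypothetical event names and helper functions, not a real analytics SDK: `track` only accumulates raw events, while `measure_daily_change` interprets changes within that accumulated set.

```python
from collections import Counter
from datetime import date

# Tracking: append every raw event to the data set, with no interpretation.
events = []

def track(event_name, day):
    """Hypothetical tracker: just accumulates events."""
    events.append({"name": event_name, "day": day})

# Measuring: analyze change within the accumulated data set.
def measure_daily_change(event_name, day_a, day_b):
    """Compare counts of one event between two days."""
    counts = Counter(e["day"] for e in events if e["name"] == event_name)
    return counts[day_b] - counts[day_a]

# Simulated traffic: 120 sessions one day, 90 the next.
for _ in range(120):
    track("session_start", date(2024, 1, 1))
for _ in range(90):
    track("session_start", date(2024, 1, 2))

# A negative delta is a change worth asking "why?" about.
delta = measure_daily_change("session_start", date(2024, 1, 1), date(2024, 1, 2))
```

Tracking alone never raises a flag; only when you measure the delta does the drop of 30 sessions become visible.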
As always, the single most important thing is understanding the business need. What I mean by this is being able to answer the question, “What do I want to know about the product right now?” “Everything” is not an answer – it is a cop out. At different points in the product life cycle, within the context of the competitive (or regulatory) landscape and from the point of view of consumer feedback loops, there will be different questions about the product’s performance – your job as a good product manager is to know what those questions are and how you will find the answers you need.
In some ways it’s like maintaining your car or house: you don’t maintain all the parts at the same time in the same ways, you don’t upgrade or update everything all at once, and not everything is going to break at the same time (even though it might feel like that some days). Likewise with your product, you’re not going to measure everything simultaneously. You’re going to perform KPI maintenance on some things, change features or create upgrades and measure success on others, and sometimes something will break and you’ll have to fix it and measure the effects.
If you are tracking everything you will have reams of data to reference, but if you’re only looking for the answer to one question at a time your search through the data will be uniquely focused – and that is the key to success.
So how do you know what questions you are answering and how to measure the answer within the data?
Every sector or industry has high-level questions about its particular product. The repetition of some of these performance questions gave birth to standardized metrics: ‘How much money am I making for each person using my product?’ became “Revenue Per Consumer.” ‘How much does it cost me to get one new sale?’ became “Cost of Acquisition.” Lifetime Value, Cost of Retention, Churn or Abandonment rate, Refund or Exchange rate, Complaint or Repair rate, etc. then become the framework – the minimum data analysis that needs to be regularly performed, and that everyone from your competitors to investors and industry analysts is doing.
These are all very high-level Key Performance Indicators (KPIs). As you consider the needs of the consumer, of the business and of the competitive landscape in the context of the product’s life cycle, you will need to answer a lot of other questions unique to your product. Each question represents a new data set to analyze.
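The standard KPIs named above reduce to simple ratios over figures you already track. Here is a sketch with made-up numbers (the inputs are assumptions; the formulas are the conventional ones, including a common first-order LTV approximation of ARPU divided by churn):

```python
# Hypothetical raw figures for one reporting period.
total_revenue = 50_000.0    # revenue earned over the period
active_users = 10_000       # users active in the period
marketing_spend = 12_000.0  # acquisition spend over the period
new_customers = 800         # customers acquired in the period
users_start = 10_000        # users at the start of the period
users_lost = 450            # users who churned during the period

arpu = total_revenue / active_users    # Revenue Per Consumer
cac = marketing_spend / new_customers  # Cost of Acquisition
churn_rate = users_lost / users_start  # Churn rate

# One common rough approximation of Lifetime Value: ARPU / churn rate.
ltv = arpu / churn_rate
```

These per-period ratios are what competitors, investors and analysts will compute from the same class of data, which is why they form the minimum framework.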
There’s an important distinction to make here in formulating your product questions. While the KPIs are more about how – how explains the outcome of the process – as you dive deeper into the usage measurements, the real question you are trying to answer is WHY. Why explains the reasoning behind the how outcome. It’s the distinction of intent – the user did or wants to do such-and-such because?
There are a few ways to get to these why-type questions.
First is to analyze the KPIs and look for holes from the top down. Wherever an aggregate KPI is not meeting expectations, begin to drill down in the data from the obvious to the obscure in order to understand what portion of the underlying metrics is under-performing. One way to accomplish this is to continually ask, in the clichéd little child’s mindset, “why?” over and over again as you review the data.
Supporting your why will be the other questions: who (define the targeted cohorts), where (define the targeted portion of the user experience), what (define the end action in question) and of course, where you initially began, how (the difference between the target segment and the benchmark).
Suppose you are reviewing your LTV KPI. The lifetime value seems to have dropped from the industry standard or your previous benchmark. Why did this happen? You break the data into two parts, the revenue and the length of time engaged. It turns out the lifetime itself has become shorter. Why did this happen? You might begin by segmenting the aggregate data into cohort sets, perhaps based on acquisition channel to start with (the who). You notice variations between different sales methodologies at this point: a segment of recently acquired users isn’t staying around as long as the mean for the product, which is dragging down overall LTV. Why are they not staying around? You decide to break the investigation into two parts, the acquisition funnel and the engagement funnel, and begin to analyze the path of this particular cohort through each. It turns out their acquisition and early-use behaviors are the same, but their longer-term engagement shows a divergent trend. Why are they behaving differently later in the life cycle? You define the engagement targets for mature users of the product and begin to segment again, asking why some users drop off earlier than others, looking at specific behavior patterns and comparing and contrasting the cohorts against one another and the product mean. Asking why in response to the data makes you slice it over and over until all that’s left is the single, targetable and thus actionable group you can make changes for. Why is cohort ZBF acting this way when every other cohort is acting differently? I hypothesize that this particular set of product attributes is not fulfilling the cohort’s perceived value proposition at this point in their life cycle, causing them to abandon the product, while other users simply seem to ignore those product attributes completely.
Therefore we can test either removing the functionality causing the abandonment or improve upon it so it produces value to the user rather than instigating churn.
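The first slice of that drill-down – segmenting lifetime by acquisition channel to find the cohort dragging down the mean – might look like this in code. The channel names, revenues and retention figures are all invented for illustration:

```python
from statistics import mean

# Hypothetical per-user records: acquisition channel, lifetime revenue, days retained.
users = (
    [{"channel": "organic", "revenue": 30.0, "days": 90}] * 50
    + [{"channel": "paid_social", "revenue": 28.0, "days": 25}] * 50
)

# The aggregate lifetime that first raised the "why?".
overall_days = mean(u["days"] for u in users)

def cohort_mean_days(channel):
    """Mean retained days for one acquisition cohort (the 'who')."""
    return mean(u["days"] for u in users if u["channel"] == channel)

# Which cohorts sit below the product mean and drag LTV down?
underperformers = [
    c for c in {"organic", "paid_social"} if cohort_mean_days(c) < overall_days
]
```

Note that revenue per user is nearly identical across cohorts here; it is the shortened lifetime of one channel’s users that explains the LTV drop, which is exactly what slicing by the who reveals.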
Perhaps you are tracking your COA KPI. You notice what appears to be a spike in cost that’s now draining your sales budget too quickly. Why is there a spike? You might segment each of your acquisition channels and notice that a particular channel is costing more than you want it to. Why is this channel performing poorly? From there you might hypothesize all the reasons costs could increase, such as poorly performing creative sets, or external drivers changing the ad costs like lack of inventory, more aggressive bidding or regulatory influences. The analysis leads you to the creative sets, where you ask why they are not performing. Then take apart each creative’s funnel to see if the differences between them are causing patterns in abandonment through the steps of the acquisition process. Why is a certain aspect of the funnel causing the drop-off? Inspect the individual elements such as image, copy and action, and begin A/B testing to isolate the exact portion that is making the ad ineffective.
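Comparing each creative’s funnel step by step is mechanical once the counts are tracked. A small sketch, with hypothetical creative names and funnel counts, showing how step-to-step conversion rates isolate where one creative leaks:

```python
# Hypothetical per-creative funnel counts for the expensive channel.
funnels = {
    "creative_A": {"impression": 10_000, "click": 500, "install": 200},
    "creative_B": {"impression": 10_000, "click": 480, "install": 60},
}

def step_rates(funnel):
    """Conversion rate between each adjacent pair of funnel steps."""
    steps = list(funnel.items())
    return {
        f"{a}->{b}": cb / ca
        for (a, ca), (b, cb) in zip(steps, steps[1:])
    }

rates_a = step_rates(funnels["creative_A"])
rates_b = step_rates(funnels["creative_B"])
```

Here both creatives earn clicks at roughly the same rate, but creative B converts clicks to installs far worse – so the image or copy shown after the click, not the ad placement, is the element worth A/B testing.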
Alternatively, let’s suppose your ARPU KPI suddenly saw a boost and you’re earning more money per consumer. Why is the ARPU rising so quickly? You would want to know what activity is supporting the net positive and use it to your advantage. You start by hypothesizing the reason: look at the drivers of monetization, compare the consumer cohorts that are earning the extra revenue against the mean, and see what makes them different, perhaps in their consumption pattern or where they were acquired from.
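The same slicing works for good news. A minimal sketch (cohort names and revenue figures are assumptions) of finding which cohorts sit above the overall ARPU and are therefore driving the boost:

```python
from statistics import mean

# Hypothetical per-user revenues, keyed by acquisition source.
cohorts = {
    "search": [4.0, 5.0, 6.0],
    "referral": [9.0, 10.0, 11.0],
}

# Overall ARPU across every user in every cohort.
overall_arpu = mean(v for vals in cohorts.values() for v in vals)

# Cohorts whose mean revenue exceeds the overall mean are the drivers.
drivers = [c for c, vals in cohorts.items() if mean(vals) > overall_arpu]
```

Once the driver cohort is isolated, you can examine what distinguishes it – consumption pattern, acquisition source – and try to replicate that with other cohorts.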
These kinds of scenarios allow you to use top-level metrics to drill down into the data and tease out what is going right or wrong. It is reactionary because it depends on your top-level KPIs to red-flag issues after they arise; however, from a maintenance standpoint it is also the last-line-of-defense fail-safe against your product’s complete collapse and an important mechanism to utilize.
On the other side is a proactive and methodical approach to data analysis.
Each time you release a new product update or feature, it is an opportunity to measure. Chances are you’re rolling out the new feature in response to something you previously identified in the analytics, but if not, you’ll need to set a benchmark of current behavior and create a hypothesis of what the new behavior should look like. This is what you’ll optimize your tracking around, build your analysis funnels from, and measure against, over a given period of time, to judge the success (or failure) of the change.
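A crude version of that benchmark-then-measure loop, using made-up daily completion rates and a simple two-standard-deviation rule of thumb rather than a formal significance test:

```python
from statistics import mean, stdev

# Hypothetical daily completion rates: the benchmark before the feature
# shipped, and the observed rates afterward.
benchmark = [0.40, 0.42, 0.41, 0.39, 0.40, 0.41, 0.42]
post_release = [0.45, 0.47, 0.44, 0.46, 0.45, 0.46, 0.47]

# Rough success check: did the post-release mean move by more than
# two benchmark standard deviations?
lift = mean(post_release) - mean(benchmark)
significant = lift > 2 * stdev(benchmark)
```

The benchmark is set before the release; the hypothesis (“completion rate should rise”) determines which metric you watch; the measurement over the following period confirms or refutes it.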
Working backwards from this scenario though, how do you even know you need the new feature in the first place?
That comes from doing a user audit. This is done by identifying several key aspects of the product and making a matrix. Begin with the underlying product assumptions: 1) the value proposition, 2) the business need and 3) the life-cycle point of the product. You then break the product down into overarching use-case components, such as what my company does with games, looking at 1) onboarding, 2) engagement and 3) monetization. Taking these six elements into account, certain intersections between them will seem more important to measure than others at a given point in time; those are the ones you begin from. Perhaps you immediately have a hypothesis that you want to research in the current data, or a cursory look at the data immediately points out something that needs more granular clarity – but that may not always be the case. What if you still really don’t know what you want to know about the most?
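The six elements form a three-by-three grid of intersections. A small sketch of that audit matrix (the prioritization rule – onboarding cells first during an early life-cycle stage – is just one hypothetical example):

```python
from itertools import product

# The audit matrix: product assumptions crossed with use-case components.
assumptions = ["value_proposition", "business_need", "lifecycle_stage"]
components = ["onboarding", "engagement", "monetization"]

matrix = {cell: None for cell in product(assumptions, components)}

# Hypothetical prioritization: early in the life cycle, the onboarding
# intersections matter most, so flag those cells to measure first.
for a, c in matrix:
    matrix[(a, c)] = "measure_first" if c == "onboarding" else "later"

priority_cells = [cell for cell, tag in matrix.items() if tag == "measure_first"]
```

The point of the grid is simply to make the choice explicit: at any moment you are measuring a few intersections deliberately, not all nine at once.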
It can be helpful to visualize this like storytelling if you get stuck. One way I do this is to find someone who’s not already intimately familiar with the product and talk them through the product experience aloud. For example, if I were doing this exercise for a game I might begin with something like, “you see an ad in your social media feed about such-and-such kind of game, you click on it and go to the Google Play store and see this and that, you download it and open it and this happens, you’re asked to do this and have the option to do that.”
You should notice not only the aspects you highlight but also their reactions. What you choose to include is actually framed by the hypotheses you have about the product’s usage or consumption. That’s not to imply you say to the person, “here’s what I think,” but rather that, colored by your perception, you explain the product from that point of view. These are places you should measure to ensure they work as you expect; you might not think about them except in the context of actually having to explain them aloud to someone else. And if the person seems confused or intrigued, or asks questions about any part of your explanation, those are points you should measure too.
This is simply to help get you started defining the questions you need answered if you don’t have any. Formulating the question is the next step, and remember: it is always about the WHY.
Why is a particular cohort supposed to behave a certain way? If they aren’t behaving in the way you’ve hypothesized, why is there a variance?
As I mentioned before, you might be putting the question up against a methodology like the user-experience pathway I use regarding onboarding, engagement and monetization. During the early stages of a new product roll-out, for example, you might want to focus on the introductory consumer experience as the starting point, since it is the first interaction consumers have and will ultimately help them determine for themselves whether the unknown that is your product fulfils a need and has value for them. You might start by asking yourself, “Do newly acquired consumers use the product?” Then you might think, “wait, what does using the product even mean to these new consumers?” At this point, I encourage you to reference the Onboarding post I put together – the gist of which is to know what action they are expected to take and follow the pathway to that action to see if they are indeed taking it.
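Following the pathway to the expected action reduces to computing conversion at each onboarding step. A sketch with invented step names and counts for a game, where the expected first action is completing the first level:

```python
# Hypothetical onboarding step counts, ending at the expected action.
onboarding = [
    ("opened_app", 1000),
    ("finished_tutorial", 700),
    ("completed_first_level", 560),  # the action new users are expected to take
]

# Conversion rate between each adjacent step in the pathway.
conversions = []
for (prev, n_prev), (step, n_step) in zip(onboarding, onboarding[1:]):
    conversions.append((f"{prev}->{step}", n_step / n_prev))

# End-to-end: what fraction of new users take the expected action at all?
end_to_end = onboarding[-1][1] / onboarding[0][1]
```

Any step whose conversion falls well below the others, or an end-to-end rate below your benchmark, is where the “why?” questioning starts.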
Let’s say you do a quick analysis of the onboarding and determine there’s no significant difference in how your users are responding to it for now: everyone appears to get through it the same, and all of your onboarding KPIs are in line with expectations.
Then move to engagement. Ask yourself, “what do I want to know about what informed and vetted consumers want to do with my product?” Profile what an engaged user should look like – what actions are they taking, and how can you measure those actions to create a statistical baseline? If you cannot answer this question, you have bigger problems as a product manager than I can help you with here.
Once you have this image of what you believe users of your product are doing, it gives you context to frame what you’re analyzing about what they are actually doing – a place to start within the vast data set you (possibly) have. If users are using something more, or less, than you expect, ask yourself why and begin to slice the data further to see if it highlights something about the particular segment’s behavior.
In summation, it’s important to remember that what you are measuring today might not be what is important to know about tomorrow, next month or even next year. You track everything because you never know when you’ll need the data, but what you analyze will be dictated by what is happening with the product itself. Your gut might tell you where to look, or your experience give you some insight into why something is occurring, but the data should quantitatively validate your assumptions and give you unbiased ways of measuring how changes actually impact product performance. Let the two work together and you’ll be a much happier manager with a much happier team.