To leverage data for the biggest possible impact on your game’s performance, you need to approach things from the right angle. Stated differently, for data to be usable you need to be sure you are asking the right questions in the right way. That might first require asking yourself what you expect from data. The only thing worse than not looking at data is turning to data with wishful thinking. Most of the time you’d be better off making decisions without data (and with some common sense…) than basing them on misleading or irrelevant data.
The important point to keep in mind here is that there is an infinite number of questions that can get an answer. Most of the time, the key challenge is not getting a result (what % of users play PvP on a daily basis, what players mostly buy after an IAP, the average session time in Vancouver compared to Seattle, etc.). Most of the time, the challenge is making sense of what that result means for your decisions. Because of that, three points are worth keeping in mind when dealing with data.
- There are some types of questions you should avoid altogether. Those are the questions where you’re asking why something is not happening.
- When looking at user behavior, don’t triangulate – directly answer the questions you have.
- The most important one: turn to data to help you make a decision. That means the option you’re considering needs to come first. You then turn to data for indications of how likely the action you’re considering is to succeed.
The key takeaway here is that in order to leverage data successfully, you need to use it to support and validate decisions – not to make those decisions. One of the biggest mistakes I’ve seen is expecting data to answer the question “what should I do next?”. Data is no substitute for your judgment call – with all the risks and uncertainties that entails.
1. What questions not to ask
Before jumping into the types of questions you should ask, it might be useful to consider the types of questions you should not be asking. At the top of the list of questions to avoid is the “why not” question. A “why not” question is the type of question aiming to understand why something is not happening. “Why are so few users playing PvP?”, “Why is this feature not monetizing?”, “Why did retention not go up after the last update?”
Of course, there are situations where those questions can be answered. Those are the cases where there are material impediments to an action happening. For example, a technical issue causing a crash when launching PvP, a blocker in the funnel where players are no longer able to progress, or a lack of visibility where players don’t know where to click or even if the feature exists.
Excluding those “hard” cases where it’s not materially possible for users to perform the action in question, these “why not” questions are bound to take up a lot of your analytics time and lead to no clear answer – and even fewer possible courses of action. You’re usually caught up in a wild goose chase. And the fact that we’re not wired or equipped to deal gracefully with “I don’t know” as an answer usually means this wild goose chase drags on even longer.
There is a clear reason those “why not” questions usually don’t get a satisfactory answer: while there are usually just a few possible explanations for why something is happening (or, in theory, one explanation), there can be an infinite number of reasons why something is not happening. That’s also why in statistics you start from the null hypothesis. You don’t set out to prove two things are different – the whole point of any experiment is to try to disprove that they are similar. This is not just semantics, but a way of thinking and producing knowledge (falsifiability is key).
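To make the null-hypothesis framing concrete, here is a minimal sketch of how that default assumption plays out in practice, using a two-proportion z-test on made-up retention numbers (the figures and the scenario are hypothetical, not from the text):

```python
# Sketch: start from the null hypothesis that an update changed nothing,
# and only conclude "the needle moved" if the data lets you reject it.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical day-1 retention: before vs. after an update
z, p = two_proportion_z(4120, 10000, 4210, 10000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With numbers like these the p-value stays well above the usual 0.05 threshold, so the honest conclusion is “we could not show the update moved retention” – not “the update had no effect”, and not an invitation to hunt for why it didn’t.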
So, although it can be disappointing to see that the changes you implemented didn’t have the desired impact, focusing your analytics firepower on why something didn’t happen (after you’ve excluded the possibility of a material blocker) is not likely to help you move forward. Now, I’m not suggesting you only look at successful features and never try to get a sense of what went wrong. It’s more about not getting hung up for too long on finding the clear explanation when features or changes don’t move the needle. The “burden of proof” always lies in demonstrating that something is happening – not that something is not happening. Your default starting assumption should be that you won’t move the needle. And you need to focus your efforts on understanding why the needle moved rather than why it didn’t (and, when possible, leverage past experiments to identify the factors that don’t move the needle).
2. Use data to assess in-game behavior exactly (don’t triangulate)
Asking why something is not happening is something you want to avoid spending too much time on. But there are some types of questions you should always turn to data for: questions about what players are doing and what is actually happening in your game. I’m talking here about behavioral patterns that can be objectively tracked and measured – not user intentions or motivations (or anything else that has to do with user subjectivity). You don’t want to focus too much effort on what is not happening (and why), but you should always look to answer questions about user behavior exactly, based on the objective and verifiable behavior happening in your game. When you use data, don’t compromise on that.
Of course, this seems obvious. But using data to assess in-game behavior exactly also means one thing: you shouldn’t be doing any “triangulation” when it comes to measuring behavioral patterns. By triangulation, I mean combining various data points to reach a conclusion about something else happening in the game.
Say, for example, you want to know whether usage of a given character by your customers is low in PvP. You may have on hand data on customer engagement in PvP and on customer ownership of that character. Say engagement in PvP is comparable to engagement with other game modes, and overall ownership of that character is low. You could be tempted to conclude from those two data points that usage of that character in PvP is low. But that might not be the case – maybe that character is used disproportionately often in PvP (and from that you can start learning which skills or meta are most desirable in your game). If all your tracking events are implemented properly, you have all the data you need to bypass speculation and look directly at usage of that character in PvP (it does, however, require that someone write and run another query). And that’s the analytics effort you need to make. Don’t adapt your questions to the data you have – produce the data you need to answer the question you have.
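The “answer the exact question” approach above can be sketched in a few lines: instead of inferring character usage from ownership and mode engagement, you aggregate the match events themselves. The event shape and character names here are hypothetical illustrations, not a real schema:

```python
# Sketch: measure character pick rate in PvP directly from tracked match
# events, rather than triangulating it from ownership and engagement data.
from collections import Counter

# Hypothetical tracked events, one per match played
matches = [
    {"mode": "pvp", "character": "valkyrie"},
    {"mode": "pvp", "character": "warlock"},
    {"mode": "pvp", "character": "valkyrie"},
    {"mode": "campaign", "character": "warlock"},
    {"mode": "pvp", "character": "ranger"},
]

pvp_picks = Counter(m["character"] for m in matches if m["mode"] == "pvp")
total_pvp = sum(pvp_picks.values())
usage_rate = {char: picks / total_pvp for char, picks in pvp_picks.items()}
print(usage_rate)
```

In a real pipeline this would be a query against your events table rather than an in-memory list, but the principle is the same: the answer comes from the behavior itself, with no inference step in between.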
Now, I’m not saying that triangulating will always lead to a false result. Just that triangulating makes you assume and guess when you have everything you need for total certainty. The point here is that when you choose to rely on data, you should aim to be rigorous and do it right. If you have a specific question about user behavior, answer that specific question and don’t take shortcuts. Often there is no clear-cut answer – but when it comes to user behavior, there is.
3. Use data to (in)validate a decision – not to make it
A related version of the “why not” question is the “what should I do” question. Anytime you face the question of what to do next, there is an infinite number of options available. Of course, asking “what should we do next?” is the extreme version. But fundamentally, the problem here is that we sometimes turn to analytics without providing anything tangible to bounce off of.
The idea here is that you need to provide some kind of anchor point for your analytics efforts to yield results. That means the specific product decision or feature should always come first. Once you have the idea you want to explore – defined in terms of key goals and measures of success – you can turn to analytics for an indication of how likely that feature is to succeed. So the product or game design function must first do its homework and the heavy lifting. The heavy lifting here consists in clearly identifying the impact you want to have and defining as precisely as possible the option you’re considering. You need to start by giving your analytics team something to work with. Although I’m not a fan of semantic debates, that’s maybe the big difference between being “data-driven” and “data-informed”.
Again, focusing on OKRs and looking at your game or feature from the perspective of the impact it will have is key. Once you’ve clearly identified the course of action you are considering, you can leverage data to assess the chances of that decision being successful. Let’s assume your goal is to increase customer return rate. Having a clear objective and target KPI is clearly a step in the right direction (I would even say a requirement). But if you left things at that, it would still be too vague and open-ended to leverage data efficiently to make a decision. What you need is a clear idea in mind, and then to use data to see how likely that option is to succeed. In this case, for example, you could decide to offer a bonus reward for every consecutive day of PvP played.

Providing that clear context helps the analytics team evaluate the decision. Maybe the percentage of customers engaging with PvP is too low for that to be a viable option – or maybe PvP is the mode your customers play most and one of the strongest features to leverage to increase return rate. Maybe previous experiments have suggested that rewards are not the best motivator for engagement; then you know to keep focusing on PvP to get customers to return more regularly, but you go back to the drawing board to find a more compelling hook. Either way, until you specify what you want to achieve and how you’re thinking of achieving it, the world of possibilities is too large to look at data to make your decision.
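A first sanity check on the PvP streak-reward idea could be as simple as measuring what share of players touch PvP at all. This is a hypothetical sketch – the player records and field names are illustrative, not a real schema:

```python
# Sketch: before building a PvP daily-streak reward, check what fraction of
# the player base engages with PvP at all. If that share is small, the
# feature has a low ceiling for moving the overall return rate.
players = [
    {"id": 1, "modes_played": {"pvp", "campaign"}},
    {"id": 2, "modes_played": {"campaign"}},
    {"id": 3, "modes_played": {"pvp"}},
    {"id": 4, "modes_played": {"puzzle", "campaign"}},
]

pvp_share = sum("pvp" in p["modes_played"] for p in players) / len(players)
print(f"{pvp_share:.0%} of players engage with PvP")
```

Note that the data doesn’t pick the feature: the streak-reward idea came first, and the query only tells you whether the audience for it is large enough to matter.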
If you don’t start by giving your analytics team something to work with, then you are not building a systematic operation – you are dependent on the contingencies of individual genius. You can always turn to your analyst and ask them what to do next – but in that case, it’s important to be aware that it’s not so much data informing your decision as the individual analyst making a proposal. Depending on the context and the “talent density” of your team, that might be the best solution for you – analysts can sometimes be the individual contributors who generate the most impact. It’s just important to be aware of that and not confuse asking the analyst with being data-driven.
It can be very difficult to use data in an actionable way. It can be tempting to turn to data in the hope that it will provide THE answer – and avoid the pain and anxiety of making a judgment call. The success of a game depends on two main factors: the game itself, and your audience. And while you (mostly) control what’s in your game, you don’t control the audience and its tastes, preferences, and modes of interaction with mobile entertainment. We are dealing with an ever-changing and unpredictable audience that evolves alongside the market and is impacted by multiple factors external to your game or the mobile industry. What that means is that – unlike in physics or chemistry (I’m not a physicist or chemist) – the impact of every decision is always uncertain. You can never know for sure what the impact of a decision will be, no matter how much data you throw at it. The best use of data is to inform you about the potential impact of a decision – not to make that decision for you.