Data is a powerful tool for measuring what is happening in your game objectively and accurately. The big advantage mobile games have over other forms of entertainment is the ability to track user actions and, from those actions, infer the key motivations of your players. Linear entertainment offers no user input beyond switching channels or turning the TV on or off. In mobile gaming, this data-driven mindset is deeply internalized.
You want to leverage data to make decisions. You turn to what players have done to better understand the current performance of your game, and you infer from key behaviors the player motivations that are driving that performance. Based on those insights, you can identify areas to improve, design features that leverage player preferences, and evaluate the potential of decisions you are contemplating. When you make an assumption about what will work, you can, as much as possible, turn to the available historical data to challenge or validate that assumption and move forward with your decision (or not).
You will often be in a position where data on current or past behavior provides strong indications for future decisions. The important thing to keep in mind is that available data is always backward-looking: it describes past behavior. There will also be times when you want to try something radically new and no existing data can validate or invalidate the decision you have in mind. For example, you might want to change the reward of a game mode to generate more engagement and/or spending. Should you just change the quantity of the reward you're currently giving, or change the type of reward itself? What if you are thinking of releasing a new item, consumable, or currency to increase engagement with that game mode? In many scenarios you will have information that provides some indication of the answer. But in many others, you won't have data that can guide, confirm, or invalidate the decision you have in mind.
In those cases where you have no historical data to inform a decision, you can still use data. But you can't use it in a passive or reactive way; you need to leverage it in a forward-looking way. Say you want to increase engagement and monetization of one of the features in your game, and your starting assumption is that changing the rewards can have a positive impact on that front. You might not know whether changing the reward of a game mode would have a positive (or negative) impact. Maybe you have never iterated on different configurations, so you don't know whether changing the rewards is the way to go, or whether to focus on the quantity or the type of reward. In this case, you want to make specific changes in the game with the intention of producing those insights and deciding the course of action.
Leveraging data in a forward-looking way means creating a space of experimentation to test your assumptions and produce learnings. Experimenting is the specific process of tying feature development to data production. It's the way to leverage data to produce new insights, and to implement design decisions iteratively. This process requires you to invest resources into producing insights in order to improve performance. You'll probably make mistakes along the way, maybe even try things that end up having adverse effects (and cost you revenue in the short run). But it's a long-term play where you're willing to make mistakes in order to learn and end up with a better product. Although the idea of trying something new to get learnings (with all the risks of failure this entails) can be daunting, keep in mind that there are few things you can do in your game that will cause irreversible damage. And if you're bold enough and trying impactful things, the learnings you get can outweigh the risks and compensate for the short-term sub-optimal tests you might run. If you're going to take actions in your game to produce insights and decide future action, make sure you are doing everything you can to gain the clearest insights. That means going for high impact in order to reach clear and reliable conclusions.
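To make this concrete, here is a minimal sketch of how such an experiment might be evaluated. It assumes a hypothetical A/B test on the reward-change example: players are split between the current reward (variant A) and a modified reward (variant B), and we compare how many in each group converted on some target action (e.g. purchased in the game mode) using a standard two-proportion z-test. The function name and the sample numbers are illustrative, not from the original text.

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test.

    conv_a / n_a: converters and total players in variant A (control).
    conv_b / n_b: converters and total players in variant B (new reward).
    Returns the z statistic and the two-sided p-value.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant A keeps the current reward,
# variant B doubles the reward quantity in the game mode.
z, p = two_proportion_z_test(conv_a=120, n_a=5000, conv_b=165, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the p-value comes out well below 0.05, so the reward change would be read as a real effect rather than noise. The broader point from the text still applies: the test only produces a clear learning if the change is impactful enough, and the sample large enough, for the signal to separate from the noise.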
The main underlying assumption here is that your game is never in a state of completion: improving performance is a never-ending process. Not only should you assume you won't get things right from the start; the very notion of an optimal or perfect state can be counter-productive. You never know how far you can go or how much more you can improve (although if you iterate two or three times on something and the needle doesn't move, that's a good indication you might not be focusing on the area with the highest ROI). This never-ending process requires resources: time, effort, testing bandwidth, and so on. There is no free lunch, and you can't expect to produce learnings and reap the benefits of experimentation if you're not dedicating resources and willing to take a chance.
This is true for features. And it is even more true for anything in your game that involves quantities, be it the amount of resources you give out, timers, or prices. When balancing resources, you should assume you will always need to test and experiment to get better results. You obviously want to start by looking at existing behavior as much as possible to get the best possible starting balance. At the same time, you should assume you will never get the right amount on the first iteration, and hold yourself accountable for iterating toward a better balance. Here you can't think in terms of THE optimal balance: optimizing is a never-ending process, and you'll never get confirmation that things cannot be improved further. But you will be able to see whether you're moving the needle and iteratively improve on your starting point.