Culturally speaking, mobile gaming organizations are very STEM-heavy. Paradoxically, I’ve rarely seen a case where making sense of user data involved a genuine math or stats challenge. The reality is that leveraging data well requires, first and foremost, the ability to ask the right questions: questions that can be answered in a way that provides a basis for action and confirms or refutes a given hypothesis.
Perhaps because a STEM understanding of science and knowledge is prevalent, most gaming companies don’t seem aware of the benefits of involving more social scientists in making sense of user data (to be fair, social science academia is particularly bad at highlighting the practical value it can bring to digital entertainment industries – but that’s another issue). Data only tracks objective actions. Data points are the tangible traces left by the user: the user played the mission, purchased this item, logged in at that moment, etc. But when planning ahead, deciding what feature to implement next, or designing a product strategy, you need to consider not so much what users have done, but why they’ve done it. You need to leverage data to understand why users are doing what they are doing. And social science is all about making sense of human subjectivity and motivations.
If you only stay at the level of objective traces, you’re not putting yourself in a position to understand user behavior, to form assumptions about what is driving that behavior – and, by extension, to implement new features that give users what they want. So you need to use data a bit like an archeologist does. Archeologists extrapolate from tangible artefacts how extinct societies functioned. Mobile games are very similar: the observable data points are the tangible manifestation of the user’s invisible and intangible subjectivity (the user’s motivations, preferences, priorities, etc.).
The best way to use data is to start with a hypothesis and see if it holds water. Build a hypothesis about user motivation – based on observed data points, your experience as a player, common sense (still a valid thing), etc. Ideally, you want a circular process, where data informs the hypothesis, and refining the hypothesis helps specify more precisely which data to look at. So you want to use data to corroborate your assumptions: you look for the tangible actions that would match that hypothesis, then check whether that scenario is what actually happens. Your use of data needs to be motivated. You can’t look at data blindly – the question you have, the feature you want to build, must drive every data question.
Say, for example, your hypothesis is: “users compete in events to get a reward they find desirable” (in my experience: not really). You have to use data in a way that will help you confirm or refute that. The first thing to do is find a scenario in which users would act in accordance with the hypothesis. If users played events primarily for the rewards, you would expect engagement to change when you change the rewards. So run one event with very valuable rewards, and another with very lackluster rewards. If users really do compete in events to get a reward they find desirable, then fewer users should engage with the second event (and engage less). If user engagement only slightly differs when you change the reward, that’s a good indication your initial hypothesis does not match what’s really happening. Once you see that users’ motivation to compete in events is more intrinsic than extrinsic, that helps you prioritize your development pipeline and content production. It can also help you better understand how and what to monetize while your event is running.
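The comparison above can be sketched in a few lines of code. This is a minimal illustration, not a production analytics pipeline: the event names and all the numbers are made up, and a two-proportion z-test stands in for whatever statistical check your team prefers. The point is simply that “engagement barely moved when the reward changed” can be made concrete.

```python
import math

# Hypothetical data for two otherwise-identical events (numbers invented
# for illustration): how many active users entered each event.
rich_rewards = {"active_users": 10_000, "participants": 4_200}
poor_rewards = {"active_users": 10_000, "participants": 4_150}

def participation_rate(event):
    """Share of active users who engaged with the event."""
    return event["participants"] / event["active_users"]

def two_proportion_z(a, b):
    """Two-proportion z-statistic: is the difference in participation
    rates larger than random noise alone would explain?"""
    n1, n2 = a["active_users"], b["active_users"]
    p1, p2 = participation_rate(a), participation_rate(b)
    pooled = (a["participants"] + b["participants"]) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(rich_rewards, poor_rewards)
print(f"rate (valuable rewards):   {participation_rate(rich_rewards):.1%}")
print(f"rate (lackluster rewards): {participation_rate(poor_rewards):.1%}")
print(f"z-statistic: {z:.2f}")
# A small |z| (roughly under 2) means the reward change barely moved
# participation -- evidence against "users compete mainly for rewards".
```

With these invented numbers the rates are 42.0% vs 41.5% and the z-statistic is well under 2, i.e. the kind of “only slightly differs” result that should push you toward rejecting the reward-driven hypothesis.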
Incidentally, that’s why building a hypothesis and trying to reject it is the way to go. You can find a lot of data that supports your hypothesis, but there can still be some piece of data you’re not looking at that would refute it. So you can never totally validate a hypothesis. On the other hand, you only need one occurrence that contradicts your hypothesis to reject it.
Using data in this hypothesis-driven way can help you highlight some of the main dynamics at the heart of mobile game monetization. You can then design your game and your monetization strategy around these dominant patterns, to monetize your game as effectively as possible. You can see an example of that in the following post.