In a previous post I discussed how OKRs can help guide the feature development process and define clear, tangible goals and targets. One of the main requirements of OKRs is that key results be unambiguous. In that respect, clearly identifying the userbase that will be impacted by your feature/change is crucial. You could say your feature will affect all users in the same way. But that’s probably not the case. New users don’t have the same behavior patterns as older players – and they are not likely to respond to a given change or feature the same way. Also, many users give up on your app within a relatively short period of time. Those are not the users you are building a game for. If you don’t identify your target audience well, you cannot accurately measure the impact of what you are doing. That in turn makes it harder to optimize your feature development process and live ops design.
Colopl (full disclosure: I’m a huge fan) has been reporting an interesting metric in its quarterly results for quite some time. While I cannot speculate as to the exact motivations for reporting this metric, I can see how it makes a lot of sense – and how it provides a good target for measuring your game’s performance and the impact of any feature you might introduce.
Consistently, Colopl reports ARPQU: average revenue per quarterly user.
What’s most interesting is not so much the quarterly time frame as the definition of a QAU (quarterly active user).
Colopl only reports the metrics of users who have returned to the game at least once 7+ days after install. They simply exclude from their reporting (again, for this metric) any user who doesn’t survive the 7-day mark. ARPQU will most likely be higher if you exclude users who never return to the game 7 days after install. But excluding users who haven’t displayed a minimum level of commitment makes sense – and is not (only) something that makes your metrics look better.
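To make the definition concrete, here is a minimal sketch of that filter in Python. The data shapes, user IDs, and numbers are hypothetical assumptions of mine – this is not Colopl’s actual pipeline, just one way to express “users who returned at least once 7+ days after install”:

```python
from datetime import date

# Hypothetical per-user data; in practice this would come from your analytics store.
installs = {"u1": date(2024, 1, 1), "u2": date(2024, 1, 2), "u3": date(2024, 1, 3)}
sessions = {
    "u1": [date(2024, 1, 1), date(2024, 1, 9)],  # returned 8 days after install
    "u2": [date(2024, 1, 2), date(2024, 1, 4)],  # churned before day 7
    "u3": [date(2024, 1, 3)],                    # never returned
}
revenue = {"u1": 12.99, "u2": 4.99, "u3": 0.0}   # lifetime revenue in the quarter

# A "quarterly user" here: anyone with at least one session 7+ days after install.
qau = {
    uid for uid, days in sessions.items()
    if any((d - installs[uid]).days >= 7 for d in days)
}

arpqu = sum(revenue[uid] for uid in qau) / len(qau) if qau else 0.0
print(f"QAU: {len(qau)}, ARPQU: ${arpqu:.2f}")  # QAU: 1, ARPQU: $12.99
```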
This makes a lot of sense for a company that mostly publishes RPGs – a genre where early retention is traditionally lower than in casual games. But I would argue it makes sense for any mobile publisher whose main source of monetization is IAPs (the picture might not look exactly the same for games with a primarily ad-centric monetization strategy and/or hypercasual games). If you follow an OKR framework, clearly defining the impact of your work is a key part of the process – and that in turn requires a clear definition of the affected userbase. Most of the time, your key results should focus on the impact of your feature/change on a userbase that has displayed a minimum level of commitment. And making it past the 7-day mark seems like a relevant threshold for considering someone “committed”.
Below are 3 (partial and non-exhaustive) reasons why considering the metrics of users who made it past the 7-day mark (and ignoring those who didn’t) is a good way to assess the health of your game and the impact of the features you implement.
1. Contribution to revenue
Users who don’t return to the game 7+ days after install account for a large portion of your installs – perhaps more than 50% of them. However, they almost certainly won’t account for much of your title’s revenue: probably less than 5% of a title’s total revenue will come from users who don’t return to the game 7+ days after install (payers churn too). Note this doesn’t mean excluding revenue spent within 7 days of install; it means excluding the revenue attributed to users who never return 7+ days after install. When you focus on users who are active 7+ days after install, you are focusing on the segment of users who contribute the most to your bottom line – and excluding a lot of noise.
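As a rough illustration (with made-up numbers, not real data), here is how you might check that split on your own title by attributing each user’s lifetime revenue to the “committed” or “churned” bucket:

```python
from datetime import date

# Hypothetical cohort: (install_date, last_seen, lifetime_revenue) per user.
users = [
    (date(2024, 1, 1), date(2024, 1, 2), 0.99),    # churned before day 7
    (date(2024, 1, 1), date(2024, 1, 20), 24.99),  # committed, kept spending
    (date(2024, 1, 5), date(2024, 1, 5), 0.0),     # never returned
    (date(2024, 1, 5), date(2024, 2, 1), 9.99),    # committed
]

def is_committed(install, last_seen):
    """A user counts as committed if they were seen 7+ days after install."""
    return (last_seen - install).days >= 7

committed_rev = sum(r for i, s, r in users if is_committed(i, s))
total_rev = sum(r for _, _, r in users)

print(f"revenue share from committed users: {committed_rev / total_rev:.0%}")
# -> revenue share from committed users: 97%
```

Note that the committed users’ lifetime revenue includes whatever they spent in their first week – it’s the users, not the early revenue, that get excluded.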
2. User behavior stabilizes after 7 days
In the first few days after install, user behavior is mostly conditioned by the tutorial and the early experience you’ve handcrafted. For example, it’s pretty much guaranteed that user behavior on install day – time played, game actions, level-ups, etc. – will very closely reflect the user journey you designed. After all, users are on rails: if they are playing, they must follow the patterns you’ve designed. Only considering users who make it past the first few days ensures you look at users who have some autonomy and can display a sense of agency. You want to measure users’ reception of your feature inasmuch as their behavior is intentional. Users who are active in your game 7 days after install display more consistency in their play patterns. And that consistency reflects user volition – not onboarding or the early experience you closely designed.
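If you wanted to quantify the “on rails” effect, one (hypothetical) approach is to compare how much playtime varies across users at different days since install – near-zero spread on install day suggests scripted behavior, while spread later on suggests genuine agency. The numbers below are invented for illustration:

```python
from statistics import mean, stdev

# Invented minutes-played values per user, bucketed by days since install.
# Day 0 is scripted by the tutorial; day 7 reflects each user's own choices.
playtime_by_day = {
    0: [22, 21, 23, 22, 20],   # on-rails onboarding: everyone looks alike
    7: [5, 60, 0, 35, 90],     # divergent, intentional behavior
}

for day, minutes in playtime_by_day.items():
    spread = stdev(minutes) / mean(minutes)  # coefficient of variation
    print(f"day {day}: mean={mean(minutes):.0f} min, relative spread={spread:.2f}")
```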
3. Reduce sensitivity to changes in install flow and UA spending
Getting featured (or receiving a boost in marketing spend) is a good thing – in absolute terms. An influx of users most likely means an increase in revenue. Relatively speaking, however, an influx of installs is accompanied by a decrease in your relative metrics. You will have a lot more active users – among them a lot more “young” users – accompanied by a moderate increase in overall revenue. So your ARPDAU will decrease, and chances are so will your daily conversion. This will be especially true if you are working on a game with moderate/low DAU, and if your “elder” users monetize better than your new installs (although this is common, it’s not always the case). It will be even more true if your influx of users comes from featuring in a tier 2 or 3 country where monetization is low to begin with. This type of fluctuation, which accompanies contingent influxes of installs, doesn’t only occur with monetization metrics: the same will be true for pretty much every daily metric you look at. Your average time played might go down, a higher % of users will be opening a gacha crate (if it’s part of your tutorial), etc. Looking at the metrics of users active 7+ days after install is a way to exclude the contingent fluctuations associated with changes in install flow.
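A back-of-the-envelope sketch of the dilution effect, with invented numbers: a featuring spike adds many low-monetizing new users, so absolute revenue goes up while blended ARPDAU goes down – and a “day 7+” view of the same day is unaffected by the spike:

```python
# Invented illustrative numbers, not real data.
# Baseline day: mostly "elder", committed users.
elder_dau, elder_arpdau = 50_000, 0.40
baseline_revenue = elder_dau * elder_arpdau               # $20,000

# Featuring day: a spike of fresh installs who barely monetize yet.
new_dau, new_arpdau = 80_000, 0.02
featured_revenue = baseline_revenue + new_dau * new_arpdau
featured_arpdau = featured_revenue / (elder_dau + new_dau)

print(f"baseline: revenue=${baseline_revenue:,.0f}, ARPDAU=${elder_arpdau:.3f}")
print(f"featured: revenue=${featured_revenue:,.0f}, ARPDAU=${featured_arpdau:.3f}")

# The day-7+ view ignores the fresh installs entirely, so it stays at $0.400
# even while the blended ARPDAU drops from $0.400 to ~$0.166.
d7_arpdau = baseline_revenue / elder_dau
print(f"day-7+ ARPDAU: ${d7_arpdau:.3f}")
```

In this toy scenario revenue rises from $20,000 to $21,600 (good news in absolute terms), yet blended ARPDAU falls by more than half – exactly the kind of noise the day-7+ filter removes.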
In conclusion, when trying to assess changes you make or features you implement, defining your key results from the perspective of your “day 7+ users” will help you ensure you are measuring the impact of your feature in the most controlled way.
- You are focusing on the userbase that matters
- You are measuring user behavior when users actually have some kind of autonomy and intentionality
- You are excluding the noise associated with contingent fluctuations in install volume and composition