In her book published this year, “The Fearless Organization,” Amy C. Edmondson insists on the importance of psychological safety in knowledge production environments. Every group dynamic involves some form of interpersonal risk. Everybody wants to feel smart, capable and helpful. And dealing with others is inherently accompanied by the risk of not being considered and treated as such. Psychological safety refers to “the belief that the work environment is safe for interpersonal risk taking”. It is a state required to focus on achieving shared goals (rather than on self-protection).
On the other hand, a psychologically unsafe (work) environment is one in which there is a belief that the risk of being belittled or berated is high. In psychologically unsafe work environments, people focus on protecting themselves rather than achieving common goals. Common behaviors include not asking questions to avoid looking ignorant, not admitting mistakes or weaknesses to avoid looking incompetent, not making suggestions to avoid looking disruptive, etc. These behaviors might keep people safe, in the sense that they reduce the risk of being belittled or berated. But they are rooted in fear and motivated by self-protection, which makes them a very counter-productive use of talent.
Psychological safety is especially important for “knowledge workers” who rely on teamwork and face “VUCA” (volatility, uncertainty, complexity and ambiguity) – which corresponds pretty much to every game-making environment I’ve ever seen. Psychological safety is about candor; about focusing on the outcome you’re looking for more than on any individual contribution and the way it will be perceived. You can’t achieve something big without taking the risk of making mistakes, without facing and admitting your limitations (and asking for help and learning from others), without disagreeing with others, without taking the chance to create something new. And two key behaviors are required to work together, engage in productive conflict and innovate: asking for help and admitting failure.
This emphasis on psychological safety struck a chord with me when thinking about the way data and analytics are sometimes integrated into the product definition and development process. Specifically, I strongly believe focusing on psychological safety can contribute to more proactive, actionable and agile data contributions. But that also means accepting the practical limits of data when it comes to determining future courses of action. And it means embracing the inescapable need for product owners to make judgment calls – with imperfect and partial information. It would be reassuring to have data determine the best and certain course of action, with no need for any subjective evaluation and input. But when you are defining the strategy and roadmap of a game you need to make uncertain choices, take a chance and commit to something – and data can’t remove that burden. You need to be able to make a decision knowing you might be making a mistake. And make that decision anyway because you believe it’s the best one.
Creating a space where analytics can make positive contributions and help make better decisions requires being clear on the nature (and limitations) of those insights, as well as the way they will be used in the decision-making process. That will put “data workers” in a better position to produce insights that are actionable (that help further our goals) rather than dedicating time and resources to protecting their personal integrity (via the exactness of their claims).
At a high level, the scientific process is very defensive – since Descartes it’s been all about producing indubitable truths. All claims must be falsifiable and reproducible – that means stated in a way that they can be (repeatedly) challenged and disproved. And to be considered true, a claim must overcome all potential oppositions, reservations and doubts (in addition to being conceptually sound and consistent). A true statement in the Cartesian sense is a statement that won’t admit failure; a statement that resisted all attempts to make it false. It’s more about avoiding making a false claim (being defensive) than making a true claim (being assertive, or on the offensive).
What that means is that a big portion of the insight-producing process is about being defensive: anticipating potential inconsistencies, contingencies and logical fallacies. And making sure you’re ready to defend yourself against any potential challenge heading your way (thereby protecting the insight-producer). More than producing a true statement, the Cartesian model is all about producing a statement that can’t be disproved. You make a claim, but at the same time you need to be able to reject all contradicting assertions that will challenge it. If you’ve spent some time in academia (perhaps especially if you’ve since left it), you realize how conditioned you become not so much to making a statement as to anticipating and defending against so many potential challenges.
Another way this “negative” and defensive insight-production process manifests itself is in the way claims are formulated: at a very low level of detail and with many caveats.
For example, consider the following two statements:
- 67% of players who on average have completed region 8 and who have played at least 3 days in the previous week play PvP after collecting their login bonus
- A majority of late-game players play PvP
The first statement details the specific context and contingencies rather precisely. Providing all these contextual parameters (player progression, engagement level, specific play patterns, etc.) also helps anticipate any potential challenge to the statement. On the one hand it’s being thorough and rigorous. On the other it’s being defensive and building a bullet-proof statement in an attempt to avoid challenges. Once you start anticipating (often imaginary) challenges, you enter a never-ending spiral. And what this adds in indubitability it removes in legibility and actionability. It’s hard to see the point behind all these caveats. And it’s hard to know from that claim (and all the caveats that go with it) what the next course of action should be. Building that bullet-proof case and anticipating so many challenges also requires time and effort (that could be used producing other insights).
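To make that trade-off concrete, here is a minimal sketch in Python. The handful of player records and the field names are entirely hypothetical; the point is simply that each qualifier added to a claim shrinks the segment the claim actually describes, while the general claim covers a broader population:

```python
# Hypothetical player records: progression region, days played last week,
# and whether the player opens PvP after collecting the login bonus.
players = [
    {"region": 9, "days_last_week": 5, "pvp_after_bonus": True},
    {"region": 8, "days_last_week": 4, "pvp_after_bonus": True},
    {"region": 8, "days_last_week": 1, "pvp_after_bonus": False},
    {"region": 7, "days_last_week": 6, "pvp_after_bonus": True},
    {"region": 3, "days_last_week": 2, "pvp_after_bonus": False},
    {"region": 9, "days_last_week": 3, "pvp_after_bonus": True},
]

# Precise claim: completed region 8+ AND played 3+ days last week.
precise = [p for p in players if p["region"] >= 8 and p["days_last_week"] >= 3]
precise_rate = sum(p["pvp_after_bonus"] for p in precise) / len(precise)

# General claim: "late-game players" (here loosely taken as region 7+).
late_game = [p for p in players if p["region"] >= 7]
general_rate = sum(p["pvp_after_bonus"] for p in late_game) / len(late_game)

print(len(precise), len(late_game))      # the precise segment is smaller
print(precise_rate, general_rate)        # the precise rate looks cleaner
```

The same dynamic plays out at scale: every extra filter makes the statement harder to challenge, but also narrows the population it says anything about.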
Especially in the context of games, making general claims is what helps move forward and achieve the desired objective (implement a feature or balance things in a way that impacts player behavior in a new way). But by definition a general claim is not specific or precise – it’s messy, fuzzy and ambiguous. On the other hand, an exaggerated focus on precision and universality becomes more about being defensive. The more precise a claim, the less generalizable it is and the less actionable it becomes – but the harder it becomes to challenge.
The need to be conscientious, thorough and rigorous is of course crucial. But focusing on being bullet-proof – or indeed the mere belief that you can ever be bullet-proof – requires a lot of effort that diverts from the actionable and operational part of the insight. When considering the impact of analytics and data on product decisions, one of the biggest problems occurs when analytics focuses more on the “negative” part of the insight-producing process than on the actual insight itself. Being precise is critically important. But the goal of data in the product development process is not to be exact. It’s to help make the right decision. You always have to question the ROI of the higher degree of precision you want to go after – and embrace the fact that you’ll never get it 100% right.
The “rigorist approach” is absolutely applicable and justifiable for “scientific” endeavors – take for example chemistry or rocket science. But not in games. And not because building planes is important but making games is not. It’s because in games, data and analytics produce insights about human behavior. There are patterns in human behavior, but no indubitable and universal truths – no statements about player behavior that will be generally, universally and timelessly true. Preferences, trends and fashions evolve constantly. What is popular today probably won’t be popular the same way three years from now. The performance of Harry Potter Wizards Unite is also impacted by the fact that Pokemon Go exists in the market and has affected player expectations. Adding the same alliance feature five years ago or today probably won’t have the same impact (alliance features are more of a standard feature today than they were five years ago). That AB test you ran four years ago – if you were to run it again today, would you get the same results? Probably not. And that’s because the userbase is evolving and consumer expectations and preferences are changing.
Being clear, explicit and unambiguous is important when considering OKRs. Key Results are supposed to be clear and unambiguous (in other words, precise). But Key Results are a) tools used to indicate whether or not you are on track to achieving your objective (what matters is the objective, not the tracking) and b) forward-looking (you don’t use Key Results to describe a situation; you use them to help you move forward). When it comes to producing an insight that can guide future action – especially to implement something new that doesn’t exist – a general conclusion will be more helpful than scientific accuracy. Insights formulated in a general way carry a degree of uncertainty and fuzziness (and a chance of error) that doesn’t sit well with scientific standards of truth production. But general statements are much more practical and can provide the basis for a principled, high-level decision. “Let’s focus more on PvP for our endgame because that’s what our elder players like” is a clear and easily understandable message. Is it possible to make such a bold statement while following the highest standards of scientific truth? Probably not. But those are the kinds of insights you need in order to make decisions and move forward.
Improving psychological safety for the data function means creating an environment where you can focus on the actionability of an insight rather than its universality. It means data insights focus more on being assertive and actionable than on being defensive and complying with unrealistic standards of truth.
Data insights should aim to be thorough, rigorous and precise. But in the delicate balance between the accuracy of an insight and its actionability, actionability always has to be the priority. Data and analytics can help guide future decisions when they extrapolate more general principles from specific occurrences in the past – especially when you want to create something new. Data people can’t let the need to be bullet-proof guide their insights and conclusions. The insight and its integrity shouldn’t become an end in itself. The outcome is always the decision that the insight helps inform.
Usually, what you gain in actionability, you lose in certainty. One way to improve psychological safety is to make it ok for people in a team to make mistakes. For data analysts, that means acknowledging that every insight will be inherently partial, incomplete and uncertain. So, developing an efficient use of data consists first and foremost in establishing a data culture in which you alleviate the fear of making a mistake. That can be done in numerous ways. The first (although more theoretical) one is to insist on the fact that truths about social phenomena don’t have the same level of certainty and universality found in the natural sciences. A second – more practical and tangible – way is to emphasize the fact that being actionable is more important than being perfect.
Focusing on actionability means you focus on the outcome of the insight rather than on what the insight is. If the extra degree of confidence won’t impact the decision, then you have to be comfortable living with an insight that falls short of the highest standards of scientific truth. And if you’re nitpicking over a 0.5% improvement (assuming you have enough players to reach statistical significance in the first place), then you’re probably not contemplating something that has that big an impact anyway. Being actionable means it’s ok to accept a certain degree of uncertainty if it means fast implementation. Making a mistake quickly is often better than spending time to get a more precise insight (which will always be imperfect anyway). Few mistakes will have an irreversible impact. More importantly, you usually get a higher degree of confidence when you observe an actual mistake than when you analyze past data to assess how a future feature might go wrong.
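To give a sense of scale, here is a back-of-the-envelope power calculation using the standard two-proportion, normal-approximation formula. The 20% baseline conversion rate and the 0.5-percentage-point lift are hypothetical numbers chosen for illustration:

```python
import math

def sample_size_per_arm(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Players needed per variant to detect `lift` over baseline rate `p_base`
    at alpha = 0.05 (two-sided) and 80% power, normal approximation."""
    p_test = p_base + lift
    # Sum of the Bernoulli variances of the two arms.
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / lift ** 2)

# Detecting a 0.5-percentage-point lift on a 20% baseline:
n = sample_size_per_arm(0.20, 0.005)
print(n)  # on the order of 100,000 players per variant
```

If the decision wouldn’t change either way, the traffic (and the weeks needed to collect it) is usually better spent shipping and observing.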
In this respect, the recently published book on Bill Campbell, “Trillion Dollar Coach” (another great book I highly recommend), resonates a lot with Edmondson’s book on psychological safety: “Failure to make a decision can be as damaging as a wrong decision. There’s indecision in business all the time, because there’s no perfect answer. Do something, even if it’s wrong.”
Focusing on psychological safety when producing data insights doesn’t mean anything goes. Edmondson insists on the fact that psychological safety doesn’t mean lowering standards. It doesn’t mean someone’s performance will never be below acceptable levels. You focus on psychological safety because it creates an environment in which people can give their best and are not afraid to do what they believe is the best option. This is another theme from Trillion Dollar Coach (the parallel between the two books became clearer as I was writing this post). Psychological safety – creating a culture of trust where it’s ok to make a mistake – is first and foremost a way to achieve operational excellence. You focus on psychological safety because that’s the environment you need for people to take risks and be willing to be proven wrong. And you can’t achieve anything innovative and great unless people on the team are willing to move out of their comfort zone, take a chance and make uncertain decisions. Don’t create an environment in which analytics is mainly concerned with producing an insight that is bullet-proof. Create an environment in which analysts are focused on providing an insight you can do something with.