Deciding Right, Despite Our Minds

Cognitive postmortem of a product decision.

Product management is an elusive thing to capture in clear-cut definitions, but it is one thing for sure: making decisions — and a lot of them, all the time. From small tactical choices you make on the fly with your team to laying the groundwork for strategic ones, you find yourself weaving strands in a complicated web of product decisions on a daily basis. Although you often may not be the person with the actual authority to put a stamp on a given direction, it is most likely your job to facilitate, guide and build consensus behind quality judgements.

As making and preparing choices is such a core aspect of this job, one can only benefit from cultivating an awareness of the inner mechanisms of human cognition. Of specific interest are the parts that can break down, turn counter-productive or secretly conspire against you and your colleagues, increasing the risk of arriving at decisions that are sub-optimal or even at odds with the desired outcome.

Popular books on decision psychology and a growing number of related articles have been warning us for decades against the potential baddies built into the structure of our information-processing hardware: cognitive biases, logical fallacies, sensitivity to social pressure, or even mundane physiological conditions like fatigue, all of which can heavily influence and often misguide our decisions.

Most of these mental features developed as part of evolution and they definitely serve a purpose, but take them out of the context of mother nature’s battle royale for survival, activate them in a room full of engineers and a PM trying to chart the way forward for a piece of software, and they can quickly become problematic.

Major product choices are particularly prone to triggering many of the cognitive risk factors that can muddle judgements. What follows is an attempt to point out some of the typical buggy features in the mental wiring of product teams by examining their effects at play in a challenging product decision.

The dilemma

The team I worked with faced the classic problem of build vs. buy. We needed a certain capability, and much of what we wanted could be purchased off the shelf, whilst our team was also fully capable of developing the desired feature-set as a native solution. What lent particular gravitas to the choice was that the result would define an important aspect of one of our flagship products. We narrowed the choices down to one commercial candidate and one framework that we could potentially build in-house instead. Then a fairly long process ensued during which we battled out which option to go with.

With the luxury of hindsight, I am confident in saying that we made the right decision, but we did so despite numerous mental pressures that constantly pushed against the effectiveness of the process. Some of these were fairly easy to detect, whilst others only became apparent to me in retrospect.

Biases served cold

Cognitive scientists differentiate between so-called cold and hot cognition. The former refers to the act of simply using our brain for its usual business of perception, forming memories and making judgements; whereas the latter also injects motivations into the process.

Cold cognition is riddled with biases that are natural byproducts of the way we process information in general; whereas hot cognition may throw a spanner in the works by pulling decisions towards a personal goal lurking in the back of our minds.

I think what makes cold biases particularly dangerous is that they are simply on all the time; and their unmotivated nature also means it’s hard to trace them back to someone’s self-interest in the decision-making process, thereby making it harder to correct for them.

There are dozens of different mental short-cuts and glitches identified by cognitive studies. Here, I highlight and examine three major ones that I believe we were particularly liable to fall victim to.

Confirmation bias

Oh boy. The mother of all biases. The bane of decision making. Confirmation bias is a mental energy-saving feature that nudges you to look for and accept information supporting your own preconceptions whilst making you more likely to ignore evidence or data contradicting your point of view. It is essentially cherry-picking stuff that feels comfortable and non-challenging, meaning information that is nicely aligned with what you already hold in your mind as the accepted truth.

Did this affect our product team? Well…

If our skulls were transparent and the activation of confirmation bias was signalled by a red flash, some of our meetings would have been like walking into an intense strobe party with only the beats and the fog missing.

Joking aside, this is one stubborn bias that affected us all to varying degrees, even the most level-headed and rational individuals.

In terms of preconceptions, we had at least two major camps in our debate. Senior stakeholders and PMs, myself included, were pretty much on the side of the commercially available option, whereas most of our developers were very much in favour of creating a native framework. I think this is a pretty typical divide and there’s nothing wrong with having a well-formed opinion to start with — in fact, it is inevitable and, by reflecting our different experiences, it brings value to the process.

Things get muddy, however, when these starting points solidify and we then use all subsequent arguments to prop them up instead of giving others a chance to shape them with their own reasonable points.

I caught myself and my colleagues doing this many a time. It is particularly easy to fall for this when the discussions happen in person, in sporadic email chains or in a company chatroom. Partly because these forums are conducive to bringing out other biases such as availability or anchoring (see below), but also because they lack a transparent and dispassionate framework for analysing the big picture rather than just smaller slices of it.

Our first attempt to escape the unproductive cycle and steer the discussions forward by pulling all the major arguments into a written document was not a great success. It was largely the author’s (my) fault: the text was too long, the narrative too long-winded, and looking back at it, it is now clear to me that it was shaped to support my points and left far too little space for seriously considering the alternatives. No wonder the document became a battlefield, scorched by passionate arguments and people latching onto anything that could be used against the other camp. It felt to me at the time that little actual progress was made.

That feeling was misleading: there were benefits, but they were of the slow-working and subtle kind. The written form and structure did force us to consider more aspects of the problem at hand and highlighted a lack of data in certain areas. It also allowed us to reach closure and consensus on some points, no matter how small or hard-won those might have been. The document also served as the basis for a much simpler decision matrix as the next step, which really brought us closer to a well-reasoned final judgement.
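For the curious, here is a minimal sketch of the kind of weighted decision matrix I mean; every criterion, weight and score below is a made-up placeholder for illustration, not a figure from our actual evaluation.

```python
# A minimal sketch of a weighted decision matrix for a build-vs-buy call.
# All criteria, weights and scores are hypothetical placeholders.

# Relative importance of each criterion (weights sum to 1.0).
WEIGHTS = {
    "ui_performance": 0.20,
    "time_to_market": 0.20,
    "maintenance_cost": 0.15,
    "flexibility": 0.15,
    "team_expertise": 0.15,
    "vendor_risk": 0.15,
}

# Each option is scored 1-5 per criterion by the team (higher is better).
SCORES = {
    "buy (commercial)": {
        "ui_performance": 3, "time_to_market": 5, "maintenance_cost": 4,
        "flexibility": 2, "team_expertise": 4, "vendor_risk": 2,
    },
    "build (in-house)": {
        "ui_performance": 5, "time_to_market": 2, "maintenance_cost": 3,
        "flexibility": 5, "team_expertise": 3, "vendor_risk": 4,
    },
}


def weighted_total(scores: dict) -> float:
    """Sum each criterion's score multiplied by its weight."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())


# Rank the options by their weighted totals, best first.
for option, scores in sorted(SCORES.items(),
                             key=lambda kv: weighted_total(kv[1]),
                             reverse=True):
    print(f"{option}: {weighted_total(scores):.2f}")
```

The point of such a matrix is less the final number than the framing: forcing every consideration into its own weighted row makes it harder for any single anchor, such as performance, to dominate the conversation.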

To me, the most important outcome was that by observing the reactions of my colleagues and then reflecting more deeply on my own, I realised just how much of this debate was affected by confirmation bias; and that I actively needed to flex my mental muscles to overcome it, because although I was working on the solution, I might also be part of the problem.

It’s not easy. It takes considerable effort and energy to force ourselves to willingly take on more cognitive dissonance rather than trying to reduce it. This is, however, a prerequisite to seriously considering others’ opinions.

In practical terms, we managed to get around most of the issues by focusing on things we could measure more objectively: we gathered more data, built prototypes, compared performance, and so on. I think we also made good strides in our mental conduct, as we generally paid more attention to each other once we all became better informed about the different aspects and implications of this complex choice.

Open and frequent communication, even if it sometimes appears to bear no direct results, is the tide that lifts all boats: the more you do it, the more the team’s knowledge converges, blunting the edge of confirmation bias.

In the end, I was won over by good arguments and the solid evidence supporting the ‘let’s build’ camp’s case; and once I was, it was actually quite rewarding to embrace the exact opposite of the side I had started the long debate on.

Availability

This is the mind’s tendency to recall the most recent and easily accessible example, fact or piece of information about something and give it disproportionate importance compared to older or less directly present evidence. In short, it means favouring fresh or readily available information over dated or second-hand information.

In our team, the availability bias manifested itself in concrete ways. Prior to our product conundrum, we had utilised the off-the-shelf solution on a couple of occasions, but our most recent experience was a negative one: we encountered bugs, deployment issues and general headaches that left a sour memory with the developers who were part of the project. This, in turn, heavily shaped their thinking whenever the topic came up, despite the fact that we had also had successful applications of the same tech in other client contexts before; that some of the issues might have been part of a learning curve; and that the industry at large had many success stories. These more positive instances, however, were naturally harder to recall, mostly because our team was less directly involved in them, and because the freshness of the recent pains was largely overshadowing other considerations.

This mental bias got pretty entrenched throughout the whole process, and it took conscious effort from our team to stop this latest episode from being the defining factor in our discussions.

Anchoring

This is another trick of the mind that makes you more likely to grab a piece of information (often the one you received first), hold on to it, and then evaluate everything else in light of it. Anchoring occurs when you rely far too heavily on one aspect of a problem whilst downplaying the relevance of other characteristics.

In our case, we attached a quite obvious anchor to the performance of the user interface. Our earlier experiments and prototypes showed a significant difference between the commercial and the potential in-house solution in terms of responsiveness for the end-user. This was an important aspect for good reason, but the gains we saw for the native tech were big enough that there was a tendency to over-emphasise this point at the expense of other serious considerations.

I felt that pulling our awareness back up a couple of levels was sometimes challenging, especially in the beginning. As more and more nuanced information and data got embedded in our heads, people gradually became more sensitive to the bigger picture. The summary matrix we created also helped: relegating performance to just one box among a dozen others made it harder to avoid the realisation that we had a complex decision at hand. In addition, we were actively challenged on this anchor by colleagues on the business side, and as a result I believe we handled this risk pretty well.

Clash of motivations

Besides cold biases, decisions are also affected by the personal objectives we bring to selecting and evaluating information.

No decision is ever made in a motivational vacuum, because any processing a brain performs is always contaminated by at least some degree of self-interest from the person owning that brain.

In our case, these motivational divides were pretty easy to detect. Software engineers want to code and to solve problems by coding. They are predisposed to flinch at the use of proprietary software, let alone software that forces them to rely on technology they have had negative encounters with in the past. They tend to prize flexibility in the tech above all and often struggle to appreciate the potential gains of filling the gaps with something limited that they cannot dissect and modify at will.

Managers and business leaders, on the other hand, focus on resource allocation and on meeting business priorities in the most cost-effective way. They are more inclined to consider commercial options, especially if the area is not strictly a core competency, and if there seems to be an industry-wide trend demonstrating successful applications of the candidate solution. They frown at committing precious team labour to a new, unplanned domain and seek a watertight business case before considering doing so.

Then there were more prosaic motivations at play too, for instance familiarity. Personally, I just liked the commercial solution. I had a longer history with it and some good applications under my belt using it, and was therefore likely to look at it favourably.

Motivations are powerful and they are needed for a business to operate. We hire developers to code; we recruit managers to keep their eyes on the business prize. Personal goals, however, can obstruct decision-making when they go unchecked and become a filter through which other opinions cannot truly reach you.

In our concrete case, motivational biases were heavily felt, and they often shaped how the two opposing sides formed their arguments and where they put the emphasis. However, I think we did a fairly good job of disarming the negative effects with honesty. It’s not magic: all you need to do is pay attention and acknowledge these biases. Whenever we detected that a discussion had been hijacked by someone’s personal angle, we called them out on it and course-corrected. These call-outs often took the form of humorous comments and admissions, which are great ways to dissipate tension.

Gotta stop thinking!

Another mental interference we had to face stemmed from what is called the need for cognitive closure. This refers to an individual’s desire to cease taking in new information, stop thinking about the problem and arrive at a decision. It is a natural inner pressure in everyone and it is beneficial to a certain degree as decisions have to be made in a finite amount of time in order to give way to action.

Individuals, however, differ radically in how long they are willing to keep their minds open before complexity overwhelms them. Some people like to keep it short and freeze on a decision early on; cognitive studies associate this kind of personality with a lower tolerance for ambiguous information. Others prefer inspecting a problem from every angle, gathering as much evidence as possible. Although a flexible mind is certainly a good trait to have, if the hoarding of information is taken to the extreme, it can lead to a state of confusion and indecision.

Freezing was not much of a problem for us, as our team’s cognitive profile was very much on the inquisitive side; but as the decision got drawn out more and more, we started showing signs of a mental impasse. This was largely because we had to rapidly assimilate a lot of new information that often radically altered the picture. For instance, the commercial candidate was not static: new features were added to it in quick succession, some of which had major impacts on points we were evaluating as the cornerstones of our decision.

This constant reprocessing and the resulting lengthy process was a source of frustration for us. I remember one of our developers aptly capturing the sentiment by jokingly exclaiming, after weeks of wrestling with the issue, that ‘we simply cannot decide anything!’. We were on the right track, but it was very important for us to acknowledge this problem so we could move forward with increased focus. We did this by drawing stricter timelines for gathering the missing data and by boiling the decision down to a few key variables instead of agonising over every single point.

Biological machines

Our cognitive capacity is a major factor in decision-making. The more complex the choice, the more mental reserves you need to be able to resist the pressures of shortcuts and force yourself to examine the problem in all its ambiguous details.

The pragmatic truth is that this capacity largely hinges on simple conditions of the body: the levels of energy we have; how tired we are; how hungry we are; how much cognitive load we have on us from other sources. Whether we like it or not, decisions are heavily affected by our physiology and keeping that in mind can often steer the process more effectively.

For instance, I observed that our team made much better arguments with a fresh mind, and that distracting factors like heightened emotions rose much more easily in the late evenings.

Once people get tired and closer to their next meal, emails get pointier, comments sharper, ironic remarks less subtle. This in turn can snowball into tense exchanges that rarely lead to anything other than dead-end arguments or next-day corrections that could easily have been avoided.

It is important to monitor and keep track of these bodily factors. After a few mistakes, I resolved to resist the urge to reply to that one nagging comment, to write that one hasty and snarky email. Think twice before sending out anything that is not the product of your best self, on a full stomach, in the bright light of day.

What can we do?

Although our brain may lay some traps along the way, it is a great decision-making tool (the best one we know of thus far). Below I share ten pieces of practical advice that may help mitigate some of the less desirable quirks of the mind when making major decisions.

  1. Be genuinely ready to be swayed by your colleagues. Confirmation bias is the number one enemy and an open mind is your best weapon against it.
  2. If you find your argument or opinion disproved, embrace this as a success and progress for the team rather than taking it as a personal defeat.
  3. Rely on data and objective measures as much as possible. It takes the personal aspect out of the debate and lends the arguments a degree of credibility that is harder to ignore.
  4. Humour is a potent and free tool at your disposal that can disarm tensions. Use it liberally.
  5. If you detect motivational biases, be transparent in pointing them out. It is entirely possible to do this in a friendly way that will move the discussions forward.
  6. Pay attention to group dynamics. Fault lines should be prevented before they fully form by using strong communication to build a platform of common knowledge, understanding and shared purpose.
  7. Never, ever reply to edgy comments when you’re tired, hungry or under extreme stress.
  8. When you feel the strongest emotions, push back against them the hardest. If you need to blow off some steam, do some sport, or write emails and don’t send them.
  9. Keep your cognitive process open, but be mindful of the dangers of indecision. At the end of the day, a decision has to be made so try to establish concrete and achievable sub-goals to get there.
  10. Be aware of mental biases and actively try to put this knowledge into practice. Observe yourself and try different mitigation tactics. If you can help yourself, you’re more likely to be able to help your team as well.
