Biases served cold
Cognitive scientists differentiate between so-called cold and hot cognition. The former refers to the act of simply using our brain for its usual business of perception, forming memories and making judgements; whereas the latter also injects motivations into the process.
Cold cognition is riddled with biases that are natural byproducts of the way we process information in general; whereas hot cognition may throw a spanner in the works by pulling decisions towards a personal goal lurking in the back of our minds.
I think what makes cold biases particularly dangerous is that they are simply on all the time; and their unmotivated nature also means it’s hard to trace them back to someone’s self-interest in the decision-making process, thereby making it harder to correct for them.
There are dozens of different mental short-cuts and glitches identified by cognitive studies. Here, I highlight and examine three major ones that I believe we were particularly liable to fall victim to.
Confirmation bias
Oh boy. The mother of all biases. The bane of decision making. Confirmation bias is a mental energy-saving feature that nudges you to look for and accept information supporting your own preconceptions whilst making you more likely to ignore evidence or data contradicting your point of view. It is essentially cherry-picking stuff that feels comfortable and non-challenging, meaning information that is nicely aligned with what you already hold in your mind as the accepted truth.
Did this affect our product team? Well…
If our skulls were transparent and the activation of confirmation bias was signalled by a red flash, some of our meetings would have been like walking into an intense strobe party with only the beats and the fog missing.
Parking the joke, this is one stubborn bias that affected us all to varying degrees, even the most level-headed and rational individuals.
In terms of preconceptions, we had at least two major camps in our debate. Senior stakeholders and PMs, myself included, were pretty much on the side of the commercially available option; whereas most of our developers were very much in favour of creating a native framework. I think this is a pretty typical divide and there’s nothing wrong with having a well-formed opinion to start with — in fact, it is inevitable and, by reflecting our different experiences, it brings value to the process.
Things get muddy, however, when these starting points solidify and we then use all subsequent arguments to prop them up instead of giving others a chance to shape them with their own reasonable points.
I caught myself and my colleagues doing this many a time. It is particularly easy to fall for this when the discussions happen in person, in sporadic email chains or in a company chatroom. Partly because these forums are conducive to bringing out other biases such as availability or anchoring (see below), but also because they lack a transparent and dispassionate framework for analysing the big picture rather than just smaller slices of it.
Our first attempt to escape the unproductive cycle and steer the discussions forward by pulling all the major arguments into a written document was not a great success. It was largely the author’s (my) fault: the text was too long, the narrative too meandering and, looking back at it, it is now clear to me that it was shaped to support my points and left far too little space for seriously considering the alternatives. No wonder the document became a battlefield, scorched by passionate arguments and people latching onto anything that could be used against the other camp. It felt to me at the time that little actual progress was made.
That was not entirely true — there were benefits, but they were of the slow-working and subtle kind. The written form and structure did force us to consider more aspects of the problem at hand and highlighted a lack of data in certain areas. It also got us closure and consensus on some points, no matter how small or hard-won those might have been. The document also served as the basis for a much simpler decision matrix as the next step, which really brought us closer to a well-reasoned final judgement.
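For what it’s worth, the mechanics of such a decision matrix are simple enough to sketch in a few lines. The criteria, weights and scores below are purely illustrative placeholders, not our actual figures:

```python
# A minimal weighted decision matrix, similar in spirit to the one we built.
# All criteria, weights and scores here are hypothetical examples.

CRITERIA = {                 # relative importance of each criterion (sums to 1.0)
    "ui_performance": 0.25,
    "dev_effort": 0.20,
    "maintenance_cost": 0.20,
    "flexibility": 0.20,
    "vendor_risk": 0.15,
}

# Each option scored 1 (poor) to 5 (excellent) per criterion.
SCORES = {
    "commercial": {"ui_performance": 2, "dev_effort": 5,
                   "maintenance_cost": 4, "flexibility": 2, "vendor_risk": 2},
    "in_house":   {"ui_performance": 5, "dev_effort": 2,
                   "maintenance_cost": 3, "flexibility": 5, "vendor_risk": 4},
}

def weighted_total(option: str) -> float:
    """Sum of score * weight across all criteria for one option."""
    return sum(SCORES[option][c] * w for c, w in CRITERIA.items())

for option in SCORES:
    print(f"{option}: {weighted_total(option):.2f}")
```

The real value for us was less in the final number and more in being forced to write down every criterion and argue about its weight in the open, rather than letting a single pet concern dominate the conversation.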
To me, the most important outcome was that by observing the reactions of my colleagues and then reflecting more deeply on my own, I realised just how much of this debate was affected by confirmation bias; and that I actively needed to flex my mental muscles to overcome it, because although I was working on the solution, I might also have been part of the problem.
It’s not easy. It takes considerable effort and energy to force ourselves to willingly take on more cognitive dissonance rather than trying to reduce it. This is, however, a prerequisite to seriously considering others’ opinions.
In practical terms, we managed to get around most of the issues by focusing on things we could measure more objectively: we gathered more data, made prototypes, compared performance and so on. I think we also made good strides in our mental conduct, as we generally paid more attention to each other once we all became better informed about the different aspects and implications of this complex choice.
Open and frequent communication, even if it sometimes appears to bear no direct results, is the tide that lifts all boats: the more you do it, the more the team’s knowledge converges, blunting the edge of confirmation bias.
In the end, I was won over by good arguments and the solid evidence supporting the ‘let’s build’ camp’s case; and once I was, it was actually quite rewarding to embrace the exact opposite of the side I had started the long debate on.
Availability bias
This is the mind’s tendency to recall the most recent and most easily accessible example, fact or piece of information about something and then give it a disproportionate amount of importance compared to older or less directly present evidence. In short, it means favouring fresh or readily available information over dated or second-hand information.
In our team, the availability bias manifested itself in concrete ways. Prior to our given product conundrum, we did utilise the off-the-shelf solution on a couple of occasions, but our most recent experience was a negative one: we encountered bugs, deployment issues and general headaches that left a sour memory in our developers who were part of the project. This, in turn, heavily shaped their thinking whenever the topic came up, despite the fact that we also had successful applications of the same tech in other client contexts before; that some of the issues might have been part of a learning curve; and that the industry at large had many success stories. These more positive instances, however, were naturally harder to recall, mostly because our team was less directly involved in those; and because the freshness of the recent pains was largely overshadowing other considerations.
This mental bias became pretty entrenched throughout the whole process and it took conscious effort from our team to prise this latest episode away from being one of the most defining factors in our discussions.
Anchoring
This is another trick of the mind that makes you more likely to grab a piece of information (often one that you received first), hold on to it, and then evaluate everything else in light of that. Anchoring occurs when you rely way too heavily on one aspect of a problem whilst downplaying the relevance of other characteristics.
In our case, we attached quite an obvious anchor to the performance of the user interface. Our prior experiments and prototypes showed a significant difference between the commercial and potential in-house solutions in terms of responsiveness for the end-user. This was an important aspect for good reason, but the gains we saw for the native tech were big enough that there was a tendency to over-emphasise this point at the expense of other serious considerations.
I felt that pulling our awareness back up a couple of levels was sometimes challenging, especially in the beginning. As more and more nuanced information and data got embedded in our heads, people gradually became more sensitive to the bigger picture. The summary matrix we created also helped: relegating performance to just one box among a dozen others made it harder to avoid the realisation that we had a complex decision at hand. In addition, we were actively challenged on this anchor by colleagues on the business side, and as a result I believe we handled this risk pretty well.
Clash of motivations
Besides cold biases, decisions are also affected by the personal objectives we bring to selecting and evaluating information.
In our case, these motivational divides were pretty easy to detect. Software engineers want to code and solve problems by coding. They are predisposed to flinch at the use of proprietary software, let alone software that forces them to rely on a technology they have had negative encounters with in the past. They tend to prize flexibility in the tech above all and often struggle to appreciate the potential gains of filling the gaps with something limited that they cannot dissect and modify at will.
Managers and business leaders, on the other hand, focus on resource allocation and on matching the business priorities in the most cost-effective way. They are more inclined to consider commercial options, especially if the area is not strictly a core competency and if there seems to be an industry-wide trend demonstrating successful applications of the candidate solution. They frown at committing precious team labour to a new, unplanned domain and seek a watertight business case before considering doing so.
There were more prosaic motivations at play too: familiarity, for instance. Personally, I just liked the commercial solution. I had a longer history with it and some good applications under my belt using it, and therefore I was likely to look at it favourably.
Motivations are powerful and they are needed for a business to operate. We hire developers to code; we recruit managers to keep their eyes on the business prize. Personal goals, however, can obstruct decision-making when they go unchecked and become a filter through which other opinions cannot reach you, or at least not in earnest.
In our concrete case, motivational biases were heavily felt and they often shaped how the two opposing sides formed arguments and where they put the emphasis. However, I think we did a fairly good job of disarming the negative effects with honesty. It’s not magic: all you need to do is pay attention and acknowledge these biases. Whenever we detected that a discussion had been hijacked by someone’s personal angle, we called it out and course-corrected. This often took the form of humorous comments and admissions, which are great ways to dissipate tension.
Individuals, however, differ radically in how long they are willing to keep their minds open before complexity overwhelms them. Some people like to keep it short and freeze on a decision early on. Cognitive studies associate this kind of personality with a lower tolerance for ambiguous information. Others prefer inspecting a problem from every angle, gathering as much evidence as possible. Although a flexible mind is certainly a good trait to have, if the hoarding of information is taken to the extreme, it can lead to a state of confusion and indecision.
Freezing was not much of a problem for us, as our team’s cognitive profile was very much on the inquisitive side; but as the decision got drawn out more and more, we started showing signs of a mental impasse. This was largely because we had to rapidly assimilate a lot of new information that often radically altered the picture. For instance, the commercial candidate was not static: new features were added to it in quick succession, some of which had major impacts on points we were treating as the cornerstones of our decision.
This constant reprocessing and the resulting lengthy process was a source of frustration for us. I remember one of our developers aptly capturing this sentiment after weeks of wrestling with the issue by jokingly exclaiming that ‘we simply cannot decide anything!’. We were on the right track, but it was important for us to acknowledge this problem so we could move forward with increased focus. We did so by drawing stricter timelines for gathering the missing data and by boiling the decision down to a few key variables instead of agonising over every single point.