Human factors affecting software quality

VALA Group
Feb 23, 2021

This is a continuation of the blog post “What Software Testing Is Not and What It Can Not Achieve”.

INTRODUCTION

The more time I have spent testing, the more obvious it has become that technical skills and expertise alone are not enough to become a really great tester. One also needs good communication skills and at least a basic understanding of how the human mind works, and of how that can affect decisions with a serious impact on software quality.

There can be any number of such factors but here are a few of the more common ones, in my experience (in no particular order):

  1. Conflict of interest
  2. Cognitive laziness
  3. False confidence
  4. Automatic assumptions
  5. Ego

In this blog post, I will talk more about these factors.

CONFLICT OF INTEREST

Now this is a devious one. One definition of a conflict of interest is: “A situation in which the concerns or aims of two different parties are incompatible” (source: Oxford Lexico). How might this affect software quality?

Here’s one possible scenario: the company has promised to pay the project manager an extra annual bonus if certain quality criteria are met by a specific date. Now, if the software is close to meeting said criteria but several major bugs are discovered just a couple of days before the deadline, those bugs might threaten the manager’s chances of receiving the extra bonus. In such a case there might be a serious temptation on the project manager’s part to hide or downplay the severity of the discovered bugs in order to meet the bonus criteria, even though doing so would likely have a negative impact on software quality.

Another example might be a tester noticing some weird software behavior at 5:30pm on a Friday afternoon, just as they are about to leave the office for the weekend, with a new release due to go live first thing on Monday morning. Now, does the tester stay late and start investigating — something that might take hours depending on the issue — or do they leave the software as is and then go: “oh hey, look what I just found”, after the release?

Both of these examples describe situations where personal interest conflicts with the project’s best interest, but there can be other kinds of conflicts of interest, too. For example, ego clashes can cause problems even where there is no real conflict of interest per se, but two or more of the involved parties have a huge desire to show off. Think of two footballers playing on the same team, both of whom want to be the one to score: they have the exact same target, but each wants to be the star of the team by scoring the winning goal.

COGNITIVE LAZINESS

Daniel Kahneman, a Nobel Prize-winning psychologist, writes extensively about this one in his book “Thinking, Fast and Slow”. Put very briefly and simply: the more primitive parts of the brain, shaped by millions of years of evolution, form a very efficient system capable of making split-second decisions. Historically, a creature’s life or death may have depended on it. In the book, Kahneman calls this “System 1” — a fictional system within the brain (not an actual physiological part like, say, the hypothalamus) that is responsible for “fast thinking”: quick, intuitive or instinctive decision making. It is coupled with “System 2” — another fictional system that is responsible for “slow thinking”: all of the careful, deliberate consideration most people do every now and then.

A crucial difference between the two is that System 1 is an always-on autopilot mode while System 2 is only on when consciously engaged.

Another important factor here is that System 1 does not care about details or accuracy — it just quickly builds up a cute story using whatever data is available right now, completely ignoring how much data there is or how reliable it is. As a result, the ideas offered to one’s consciousness by System 1 are often wildly inaccurate or just plain wrong. This is especially devious because, in most cases, people do not even realise they just made an instinctive decision, and when they do not realise that, they cannot question it either. Problems abound.

The unfortunate, albeit human, reality here is that everyone, no matter how alert, will fall prey to this every now and then because the brain wants to take the path of least resistance (read: least effort). Deliberate consideration is effortful — it takes time and energy — and the brain does not want to maintain this effortful state for extended periods of time.

The greatest testers I know have diligently trained themselves over the years to recognise when they have just made an instinctive assumption or decision, so they can stop and question it. Since falling back to System 1 is more likely when a person’s mental acuity is lowered, due to being tired or hungry, for example, it might be a good idea to revisit the thoughts, ideas, and assumptions of a late-night working session after a good night’s sleep and a proper meal.

FALSE CONFIDENCE

This is also known as complacency. One definition of complacency is: “self-satisfaction especially when accompanied by unawareness of actual dangers or deficiencies” (source: Merriam-Webster). I am confident that the risks this kind of behavior may pose to the quality of a product, or even to the entire project, are quite apparent. In psychology, extreme cases of this are also known as the Dunning-Kruger effect. In plain English, it simply means that a person grossly overestimates their own level of skill, expertise and/or knowledge of a subject or, in other words, is blissfully unaware of how little they know while being absolutely convinced that they are, in fact, an expert on the subject.

A certain United Statesian president readily springs to mind here.

AUTOMATIC ASSUMPTIONS

“What’s 1+1?”

The immediate answer springing to mind is, of course, 2. However, without knowing the context at hand, that can be a very hasty, and risky, assumption to make. Do you know which number base is being used here? If we are talking about binary, then the answer is, in fact, 10.
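
If you want to see the same point in code, here is a minimal Python sketch (purely illustrative): the value of one plus one never changes, only the notation used to write it down does.

    # The digit "1" means the same value whether you read it in base 2 or base 10.
    one = int("1", 2)            # parse "1" as a binary number -> 1

    total = one + one            # the arithmetic itself does not care about the base

    print(total)                 # 2  -- the sum written in decimal notation
    print(format(total, "b"))    # 10 -- the exact same sum written in binary notation

The value is the same either way; only the representation changes, and that representation is exactly the kind of contextual detail an automatic assumption skips over.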

Automatic assumptions are potentially one of the riskiest human factors affecting decision-making, outcomes and product quality in any context. This is System 1 in full swing, by the way. These mistakes happen when one does not question their immediate, instinctive reactions or thoughts and just allows themselves to go on autopilot.

As a very tangible example of this, the Mars Climate Orbiter, a 125-million-dollar space probe, was lost in 1999 when it approached Mars at far too low an altitude and was destroyed. The reason behind the loss: the two teams involved, one at NASA’s Jet Propulsion Laboratory and the other at the contractor Lockheed Martin, simply didn’t take unit conversions into account. One team was using imperial units while the other was using metric units, and everyone on both teams simply assumed that the numbers they saw were in the measuring system they themselves used. In hindsight, it’s not very difficult to see the problem when trying to send a multi-million-dollar space probe to another planet and place it into orbit with near-surgical precision.
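
To make that failure mode a bit more concrete, here is a small, hypothetical Python sketch (the function, variable names and numbers are made up for illustration, not taken from the actual mission software): the receiving side silently assumes newton-seconds, the sending side produces pound-force seconds, and because only a bare number crosses the interface, nothing ever flags the mismatch.

    # One pound-force second is roughly 4.448222 newton-seconds.
    LBF_S_TO_N_S = 4.448222

    def record_thruster_impulse(impulse):
        # Receiving side: silently assumes the value is in newton-seconds.
        print(f"Recorded impulse: {impulse:.1f} N*s")

    # Sending side: computes the impulse in pound-force seconds...
    impulse_lbf_s = 250.0

    # ...and passes the bare number along. The unit assumption is never checked,
    # so the recorded value ends up off by a factor of roughly 4.45.
    record_thruster_impulse(impulse_lbf_s)                  # wrong: 250.0 "N*s"
    record_thruster_impulse(impulse_lbf_s * LBF_S_TO_N_S)   # right: ~1112.1 N*s

One common way to make such assumptions explicit is to put the unit in the parameter or variable name, or to pass values wrapped in a unit-aware type, so that a mismatch has to be converted deliberately instead of being guessed at.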

Finally, here’s a little puzzle for you to consider (if you’re interested, do contact me and we can discuss this in more detail): “You are carrying a calculator. You drop it. It might be broken. How might you test it?”

EGO

Another difficult one, in many ways. Picture a boss being repeatedly contradicted in a meeting by an engineer. Facial muscles start twitching, teeth start grinding, annoyance builds up, and the topmost thought in the boss’s head is: “how dare he?! I am the CEO!” What is the likely end result? Potentially, the engineer’s points get ignored or overridden, despite the engineer having superior (technical) knowledge and understanding of the subject at hand, and, as a result, a new risk or possible point of failure gets introduced into the project.

Of course, this works both ways: while the engineer may have superior technical knowledge and understanding, they might not have a clue about the business side — an equally critical aspect of the whole — which the CEO likely understands far better. Not all decisions can be technical if you want to build a successful product.

As a quick off-topic sidetrack, history is full of examples of a technically superior product losing out to one that is cheaper or more convenient to use, or has superior marketing. VHS winning over Betamax is one — Betamax had superior image quality, but VHS had longer recording times and, in Europe, VHS recorders and tapes were considerably cheaper to buy.

Back on topic: egotistical behavior can be destructive to software (and project) quality in many ways. The above-mentioned boss might choose to go against the engineer’s recommendations simply because they were repeatedly contradicted and feel they have to send a message: “I call the shots here, so we will do this my way!” This kind of behavior is neither sensible nor responsible, but it is all the more human.

Likely the most problematic situations arise when there are two or more such people working on the same project. Ego clashes can have catastrophic consequences.

From a psychological point of view, exerting one’s ego in this way is often a sign of either insecurity or a sense of superiority: such a person behaves the way they do in order to assert dominance. Whether it’s because the person feels they are better or more important than the others, or because they have a nagging fear of being worthless, is an entirely different story and well outside the scope of this blog post. Whatever the underlying reasons, it is not something to be taken lightly.

FINAL WORDS

The human factors mentioned above can all be decisive in whether or not a project succeeds and delivers a product that actually solves the problems it was meant to solve. There are likely dozens more, but, as I stated in the introduction, these are just some of the more common ones that I have come across in my working life.

All of these factors, or at least their negative effects, can be avoided to a degree through simple humility and careful, deliberate contemplation, coupled with quite a bit of critical thinking. No one and nothing lies to us and deceives us as much as our own brain, so I would strongly advise learning to question its suggestions and to look deeper under the surface into why you think or feel the way you do, because it might very well be your System 1 autopilot making decisions for you, without sufficient data to be doing so.

Avoid getting in trouble — engage System 2!

About the writer:

Petteri Lyytinen, Software Detective

Petteri is a software testing enthusiast who has been doing testing professionally since 2004. He is an active member of the global testing community and has authored various publications, ranging from blog posts to a joint book project with other testing professionals from around the world. Petteri was a “Finnish Tester of the Year” nominee in 2011. He currently lives in Estonia but quite often works in the Helsinki metropolitan area.
