The Best Testers Are Scientists

Gabriel Jiva
Apr 17, 2021

It doesn’t take long to appreciate a great software tester. And it doesn’t matter if she’s a manual tester or writes automated tests, because what really matters are the types of tests being run: curious tests. Tests that don’t just discover a bug and quickly document it away in a ticket, along with the state of the whole world at the time of discovery, but instead try to pin down the exact circumstances in which the bug occurs.

The more precisely those circumstances are defined, the more helpful the ticket is to the developer; ideally, it takes their mind straight to the exact function responsible for the bug. In those cases, you can almost see the light bulb go off:

Tester: I’ve only seen the bug on the audio configuration screen, and it usually crashes the app after single-clicking the “source” input, but I’ve seen it a couple of times from the “save” button too. And it seems to only happen after a fresh install on Android 10.

Dev: ohhhh! That’s because the way we handle configuration in Android 10 changed and the file the audio source is saved in doesn’t exist anymore!

This is exactly the kind of dev reaction you want to a bug report: an immediate diagnosis of the problem, made possible only by a well-researched, well-described bug. But notice how that description could, with changes only to the jargon, have been written by an entomologist:

Entomologist: I’ve only seen the bug on a tiny island off the coast of Madagascar, and it’s usually blue with green spots, but I’ve seen a couple of them with yellow spots too. And it seems to only come out right after sunset in the wet season.


Which is kind of obvious when you think about it, because what do scientists do? They test the software that is our reality. Galileo’s gravity experiment is one of the more famous in history (and likely never happened), but what is it, in software terms? He wanted to know if the rules of our universe took weight into account when pulling things toward the Earth. A previous power user, Aristotle, figured that the heavier a thing was, the faster it would fall. But that user failed to actually do any testing. So thank God that talented testers, like John Philoponus and Simon Stevin, came along and figured out that things mostly fall at the same rate through air, and then bothered to update the documentation.

What Aristotle did was assume the software worked in a certain way. Granted, he didn’t have the requirements to reference, but he probably noticed that you have to kick a heavy ball harder to make it go as far as a light one, and figured the Earth must likewise pull harder on heavier things, making them fall faster. That’s the equivalent of our tester above seeing the “source” input work on Android 9 and not bothering to test it on 10. Or seeing that it worked on the video configuration screen and not bothering to test it on the audio one too.

And that’s okay, because Aristotle was not a tester. He was more like a fanboy blogger. But what testers should be is bona fide scientists, like Simon Stevin, who follow the scientific method:

  1. Ask a question
  2. Form a hypothesis
  3. Make a prediction based on your hypothesis
  4. Run a test
  5. Analyze the results

In our example with the “source” input, after the tester saw it the first time, she probably did something like this:

  1. “why did it crash?”
  2. “maybe it was because I pressed the ‘source’ input”
  3. “if so, that’ll make it crash again”
  4. Relaunched the app, tried it, it crashed again.
  5. “okay, that was definitely the reason”

Aristotle might stop there and file the bug: “app crashes when using the ‘source’ input”. And the developer would try replicating it on their Android 9 phone and kick the ticket back with “couldn’t replicate”, and that whole cycle would be a waste of time. But our tester asked another question:

  1. “does it crash on this other phone?”
  2. “if it doesn’t, it’s a more nuanced bug”
  3. “I think it’ll crash though”
  4. Tried it on the other phone: it didn’t crash
  5. “what’s different about this phone?”

And she continued the scientific process like that, asking more and more pertinent questions, until the environment our bug lives in was fully described. Which is exactly what you want in a bug report, because anything less will, in aggregate, be a productivity weevil, wasting both developer and tester time on duplicated replication efforts, conjectures about the tester’s environment, and endless back-and-forth. A clear, complete bug report does wonders for productivity.
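
For what it’s worth, the tester’s loop translates almost directly into code. Below is a minimal sketch in Python against a simulated app; every name in it (app_crashes, the condition matrix, the faked defect) is hypothetical and invented for illustration, and in real life the function would drive the actual app through a UI-automation tool.

    # Simulated system under test: returns True if the app crashes.
    # The defect is faked here so the sketch stays self-contained and runnable.
    def app_crashes(android_version: int, fresh_install: bool, control: str) -> bool:
        # The faked defect: the config file is missing after a fresh install
        # on Android 10, and both the "source" and "save" controls hit it.
        return android_version == 10 and fresh_install and control in ("source", "save")

    # Each row is one turn of the method: a question, a hypothesis, and a
    # prediction, answered by running the test and analyzing the result.
    experiments = [
        (10, True,  "source"),  # the original repro: will it crash again?
        (9,  True,  "source"),  # does it crash on the other (Android 9) phone?
        (10, False, "source"),  # does the fresh install matter?
        (10, True,  "save"),    # is the "source" input really the only trigger?
    ]

    for android_version, fresh_install, control in experiments:
        crashed = app_crashes(android_version, fresh_install, control)
        print(f"Android {android_version}, fresh={fresh_install}, "
              f"control={control!r}: {'crash' if crashed else 'no crash'}")

The printout maps the bug’s habitat, crashing only on fresh Android 10 installs and from both controls, which is exactly what the finished ticket should say.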

So then, why not just teach all your testers the scientific method? Because that alone doesn’t work in the real world. We all learn the scientific method, but few of us become scientists. And I imagine that, just like in any profession, a not-insignificant number of scientists aren’t good scientists. Knowing the scientific method is necessary, but not sufficient, to make a good scientist. You also need creativity, to ask the interesting questions, and more importantly, curiosity, to keep the process going until its natural conclusion: to uncover the whole plot.

Tangentially, curiosity is a hugely important trait in great developers, too. But for testers, even more so.

Originally published at https://www.gabrieljiva.com on April 17, 2021.
