Last week I started testing an update to a complex legacy process. At first, my head was spinning (it still kind of is). There are so many inputs and test scenarios...so much I don’t understand. Where to begin?
I think doing something half-baked now is better than doing something fully-baked later. If we start planning a rigorous test based on too many assumptions, we may not understand what we’re observing.
In my case, I started with the easiest tests I could think of (sketched in code after the list):
- Can I trigger the process-under-test?
- Can I tell when the process-under-test completes?
- Can I access any internal error/success logging for said process?
- If I repeat the process-under-test multiple times, are the results consistent?
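To make that concrete, here’s a minimal sketch of those first checks as a pytest file. Everything in it is a hypothetical stand-in: `trigger_process`, `wait_for_completion`, and `read_log` are placeholders for however your process-under-test is actually triggered and observed.

```python
# smoke_tests.py -- run with `pytest smoke_tests.py`.
# The three helpers below are stubs standing in for the real legacy
# process; swap in however you actually trigger and observe it.

def trigger_process():
    # Hypothetical hook: kick off the process-under-test, return a handle.
    return {"done": True, "log": "SUCCESS"}

def wait_for_completion(run, timeout_seconds=60):
    # Hypothetical hook: block until the run finishes (stubbed as instant).
    return run["done"]

def read_log(run):
    # Hypothetical hook: fetch the run's internal error/success logging.
    return run["log"]

def test_can_trigger():
    # Can I trigger the process-under-test at all?
    assert trigger_process() is not None

def test_can_detect_completion():
    # Can I tell when the process-under-test completes?
    assert wait_for_completion(trigger_process())

def test_can_read_logging():
    # Can I access internal error/success logging?
    assert read_log(trigger_process()) is not None

def test_repeated_runs_are_consistent():
    # If I repeat the process multiple times, are the results consistent?
    logs = [read_log(trigger_process()) for _ in range(3)]
    assert all(log == logs[0] for log in logs)
```

The point isn’t the assertions themselves; it’s that each check is cheap to write and teaches you one thing about how the process behaves before you invest in anything rigorous.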
If there were a spectrum showing where a test’s focus falls between learning by observing the something-under-test and learning by manipulating it, it might look like this:

observe only <------------------------------> manipulate
My tests started on the left side of the spectrum and worked their way right. Now that I can get consistent results, let me see if I can manipulate the process-under-test and predict its results (again, sketched after the list):
- If I pass ValueA to InputA, do the results match my expectations?
- If I remove ValueA from InputA, do the results return as before?
- If I pass ValueB to InputA, do the results match my expectations?
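In code, that “change one thing, predict the result” loop maps naturally onto a parametrized test. As before, this is only a sketch: `run_process` and the expected results are hypothetical placeholders for your actual process and observations.

```python
import pytest

def run_process(input_a=None):
    # Hypothetical stand-in: run the process with InputA set (or unset)
    # and return its observable result.
    return f"result-for-{input_a}"

@pytest.mark.parametrize(
    "value, expected",
    [
        ("ValueA", "result-for-ValueA"),  # pass ValueA to InputA
        (None, "result-for-None"),        # remove ValueA: back to baseline?
        ("ValueB", "result-for-ValueB"),  # pass ValueB to InputA
    ],
)
def test_input_a_matches_expectations(value, expected):
    assert run_process(input_a=value) == expected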
As long as my model of the process-under-test matches the observations above, I can start expanding the complexity (one more sketch after the list):
- If I pass ValueA and ValueB to InputA and ValueC and ValueD to InputB, do the results match my expectations?
- etc.
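Expanding to combinations is mostly a matter of widening the parametrization. A sketch under the same assumptions, with the hypothetical `run_process` now taking two inputs:

```python
import itertools
import pytest

def run_process(input_a=(), input_b=()):
    # Hypothetical stand-in: run with multiple values per input.
    return (tuple(input_a), tuple(input_b))

# Cross every InputA combination with every InputB combination so a
# surprising pairing can't hide behind a single hand-picked case.
INPUT_A_CASES = [("ValueA",), ("ValueA", "ValueB")]
INPUT_B_CASES = [("ValueC",), ("ValueC", "ValueD")]

@pytest.mark.parametrize(
    "a_values, b_values",
    list(itertools.product(INPUT_A_CASES, INPUT_B_CASES)),
)
def test_input_combinations(a_values, b_values):
    result = run_process(input_a=a_values, input_b=b_values)
    # Hypothetical expectation: the process echoes its inputs back.
    assert result == (a_values, b_values)
```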
Now I have something valuable to discuss with the programmer or product owner: “I’ve done the above tests. What else can you think of?” It’s much easier to have this conversation when you’re not completely green, when you can show some effort. It’s easier for the programmer or product owner to help when you lead them into the zone.
The worst is over. The rest is easy. Now you can really start testing!
Sometimes you just have to do something to get going. Even if it’s half-baked.