What is Testing?
Providing validation that a product is fit for its intended purpose, and ensuring all parties are made aware of unintended side effects.
How Much Testing Should I do?
You want confidence that the intended flows work. You want confidence that negative flows fail as they should.
You want to be creative in trying to break things.
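As a concrete illustration of positive and negative flows, here's a minimal sketch using plain asserts. The `apply_discount()` function and its rules are hypothetical, invented for this example rather than taken from any real system.

```python
# Hypothetical function under test: applies a discount code to a price
# and rejects codes it doesn't recognise.
def apply_discount(price: float, code: str) -> float:
    """Apply a discount code to a price; reject unknown codes."""
    codes = {"SAVE10": 0.10, "SAVE25": 0.25}
    if code not in codes:
        raise ValueError(f"unknown discount code: {code}")
    return round(price * (1 - codes[code]), 2)

# Positive flow: a valid code produces the expected price.
assert apply_discount(100.0, "SAVE10") == 90.0

# Negative flow: an invalid code must be rejected, not silently accepted.
try:
    apply_discount(100.0, "HACK99")
    raise AssertionError("expected a ValueError for an unknown code")
except ValueError:
    pass
```

The negative-flow check is the one people tend to forget: it asserts that the system refuses to do something, not just that it does the right thing.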
What does a Tester need to think about?
A tester needs to use the application like a user. This allows you to form realistic expectations of what a feature/product should allow you to do.
A tester also needs to understand the technical implementation like a developer. This allows you to come up with creative ways to break a feature/product, and to identify which shortcuts you can take in testing (it's very inefficient to test everything all the time; more on this later).
What to Check
Feature Testing
Identify some acceptance criteria and a few things you want to know about as part of working on the story.
Regression Testing
These are tests which provide sufficient confidence that your new product/feature has no adverse effects elsewhere. What are some of the other functions/features of the component that has been changed? Maybe just check that some of them work.
Exploratory Testing
When a new feature/product is released it can be quite good to have an exploratory session where you use the system and play around with changing things almost at random. While normally everything will work as intended, sometimes you'll unearth some funky behaviour. Pro tip: follow up on anything that looks suspicious or broken, as bugs tend to cluster together. It can also be really good to check each feature or integration point briefly (can system A talk to system B in the right way?) rather than getting bogged down in all conceivable edge cases in a single area.
How Much Testing to do
It's inefficient to test all the things all the time. People sometimes have rules like "If it failed once, put a test in place!", but really it depends on how much testing is needed. It's easy to fall into test paralysis, where a release is held up as completely redundant tests are executed just so "everyone is doing their job".
This is influenced by how many users there are (a few, vs millions), the effects of a bug existing (wrong text displayed, or a satellite will fall out of the sky), and the likelihood of there being bugs (is the change trivial, or has something been fundamentally altered).
We can guide our intuition here by understanding the technical implementation, the code changes which have occurred since the last good release, what bugs have been identified in the past, and what testing is already in place (unit, UI, other?).
As a tester you have to think about those things and build an idea of what to check before you release. This isn't something that can be taught directly; it should be used to drive discussions with the team and the wider company to reach a shared understanding of the testing required at each stage.
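The "if it failed once, put a test in place" rule mentioned above usually means pinning a past bug with a regression test, so the fix can't silently revert. Here's a minimal sketch; the `slugify()` function and the bug it once had (dropping accented titles entirely) are hypothetical examples, not from any real incident.

```python
import re
import unicodedata

def slugify(title: str) -> str:
    """Turn a title into a URL slug; accented characters are transliterated."""
    normalized = unicodedata.normalize("NFKD", title)
    ascii_only = normalized.encode("ascii", "ignore").decode("ascii")
    return re.sub(r"[^a-z0-9]+", "-", ascii_only.lower()).strip("-")

# Regression check pinning the (hypothetical) past bug: accented titles
# used to produce empty slugs. Keeping this assertion in the suite means
# a reintroduced bug fails loudly instead of shipping twice.
assert slugify("Café au Lait") == "cafe-au-lait"

# Ordinary-case check alongside it, so the pin isn't the only coverage.
assert slugify("  Hello, World!  ") == "hello-world"
```

The point is proportionality: a targeted test for a bug that actually happened is cheap insurance, whereas blanket rules produce the redundant suites described above.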
Here we seem to have small release cycles, which means quality levels can never drop very far (if you have a lot of changes between releases then quality naturally drops quite a bit). By not having a big quality problem, we don't need to spend weeks performing massive regression cycles to catch and fix all the bugs. This seems trivial now. Oh, how painful the old ways were.