Admittedly there are no hard and fast rules one can follow to make a test 'good', merely some traps for young players to be wary of, to help ensure the test they are creating is fit for purpose both now and in the future.
Somewhat idealized end goals:
- Have a bug result in one and only one test failing
- Have only one artifact to update when a single piece of functionality is changed
- Have others be able to follow your work quickly and comfortably
New terminology
Verificational and navigational steps: a test script really contains two types of step. One verifies the behaviour of the specific feature you are interested in; the other navigates you to a place from which you can verify. These are verificational steps and navigational steps respectively. For the latter it doesn't matter how the script navigates around, or whether defects are encountered en route; the only thing a test script is concerned with is verifying what it was intended to verify.
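To make the distinction concrete, here is a minimal sketch in Python. The shopping-cart flow and helper names are invented for illustration; the point is that the navigational helper performs no checks of its own, while the test's single assertion is the verificational step.

```python
def add_item_to_cart(cart, item):
    """Navigational step: gets us to the state we want to verify.
    No assertions here -- if adding items is itself broken, that
    failure belongs to whichever test verifies that feature."""
    cart.append(item)
    return cart

def test_cart_total_reflects_added_item():
    cart = []  # precondition: an empty cart
    add_item_to_cart(cart, {"name": "book", "price": 10})  # navigational
    assert sum(i["price"] for i in cart) == 10             # verificational
```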
In a nutshell, I'd put the 'good heuristics' as:
- Scripts
- Atomic - they shouldn't be larger than they need to be
- A fail should represent one single piece of functionality failing
- A single piece of functionality failing should result in one fail
- A test shouldn't encompass so much that one missing piece of functionality prevents another twenty pieces of functionality from being clearly tested/reported/tracked
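The atomicity points above can be sketched as follows. The login/profile checks are invented for illustration; the contrast is between one oversized test, where an early failure hides every later check, and atomic tests where one failure maps to one piece of functionality.

```python
# Anti-pattern: one test covering several pieces of functionality.
def test_everything():
    user = {"name": "ada", "logged_in": True}
    assert user["logged_in"]      # if this fails...
    assert user["name"] == "ada"  # ...this check never runs or reports

# Atomic alternative: each failure points at exactly one feature.
def test_login_flag_set():
    user = {"name": "ada", "logged_in": True}
    assert user["logged_in"]

def test_profile_name_stored():
    user = {"name": "ada", "logged_in": True}
    assert user["name"] == "ada"
```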
- Steps
- Clear - anyone should understand what you're saying
- Succinct - it's quick to read, quick to understand
- Contain one and only one action to perform per step
- Contain at most one item to check in the expected result
- No confusing logic, if possible
- Strictly no branching in verification steps (a single test cannot do X or Y; test x1 does X, test y1 does Y)
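The no-branching rule can be shown with a small sketch. The sort routine and its flag are invented for illustration: rather than one test that branches over both orderings (where a failure wouldn't tell you which behaviour broke), each behaviour gets its own test.

```python
def sort_items(items, descending=False):
    """Invented feature under test: a sort with an order flag."""
    return sorted(items, reverse=descending)

# One test per behaviour -- no "if descending then expect..." branching.
def test_sorts_ascending():
    assert sort_items([3, 1, 2]) == [1, 2, 3]

def test_sorts_descending():
    assert sort_items([3, 1, 2], descending=True) == [3, 2, 1]
```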
- Link to defects - if something is failing then link the test to a defect and vice versa
- Identify repeated content
- Repeated steps should be put in a central location and referenced, rather than written out repeatedly
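Centralising repeated steps might look like this in code. The login helper and user shape are invented for illustration: the repeated steps live in one place, and every test references the helper, so a change to the login flow means updating a single artifact.

```python
def make_logged_in_user():
    """The repeated login steps live here once; tests call this
    helper instead of writing the steps out each time."""
    return {"name": "ada", "logged_in": True}

def test_dashboard_greets_user():
    user = make_logged_in_user()
    assert f"Hello, {user['name']}" == "Hello, ada"

def test_logout_clears_flag():
    user = make_logged_in_user()
    user["logged_in"] = False
    assert not user["logged_in"]
```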
- A clear description of pre- and post-conditions
- Avoid verification in navigational steps where possible - it adds complexity, is disastrous should the step ever be excluded in the future (who will know to test it?), and will result in multiple 'fails' being reported for a single defect
- Data
- Pull test data out of tests and place it in a central place
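A minimal sketch of centralised test data, with names and values invented for illustration: the data sits in one module-level place, and tests reference it rather than embedding their own copies.

```python
# Central test data: one place to update when the data changes.
TEST_USERS = {
    "standard": {"name": "ada", "role": "user"},
    "admin":    {"name": "grace", "role": "admin"},
}

def test_standard_user_is_not_admin():
    assert TEST_USERS["standard"]["role"] != "admin"

def test_admin_user_is_admin():
    assert TEST_USERS["admin"]["role"] == "admin"
```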
- Have a script creation process
- Ensure assets are reviewed
- Have a script maintenance process
- Ensure assets are up to date
- Give tests attributes
- Status is a good attribute, so we can see whether the test is 'ready', or 'in design', or 'awaiting review', or whatever
- Priority - you are unlikely ever to have sufficient time to execute all tests, and when you need to choose which ones to run you are often already low on time. Decide beforehand
- Execute the tests
- !! it seems obvious, but it actually does need to be proactively scheduled and done
- Have traceability
- Meaningfully link tests' pass/fail statuses to functionality/requirements allowing pretty charts of the system's health to be generated
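One lightweight way to sketch this traceability in code: each test records which requirement it covers, and results are rolled up per requirement for reporting. The requirement IDs and the recording mechanism are invented for illustration; real test-management tools do this for you.

```python
RESULTS = []  # (requirement_id, passed) pairs collected during a run

def record(requirement_id, passed):
    RESULTS.append((requirement_id, passed))

def test_req_101_login():
    passed = True  # stand-in for a real verification
    record("REQ-101", passed)
    assert passed

def health_by_requirement(results):
    """Roll pass/fail up per requirement -- the raw material
    for those pretty charts of the system's health."""
    summary = {}
    for req, passed in results:
        summary.setdefault(req, []).append(passed)
    return {req: all(v) for req, v in summary.items()}
```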
- Store relevant knowledge (in steps, defects, or even wikis)
- If someone has to ask a question about anything, realize that they aren't going to be the only person to ponder it, so, put the answer somewhere useful
- If you have to ask a question, put the answer somewhere useful too
