Thursday, 27 June 2013

The value of testing / how can we approach it

I'm constantly torn between having and not having test assets. On one hand, it's valuable to show what is being tested, to be able to ramp up testing effort by delegating tests to a cheap execution source (e.g. students), and to remind the tester what actually requires verification.

Also, should we require automation, it can be useful to define what we want covered because, from past experience, automation teams under time pressure will be forced to make sacrifices if they cannot see clearly what truly needs to be covered. Actually, knowing what needs to be covered also requires some domain knowledge, and it may not be efficient for that knowledge to reside within the automation team.

Also, defining clearly what is to be tested allows us to know the scope and effort required. There are few organisations that will tolerate not having this information, and understandably so. The first question any Product Owner will ask is 'what did you test', and 'about three days worth, looks sweet' is rarely a satisfactory answer.

So tests seem like a good thing.

But on the other hand, they are out of date the moment they are written, they duplicate assets the developers already require and, when wrong, they can consume considerably more brain-time than working out the functionality from scratch.

Well, the big question is: should we use them, then? And if so, how?

Fundamentally the answer is always that it depends on what's going on. How big the project is. The priorities and deadlines of the company. So many variables. Let's just run with assuming we want to be efficient and pragmatic, and I'll float my ideal scenario for my ideal team.

Maybe it requires us to look at what the role of testing is anyway. Most organisations I have encountered seem to treat it as a check to ensure that software contains as few bugs as possible, and the role of the tester starts and ends there. I feel more is required, though. We should be building in quality from the start. Our goal should actually be to assist the developers in creating code with confidence, both now and in the future. So how can we do this? Waterfall, where we start testing weeks after development - lol - no. Developers need regular, meaningful feedback, fast. So this leads us to Continuous Integration, which I won't delve into here. It does mean that we identify some scenarios in a new feature and get the developers to create unit-sized tests to validate them.
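As a rough illustration of what a "unit-sized test for an identified scenario" might look like - the discount-calculator feature and its scenarios here are entirely hypothetical, invented just for this sketch:

```python
import unittest

# Hypothetical feature: a discount calculator. The function and both
# scenarios are invented for illustration only.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (100 - percent) / 100.0, 2)

class DiscountScenarios(unittest.TestCase):
    """Two scenarios identified for the feature, as unit-sized checks."""

    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(20.00, 10), 18.00)

    def test_invalid_discount_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(20.00, 150)
```

Run with `python -m unittest` on every commit and the developers get that regular, fast feedback without anyone lifting a finger.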

We must have some tests which we conduct to show that the software is working. I don't think we can ever escape that; I just can't see a way out of it. We need to verify the critical stuff before software goes out the door, so we should identify these as regression tests, and keep them to an absolute minimum. I'm loath to have documentation residing in multiple places (requirements, test assets, user manual), so I'd recommend putting a note inline in whatever the requirements doc is, saying that something needs testing, and what. This might sound like an oversimplification, but if you have some form of document which explains what should happen then there really shouldn't be any need to copy this information elsewhere - which is exactly what companies invest vast sums of money doing. Or make a mark elsewhere, but think it through and avoid duplication.
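To make that concrete, here is one hypothetical shape such an inline mark could take - the feature, the tag name and the wording are all invented, assuming a plain-text requirements doc:

```
Feature: Password reset
Users can request a password-reset email from the login page.

[TEST: regression] Verify the reset email arrives and the link works.
```

The point is simply that the 'what to test' lives right next to the 'what should happen', so there is only one place to keep up to date.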

So we have some manual tests owned by QA, and we have some unit-sized automated tests owned by Dev. The final role would be to perform a bit of ad hoc testing as a feature is completed, just to ensure that it is satisfactory - which doesn't necessarily have to be driven or conducted by a QA - and I think we're done. We will have some light documentation indicating which existing features are to be tested, how and when, and this should be sufficiently traceable for everyone who is so insistent on visibility (which is important). The only downside is that we're still duplicating the requirements into some tests in a sense - the automated tests - which, being a duplication, annoys me. Perhaps BDD would be better (briefly mentioned below, and elaborated on in a subsequent blog post).

Since there are some assets in one form or another, we need to track them. To help, it is useful to tag tests with how they're to be executed, so we'd want something to convey whether a test is a one-off feature test or a regular regression test, and whether or not it has been automated.
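One lightweight way to do that tagging - this is just a sketch, and the tag names ("feature", "regression", "automated") are my own assumptions, not a standard:

```python
# A minimal tagging scheme: a decorator attaches execution tags to a
# test function, and a selector filters tests by tag.
def tag(*labels):
    """Attach one or more execution tags to a test function."""
    def wrap(fn):
        fn.tags = set(labels)
        return fn
    return wrap

@tag("feature")
def test_new_widget_renders():
    pass

@tag("regression", "automated")
def test_login_still_works():
    pass

def select(tests, label):
    """Return the names of the tests carrying a given tag."""
    return [t.__name__ for t in tests if label in getattr(t, "tags", set())]

suite = [test_new_widget_renders, test_login_still_works]
```

So `select(suite, "regression")` hands you just the minimal regression set for a release, and `select(suite, "feature")` the one-off checks you can retire once the feature ships.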

I have worked in a place which used Bugzilla for bugs and stories - basically for all of our requirements and manual test assets. The great thing about using this for stories is that we could insert Use Cases, which we did in the form of 'Given...When...Then...', and we made the developers execute these manually before handing a story over to QA - easy to do because they were right there in the very simple requirements doc. But let's focus on the cool point there: they ran our tests! This was great. If they missed something obvious then shame on them. It worked phenomenally well. We would then prepare some of these to become BDD/unit tests (actually the majority of them, though in hindsight a smaller subset would have been wiser). Sadly this was only 99.9%-of-the-way-there BDD, as we still copied our Use Cases over to a text file which was interpreted and used to drive the NUnit tests. Still, pretty cool, and a bit more efficient than having thousands of detailed manual tests.
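The 'text file interpreted to drive the tests' idea can be sketched in a few lines. This is not the actual system described above (that was NUnit-based); it's a toy reconstruction, and the scenario, step phrases and step bodies are all invented:

```python
# Toy sketch: plain Given/When/Then lines are parsed and dispatched to
# registered step functions, which share a context dict.
STEPS = {}

def step(phrase):
    """Register a handler for a Given/When/Then phrase."""
    def wrap(fn):
        STEPS[phrase] = fn
        return fn
    return wrap

@step("a logged-in user")
def given_logged_in(ctx):
    ctx["user"] = "alice"

@step("they open their profile page")
def when_open_profile(ctx):
    ctx["page"] = "profile:" + ctx["user"]

@step("their name is displayed")
def then_name_shown(ctx):
    assert ctx["user"] in ctx["page"]

def run_scenario(text):
    """Execute each line of a scenario against the registered steps."""
    ctx = {}
    for line in text.strip().splitlines():
        keyword, _, phrase = line.strip().partition(" ")
        STEPS[phrase](ctx)
    return ctx

scenario = """Given a logged-in user
When they open their profile page
Then their name is displayed"""
```

Calling `run_scenario(scenario)` walks the three steps in order and fails loudly if any 'Then' assertion doesn't hold - which is essentially all a BDD runner does, minus the polish.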


Robot Framework, Basic Setup

Plug: Robot Framework is quick to set up, easy to write tests for, and super fast to triage failures in. The last point really sets it apart...
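To give a flavour before the setup details, here is a minimal Robot Framework test file. It uses only built-in keywords (Evaluate, Should Be Equal As Integers); the test name and tags are illustrative, not prescribed:

```
*** Settings ***
Documentation     A minimal smoke test using only built-in keywords.

*** Test Cases ***
Addition Works
    [Tags]    regression    automated
    ${result} =    Evaluate    1 + 1
    Should Be Equal As Integers    ${result}    2
```

Note the [Tags] setting: it gives you exactly the execution-mode tagging discussed earlier, and you can include or exclude tags at run time.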