Tuesday, 13 March 2012

Single Fastest Form of Manual Testing

The title of this post promises big things, and I assure the reader that I am not blowing smoke when I boast of possessing the secret to the fastest form of manual testing under the sun.
Well, we're all used to the model of the cost of fixing a bug versus the time elapsed since the code was created. It's extremely relevant to what I'm talking about, so here it is again.
 
Cost of fixing a bug
Well enough of the theory, on to the massive time-saving secret.

It's simple actually.

All it is is sitting down next to a developer for 5-10 minutes before they commit code and having a play with it.

This Over the Shoulder test approach was used at my last company with great success. The benefits include:
    
  • Bugs will be found while sitting there, even if the developer 'tested it'. Testers will look with more tenacity than most developers; I suspect this is largely because a developer's primary goal is to create, while a tester's is to break
  • Bugs found at this stage are cheap to fix. Undoubtedly cheaper than if they'd been found a few hours later
  • Frustration felt by the developer can be vented in a healthy fashion (I've heard some interesting expletives relating to requirements and time pressures whilst doing this exercise...)
  • Feedback to the developer is almost instantaneous / issues which would undoubtedly have arisen later incur no context-switching penalty for the developer
    • A subtle benefit: there is no bug process for a developer to get used to shuffling work in and out of when the feedback is this quick; it reinforces a sense of accountability
  • Conversation on dubious behavior is brought to light quickly
  • The developer is educated about what they missed without it being a big deal
  • No one needs to go through bug reports
  • The developer and tester have a chance to talk about what's been done - this simple chat can trigger all kinds of 'ohh, I didn't think about that' type statements, from either party
  • The developer/tester relationship is strengthened just that little bit extra by talking (unless they fought, which is good to get out of the system)

The representative graph of cost to fix over time becomes:
Cost of fixing a bug with Over the Shoulder moment added

The Poem of the Good Tester, pt 1

Since there is an abundance of blogged opinion on what makes a good tester, I'm merely going to reiterate others' thoughts via the slightly more delightful medium of poetry.

Our tale starts off quite slow
With a project all down with woe
No manager was humming
As no releases were coming
Ah yes, a problem that we all know

Upon inspection there's functional delay
Because Test are barring, oy vey
We ask 'what's the matter
As it's all in a tatter'
"The testers are wild!" you say

Well let's start off on this first thing
Listen closely, I'd like you to sing
Regardless management, dev, analyst
I don't give the faintest
It's to each other that you must Ping


Tune in for parts 1, 2, ..., n-1, n.

And if you're lucky, n+1 ...

The Fundamental Principles of a Good Test

Good tests are an oddity insofar as you will only appreciate the heuristics of a good test by encountering things which have gone wrong due to a 'bad' test.
Admittedly there aren't hard and fast rules which one should follow to make a test 'good', but merely some tricks for young players to be wary of, to ensure the test they are creating is fit for purpose in the present and the future.

Somewhat idealized end goals:

  • Have a bug result in one and only one test failing
  • Have only one artifact to update when a single piece of functionality is changed
  • Have others be able to follow your work quickly and with great comfort

New terminology
Verificational and navigational steps: in a test script there are really two types of test step. One verifies the behaviour of a specific feature which you are interested in; the other navigates you to a place where you can verify from. These are verificational steps and navigational steps respectively. For the latter it doesn't matter how the script navigates around or whether defects are encountered en route; the only thing a test script is concerned with is verifying what it was intended to verify.
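To make the distinction concrete, here's a minimal sketch in Python (the functions and the toy 'session' system under test are invented for illustration): the navigational steps are mere plumbing to get you somewhere, and the test exists only for its single verificational assert.

```python
# Navigational steps: how we get to the place we want to verify from.
# Hypothetical system - a session dict stands in for a real application.

def login(session, user):
    # Navigational: log a user in; the test doesn't care how
    session["user"] = user
    return session

def open_basket(session):
    # Navigational: still just plumbing to reach the basket page
    session["page"] = "basket"
    return session

def test_basket_shows_logged_in_user():
    session = open_basket(login({}, "alice"))
    # Verificational: the single behaviour this test was written to check
    assert session["user"] == "alice"
```

If the navigation route changes, only the helpers change; the verificational step (and the test's intent) stays put.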

In a nutshell, I'd put the 'good heuristics' as:
  • Scripts
    • Atomic - they shouldn't be larger than they need to be
      • A fail should represent one single piece of functionality failing
      • A single piece of functionality failing should result in one fail
      • A test shouldn't encompass so much that one missing piece of functionality prevents another twenty pieces of functionality from being clearly tested/reported/tracked
  • Steps
    • Clear - anyone should understand what you're saying
    • Succinct - it's quick to read, quick to understand
    • Contain one and only one action to perform per step
    • Contain at most one item to check in the expected result
    • No confusing logic, if possible
    • Strictly no branching in verification steps (a single test cannot do X or Y, test x1 can do X, test y1 can do Y)
    • Link to defects - if something is failing then link the test to a defect and vice versa
  • Identify repeated content
    • Repeated steps should be put in a central location and referenced, rather than written out repeatedly
    • A clear description of pre and post conditions
    • Avoid verification steps in referenced, reusable content where possible - they add complexity, are disastrous should a step ever be excluded in the future (who will know to test it?), and will result in multiple 'fails' being reported
  • Data
    • Pull test data out of tests and place it in a central place
  • Have a script creation process
    • Ensure assets are reviewed
  • Have a script maintenance process
    • Ensure assets are up to date
  • Give tests attributes
    • Status is a good attribute, so we can see whether the test is 'ready', or 'in design', or 'awaiting review', or whatever
    • Priority - you are unlikely to ever have sufficient time to execute all tests, and when you need to choose which ones to run you are often already low on time. Decide beforehand
  • Execute the tests
    • !! it seems obvious, but it actually does need to be proactively scheduled and done
  • Have traceability
    • Meaningfully link tests' pass/fail statuses to functionality/requirements allowing pretty charts of the system's health to be generated
  • Store relevant knowledge (in steps, defects, or even wikis)
    • If someone has to ask a question about anything, realize that they aren't going to be the only person to ponder it, so, put the answer somewhere useful
    • If you have to ask a question also put the answer somewhere useful
As I said, the above aren't hard and fast rules but more of a guide as to what's worth considering when creating your own test scripts.
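As a loose sketch of a few of these heuristics in code (all names and the toy system under test are invented): repeated steps live in one helper, test data sits in one central place, and each test is atomic, so one piece of functionality failing produces exactly one fail.

```python
# Central test data, pulled out of the tests themselves
TEST_DATA = {"valid_user": "alice", "invalid_user": ""}

def create_account(name):
    # Hypothetical system under test
    if not name:
        raise ValueError("name required")
    return {"name": name, "active": True}

def make_account(kind):
    # Repeated setup written once and referenced, not copy-pasted
    return create_account(TEST_DATA[kind])

def test_new_account_is_active():
    # Atomic: one and only one thing checked
    assert make_account("valid_user")["active"] is True

def test_empty_name_is_rejected():
    # A separate, single check - its failure points at one behaviour
    try:
        make_account("invalid_user")
        assert False, "expected rejection"
    except ValueError:
        pass
```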

A glance at HP’s QC11 ‘ALM’ - BPT module


For those short on time, the most significant material is in black.

Elaborated, this becomes Hewlett-Packard's Quality Center 11 'Application Lifecycle Management' Business Process Testing module.
A few days ago I was asked whether I knew what BPT was; sadly, all Google could refer me to were HP sales docs or inane forum posts. This is intended to be as fast an overview as possible of what QC's BPT is, with mini-delves into relevant areas. So, let's take a quick look at our surroundings:

Core Test Objectives
  1. As a business I want requirements to be verified.
  2. As a business I want the cost of script creation and script maintenance to be as low as possible.

HP’s QC has always achieved (1) but can struggle on (2) without super-stringent process adherence. BPT makes (2) a bit more structured and a bit easier to achieve.

A breakdown of the core BPT-relevant concepts in QC
  • Requirements
  • Tests
    • Manual Test Script / steps to verify something
      • Reusable Template / reusable steps to perform some action
    • Business Component / think of this as a reusable test script
      • Business Flow / think of this as another reusable test script
      • Business Process / think of this as the test scripts sellotaped together to verify something
  • Test run
    • Pass/fail...
The process to date, sans BPT, has typically been (a) make requirements > (b) make tests for requirements > (c) run the tests. BPT is essentially a replacement of (b).

Business Process Testing is a fast and efficient way of creating, organizing and executing tests. It is extremely similar to using manual reusable templates to build up a script, but it extends the idea by pairing seamlessly with automation, and including an easy, user friendly interface for building up tests.

The beauty of it is that your test set becomes like Lego. A few engineers make the bricks and then anyone can effortlessly create the tests. And I genuinely mean anyone: testers, business analysts, managers.
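To illustrate the Lego idea only (this is not HP's API, merely the composition model in miniature, with invented component names): a few engineers build the bricks as small reusable components, and anyone can chain them into a business process.

```python
# Reusable components ('Lego bricks') - built once by engineers.
# Each takes and returns a shared context, a stand-in for application state.
def launch_app(ctx):   ctx["app"] = "open";    return ctx
def log_in(ctx):       ctx["user"] = "alice";  return ctx
def place_order(ctx):  ctx["order"] = "ok";    return ctx

def business_process(*components):
    """Run components in order, threading the shared context through."""
    ctx = {}
    for step in components:
        ctx = step(ctx)
    return ctx

# Anyone - tester, analyst, manager - assembles the bricks into a test
result = business_process(launch_app, log_in, place_order)
```

The assembly line at the bottom is the part BPT hands to non-engineers; the bricks above it are where the real engineering effort lives.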

As with all of these things, simplifying an activity makes it easier to adopt, easier to follow, and reduces user error along the way.


 
New workflow

Create requirements > create business processes (replacing test scripts) > create relevant business components (the Lego to make the business process complete) > run the thing

Exciting part

Okay, so my background is automation and this part made me giddy with excitement. The business components (Lego blocks) can be automated using QTP, and in doing so work seamlessly within QC.

 
This means a person who wants a test for something but doesn’t understand automation can come along, throw the blocks together, and voila, they have created a new test. Going further, if they then press a little green button their test will execute immediately. It is an automated test thrown together in no time. Done. QED, in fact. *

*It is important to note that in no way should this be misconstrued to undermine the great deal of work which is needed in automating the business components in the first place; the individual components will take longer to automate than a normal test script because they must be so modular and robust. And not to forget that they will need to be maintained.

Overview

Pros
  • Test-creation extremely efficient
  • Test-maintenance extremely efficient
  • Anyone can create tests

Cons
  • Anyone can create tests
  • Some granularity might become cumbersome, or just get lost - would this be overkill for a thorough function test of a new UI feature?
Moots
  • Still need to be clever about how data is stored
  • Still need to be clever about how testing is done



 
Further research required

  • Is it really wise to have verification steps stored in a reusable document / what's the best practice to ensure that they are never accidentally deleted, resulting in a false pass
  • Does this get confusing with multiple data-dependencies between all of the Lego blocks / how can this be managed
  • Does this merely complement or actually replace the need for 'test scripts' in QC

Robot Framework, Basic Setup

Plug: Robot Framework is quick to set up, easy to write tests for, and super fast to triage failures in. The last point really sets it apart...
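As a taste of why setup is quick: a Robot Framework keyword library can be a plain Python class, with each method exposed as a keyword (the class name and calculator example here are my own invention, not from any standard library).

```python
# A minimal Robot Framework keyword library: just a Python class.
# Method names map to keywords - add_numbers becomes 'Add Numbers'.
class CalculatorLibrary:
    def __init__(self):
        self.result = 0

    def add_numbers(self, a, b):
        """Usable in a test case as:  Add Numbers    2    3"""
        # Robot passes arguments as strings by default, hence the int()
        self.result = int(a) + int(b)

    def result_should_be(self, expected):
        """Usable as:  Result Should Be    5  (raises = test fails)"""
        if self.result != int(expected):
            raise AssertionError(f"{self.result} != {expected}")
```

A `.robot` test file would then pull this in with `Library    CalculatorLibrary.py` and call the keywords directly.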